The United States has a network of about 295,000 miles of gas transmission pipelines that are owned and operated by approximately 900 operators. These pipelines are important to the nation because they transport nearly all the natural gas used, which provides about a quarter of the nation’s energy supply. Because they are mostly underground, pipelines do not experience many of the safety threats faced by other forms of freight transportation; but they are subject to failures that occur over time—such as leaks and ruptures resulting from corrosion or welding defects—and failures that are independent of time—such as damage from excavation, land movement, or incorrect operation.

For the most part, two types of pipelines transport gas products: (1) gas transmission pipelines and (2) local distribution pipelines. Gas transmission pipelines typically move gas products over long distances from sources to communities and are primarily interstate. They typically operate at a higher stress level (higher operating pressure in relation to wall strength). By contrast, local distribution pipelines receive gas from transmission pipelines and distribute it to commercial and residential end users. Local distribution pipelines, which are primarily intrastate, typically operate under lower-stress conditions. Local distribution companies may also operate small portions of transmission pipelines—typically under lower stress—and are therefore subject to the assessment and reassessment requirements of the Pipeline Safety Improvement Act of 2002.

Before the 2002 act, operators were subject to PHMSA’s minimum safety standards for the design, construction, testing, inspection, operation, and maintenance of gas transmission pipelines; these standards apply to all pipelines. However, this approach does not account for differences in the kinds of threats and the degrees of risk that pipelines face.
For example, pipelines located in the Pacific Northwest are more susceptible to damage from geologic hazards, such as land movement, than pipelines in some other areas of the country; but PHMSA’s safety standards do not take these threats into account in a systematic way. By contrast, the risk-based approach of the 2002 act—called the integrity management approach—requires pipeline operators to develop programs to systematically identify threats and mitigate risks to gas transmission pipeline segments located in highly populated or frequently used areas. In addition to PHMSA’s integrity management program, operators must still meet the minimum safety standards. As of December 2005 (latest data available), 447 gas pipeline operators reported to PHMSA that about 20,000 miles of their pipelines (about 7 percent of all gas transmission pipeline miles) lie in highly populated or frequently used areas. Individual operators reported that they have as many as about 1,600 miles and as few as 0.02 miles of transmission pipeline in these areas.

Under PHMSA’s regulations, gas pipeline operators may use any of three primary approaches to conduct baseline assessments on pipeline segments lying in highly populated or frequently used areas.

In-line inspection: In-line inspection involves running a specialized tool through the pipeline to detect and record anomalies, such as metal loss and damage. In-line inspection allows operators to determine the nature of any problems without either shutting down the pipeline for extended periods or potentially damaging the pipeline, as in hydrostatic testing (described below). In-line inspection devices can be run only from facilities established for launching and retrieving them. These launching and retrieval locations may extend beyond highly populated or frequently used areas. Operators will typically gather information along the entire distance between launching and retrieval locations to gain additional safety information; this is called over-testing.

Direct assessment: Direct assessment is a nonintrusive, above-ground instrument inspection that uses two or more types of diagnostic tools, such as a close interval survey, at predetermined intervals along the pipeline. Once the data are analyzed, the operator excavates and inspects segments of the pipeline suspected to have safety threats.

Hydrostatic testing: Hydrostatic testing entails sealing off a portion of the pipeline, removing the gas product, filling it with water, and increasing the pressure of the water above the rated strength of the pipeline to test its integrity. If the pipeline leaks or ruptures, the pipeline is excavated to determine the cause of the failure. Operators must shut down pipelines to perform hydrostatic testing, and the high-pressure testing can itself damage the pipeline. Finally, operators must be able to dispose of large quantities of water in an environmentally responsible manner.

Under PHMSA’s regulations, which incorporate voluntary industry consensus standards for managing the system integrity of gas pipelines, operators must reassess their gas transmission pipeline segments for safety threats overall at least every 10, 15, or 20 years (consistent with industry consensus standards), depending on the condition of the pipelines and the stress under which the pipeline segments are operated. PHMSA’s regulations allow operators to limit the statutorily required 7-year reassessment to corrosion damage. In performing reassessments to meet the 7-year requirement, operators may employ a technique called confirmatory direct assessment. This technique is similar to direct assessment; however, operators are required to use only one type of assessment tool, rather than the at least two types required under direct assessment.
According to PHMSA, it allowed this more limited assessment because the 7-year reassessment for corrosion confirms the acceptable integrity of a gas transmission pipeline, already ensured by assessments and reassessments for safety threats conducted at 10-, 15-, or 20-year intervals under the industry consensus standards incorporated in the agency’s regulations. (See fig. 2.) About 2010, operators will be expected to begin reassessing some segments of their pipelines for corrosion under the 7-year reassessment requirement while they are completing baseline assessments of other segments—a period called “the overlap.” It is important to note that the reassessment intervals under the industry consensus standards, the 7-year reassessment requirement for corrosion, and PHMSA’s regulations for time-dependent threats represent the maximum number of years between reassessments. If pipeline conditions dictate more frequent reassessments—for example, every 5 or fewer years—then pipeline operators must reassess at those shorter intervals to comply with PHMSA’s regulations. In addition, between reassessments, operators must continually ensure that their gas transmission pipelines are safe. PHMSA’s regulations require all operators—whether or not their pipelines are located in highly populated or frequently used areas—to patrol their pipelines, survey for leakage, maintain valves, ensure that corrosion-preventing cathodic protection is working properly, and take prevention and mitigation measures to prevent excavation damage. PHMSA, within the Department of Transportation, attempts to ensure the safe operation of pipelines through regulation, industry consensus standards, research, education (e.g., to prevent excavation-related damage), oversight of the industry through inspections, and enforcement, when safety problems are found.
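The timing interaction behind "the overlap" can be sketched numerically. The sketch below is our own illustration, not part of the report: the constant names and function are assumptions, and it simply shows why segments baselined early in the 2003-2012 window come due for their 7-year corrosion reassessment around 2010, while baseline work on other segments is still under way.

```python
# Illustrative only: constants and names are our assumptions, not PHMSA's.
BASELINE_YEARS = range(2003, 2013)  # 10-year baseline assessment window
REASSESS_INTERVAL = 7               # statutory 7-year corrosion reassessment

def first_reassessment_year(baseline_year: int) -> int:
    """Latest year the first 7-year corrosion reassessment is due."""
    return baseline_year + REASSESS_INTERVAL

# Years in which early reassessments fall inside the baseline window
# ("the overlap").
overlap_years = sorted(
    {first_reassessment_year(y) for y in BASELINE_YEARS} & set(BASELINE_YEARS)
)
print(overlap_years)  # [2010, 2011, 2012]
```

A segment baselined in 2003 is due again by 2010, so the last three years of the baseline window carry both kinds of work.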
PHMSA employs about 165 people in its pipeline safety program, about half of whom are pipeline inspectors who inspect operators’ implementation of integrity management programs for gas and hazardous liquid (e.g., oil, gasoline, and anhydrous ammonia) pipelines, in addition to other more traditional compliance programs. PHMSA currently has 22 inspectors trained to conduct integrity management inspections, of whom 9 are devoted exclusively to the program. In addition, PHMSA expects to be assisted by about 180 inspectors in 46 states and the District of Columbia in overseeing intrastate natural gas transmission pipelines.

Periodic reassessments of pipeline threats are beneficial because threats—such as the corrosive nature of the gas being transported—can change over time. However, the findings of baseline assessments conducted to date and the generally safe condition of gas transmission pipelines suggest that the 7-year requirement appears to be conservative. Most operators of gas transmission pipelines reported to PHMSA that their baseline assessments disclosed few or no problems; in total, operators reported 340 problems for which immediate repairs have been made. This is encouraging because these pipeline segments are supposed to be the riskiest and few had been systematically assessed until now. The industry’s safety record is also generally good: no corrosion-related incidents resulting in deaths or injuries have occurred in the past 5-1/2 years (from January 2001 through early July 2006) anywhere in the nation, let alone in highly populated or frequently used areas. It is therefore likely to be safe in most cases to allow longer maximum intervals that coincide with industry consensus standards. PHMSA and state pipeline agencies plan to inspect all operators’ integrity management activities, which should serve as a safeguard if longer reassessment intervals for corrosion are permitted.
Through December 2005 (latest data available), 76 percent of the operators (182 of 241) reporting baseline assessment activity to PHMSA told the agency that their gas transmission pipelines were in good condition and free of major defects, requiring only minor repairs. (These assessments covered about 6,700 miles, or about one-third of the nationwide total to be assessed). The remaining 59 operators reported 340 problems for which immediate repairs have been completed. (See fig. 1.) Fifty-two operators (21 percent) reported nine or fewer problems for which immediate repairs have been completed; and seven operators (3 percent) reported 10 or more problems. Most of the problems stem from the seven operators reporting 10 or more problems and concern only a small portion of their gas transmission pipelines. Specifically, these seven operators represent nearly 60 percent of the total problems requiring immediate repairs, and the problems occurred in only 7 percent of 6,700 miles of baseline assessments conducted. Since PHMSA does not require that operators report to it the nature of the problems, we do not know how many of the 340 problems, if any, were due to corrosion. We contacted 52 operators about the baseline assessments they have completed and their plans for the rest, and the results were largely consistent with the overall data reported to PHMSA. Forty-four of these operators have begun baseline assessments, and 37 of these 44 (84 percent) told us that they found few safety problems that required reducing pipeline pressure and performing immediate repairs in response to baseline assessments in highly populated or frequently used areas. These 44 operators have assessed about 4,100 miles of gas transmission pipeline, representing about 61 percent of the 6,700 miles of baseline assessment results reported to PHMSA and about 21 percent of the total number of pipeline miles in highly populated or frequently used areas nationwide. 
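As a quick cross-check of the operator counts above (our own arithmetic, not a calculation in the report), the reported figures are internally consistent:

```python
# Cross-check (our construction) of the reported baseline-assessment figures.
operators_reporting = 241
good_condition = 182            # pipelines in good condition, minor repairs only
with_problems = operators_reporting - good_condition

assert with_problems == 59                                      # "remaining 59 operators"
assert round(good_condition / operators_reporting * 100) == 76  # "76 percent"
assert 52 + 7 == with_problems  # 52 with nine or fewer problems, 7 with 10 or more
print(round(7 / operators_reporting * 100))  # 3 -> "seven operators (3 percent)"
```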
It is encouraging that the majority of operators nationwide reported few or no problems involving immediate repairs, because (1) operators are to assess pipeline segments facing the greatest risk of failure from leaks or ruptures first, as required by the 2002 act, and (2) 54 percent of the operators we contacted (28 of 52) had not conducted risk-based assessments of their pipeline segments for safety threats prior to the integrity management program. Although the PHMSA regulations focus the 7-year reassessment requirement on corrosion because it is the most frequent cause of time-dependent pipeline incidents, the industry has had a good safety record prior to and during the initial years of integrity management. It is not possible to determine from summary historical data published by PHMSA which incidents occurred in highly populated or frequently used areas. However, nationwide, these incidents are relatively rare. Over the past 5-1/2 years (from January 2001 through early July 2006), there were 143 corrosion-related incidents over the 295,000-mile transmission system (26 per year, on average)—none of which resulted in death or injury. In addition, according to PHMSA, during the first 2 years of integrity management (2004 and 2005), operators reported that corrosion caused 49 leaks, 16 failures, and 2 incidents involving significant property damage, but no fatalities or injuries, in highly populated or frequently used areas. Both the positive results found during baseline assessments conducted to date and the industry’s overall good safety record suggest that the gas transmission pipeline operators that have thus far performed baseline assessments are, overall, doing a good job of managing corrosion. Further, since operators are required to identify and repair significant problems, the overall safety and condition of the gas transmission pipeline system should be enhanced before reassessments begin toward the end of the decade.
Because many gas transmission pipelines had never been assessed before integrity management, operators we contacted pointed out that the new knowledge gained through baseline assessments represents one of the greatest benefits of the integrity management program. They also support reassessments, in part because all operators are subject to the same requirements. However, most support a risk-based reassessment requirement, consistent with overall integrity management, over the fixed 7-year requirement prescribed by the 2002 act. Operators also told us they prefer a risk-based reassessment requirement that is based on research and historical information. Most operators told us they prefer reassessing pipelines based on the characteristics and conditions of the pipeline rather than on the 7-year requirement prescribed in the 2002 act. About 80 percent of the 52 operators that we contacted prefer that reassessment intervals be based on the condition and characteristics of the pipeline segment. About half of these operators (28) expressed a preference for the industry consensus standard developed by the American Society of Mechanical Engineers (ASME B31.8S-2004) for setting reassessment intervals for time-dependent threats because it incorporates a risk-based approach (for pipeline failure) and is based on science and engineering knowledge. This standard sets reassessment intervals at a maximum of 10 years for high-stress pipeline segments, 15 years for medium-stress segments, and 20 years for low-stress segments. Maximum reassessment intervals, such as those in the industry consensus standard, incorporate such risk concepts as built-in safety factors (e.g., wall stress, test pressure, or predicted failure) and pipeline conditions. The maximum intervals of 10, 15, and 20 years are based on worst-case corrosion growth rates.
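The interval rule described above can be expressed as a small lookup. This is our own sketch of the logic, with assumed names and labels rather than ASME B31.8S terminology; it also reflects the point, made elsewhere in the report, that these values are ceilings that pipeline conditions can shorten.

```python
# Sketch of the consensus-standard interval ceilings; labels are our own.
MAX_INTERVAL_YEARS = {"high": 10, "medium": 15, "low": 20}

def max_reassessment_interval(stress_level, condition_driven_years=None):
    """Maximum reassessment interval for a segment, in years.

    The consensus-standard values are ceilings: if pipeline conditions
    dictate a shorter interval, the shorter interval governs.
    """
    ceiling = MAX_INTERVAL_YEARS[stress_level]
    if condition_driven_years is not None:
        return min(ceiling, condition_driven_years)
    return ceiling

print(max_reassessment_interval("high"))    # 10
print(max_reassessment_interval("low", 5))  # 5 -- conditions govern
```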
The industry consensus standards were developed in 2001 and updated in 2004 based on, among other things, (1) the experience and expertise of engineers, consultants, operators, local distribution companies, and pipeline manufacturers; (2) more than 20 technical studies conducted by the Gas Technology Institute, ranging from pipeline design factors to natural gas pipeline risk management; and (3) other industry consensus standards, including the National Association of Corrosion Engineers standards, on topics such as corrosion. Contributors have been practicing aspects of risk-based assessments for over 10 years. This standard serves as a foundation for most sections of PHMSA’s integrity management regulations. The mechanical engineering society’s standard was reviewed by the American National Standards Institute. The institute found that the standard was developed in an environment of openness, balance, consensus, and due process and therefore approved it as an American National Standard. While the mechanical engineering standards are voluntary for the industry, PHMSA incorporated them as mandatory in its gas transmission integrity management regulations. The mechanical engineering society’s standard for setting reassessment intervals is not the only industry consensus standard in PHMSA’s integrity management regulations. The regulations incorporate other industry consensus standards for using direct assessment for corrosion, calculating pipeline wall strength, and for determining temporary reductions in operating pressure. In addition, it is federal policy to encourage the use of industry consensus standards: the Congress expressed a preference for technical standards developed by consensus bodies over agency-unique standards in the National Technology Transfer and Advancement Act of 1995. 
The Office of Management and Budget’s Circular A-119 provides guidance to federal agencies on the use of voluntary consensus standards, including the attributes that define such standards. Of the 52 operators we contacted, 44 had undertaken baseline assessments, and 23 of the 44 have calculated their own reassessment intervals. Twenty of these 23 operators indicated that, based on the conditions they identified during their baseline assessments, they would reassess their gas transmission pipelines at maximum intervals of 10, 15, or 20 years—as allowed by industry consensus standards—if the 7-year reassessment requirement were not in place. The remaining three operators told us that they would reassess their pipelines at intervals shorter than the industry consensus standards but longer than 7 years because of the conditions of their pipelines. These results add weight to our assessment that the 7-year requirement may be conservative for most pipelines. Industry consensus standards allow maximum reassessment intervals for time-dependent threats of 10, 15, or 20 years only if the operator can adequately demonstrate that corrosion will not become a threat within the chosen time interval. If an operator cannot demonstrate that corrosion does not pose a threat (e.g., because the gas being shipped is more corrosive than gas shipped previously), then the reassessment must occur sooner, perhaps at 7 or even 5 or fewer years. Furthermore, according to industry consensus standards, it typically takes longer than the 10, 15, or 20 years specified in the standard for corrosion problems to result in a leak or rupture. As a means of ensuring that assessments and reassessments are done competently, PHMSA regulations and industry consensus standards require that operators develop and document the steps they take to ensure the quality of these activities. This includes ensuring that the persons involved are competent and able to carry out the activities.
In addition, operators are encouraged to conduct internal audits of their quality control approaches and third-party reviews of their entire integrity management programs. It is important to note that, in addition to periodic reassessments, operators must perform prevention and mitigation activities on a continual basis. PHMSA regulations require that all operators of gas transmission pipelines, including those outside highly populated or frequently used areas, patrol their pipelines, survey for leakage, maintain valves, ensure that corrosion-preventing cathodic protection is working properly, and take other prevention and mitigation measures. Finally, PHMSA and the state pipeline agencies are inspecting the integrity management plans mandated by the 2002 act, which set out operators’ gas transmission pipeline reassessment approaches and intervals, among other things, to ensure that operators continually and appropriately assess the conditions of their pipeline segments in highly populated or frequently used areas. These inspections should serve as a check on whether operators have identified the threats facing these pipeline segments and determined appropriate reassessment intervals. PHMSA and the states have begun inspections and expect to complete most of the first round no later than 2009. As of June 2006, PHMSA had completed 20 of about 100 inspections and, as of January 2006, states had begun or completed 117 of about 670 inspections. Initial results from these inspections show that operators are doing well in assessing their pipelines and making repairs, but some need to better document their programs. Based on the inspection results to date, PHMSA and the states have not found many issues that warranted enforcement actions. Although some uncertainty exists, sufficient resources may be available for operators to reassess their gas transmission pipelines.
Operators and inspection contractors we contacted told us that the services and tools needed to conduct periodic reassessments will likely be available to most operators. However, operators expressed uncertainty about whether qualified direct assessment and confirmatory direct assessment contractors will be available. This is important because operators plan to use these methods to reassess about half of their pipeline mileage. Contractors told us that they will likely have the capacity to meet demands, even during periods when baseline assessments and reassessments may overlap. The severity of this overlap, however, remains unclear. Although operators that we contacted expect baseline assessment and reassessment activity to decrease from 2010 through 2012, a poll of members by the Interstate Natural Gas Association of America (INGAA) and the American Gas Association (AGA) suggests that activity will rise markedly. Thirty-seven of the 52 operators (71 percent), one in-line inspection association, and all four inspection contractors providing direct assessment or in-line inspection tool services that we contacted told us that the services and tools needed to conduct periodic reassessments will likely be available to most operators. All but 3 of the operators reported that they plan to rely on contractors to conduct all or a portion of their reassessments, and 9 of 52 operators have signed, or would like to sign, long-term contracts that extend contractor services through a number of years. However, few have scheduled reassessments with contractors, as the reassessments are several years in the future and operators are concentrating on baseline assessments.
The 48 operators that reported both baseline and reassessment schedules told us that they plan to reassess 42 percent of their gas transmission pipeline miles in highly populated or frequently used areas using in-line inspection and 54 percent of their miles using direct assessment or confirmatory direct assessment methods. (See fig. 3.) Operators expect to assess only 4 percent of their pipeline miles using hydrostatic testing, for several reasons: (1) this form of testing requires shutting down their pipelines, (2) other assessment methods yield more robust information about the condition of their pipelines, (3) hydrostatic testing can weaken or damage pipelines, and (4) large quantities of water must be disposed of in an environmentally responsible manner. The Inline Inspection Association and the two in-line inspection contractors that we contacted told us that sufficient capacity exists within the industry to meet current and future operator demands. However, operators and inspection contractors expressed uncertainty about whether qualified direct assessment and confirmatory direct assessment contractors will be available. This is important because operators plan to use these methods to reassess about half of their gas transmission pipeline mileage. In contrast to in-line inspection (an established, less intrusive practice that 27 of 52 operators had used on their pipelines at least once before the integrity management program), two direct assessment contractors told us that expertise in direct assessment is limited. One said that newer contractors coming into the market to meet demand may not be qualified. The operators planning to use direct assessment are generally those with smaller-diameter pipelines that cannot accommodate in-line inspection tools.
At a recent INGAA integrity management workshop, in-line inspection and direct assessment inspection contractors emphasized that, although they currently have the resources to meet operator demand and continue to train new inspectors, operators need to plan ahead to ensure resource availability for future years, when resources may be more constrained. The workshop also highlighted technological developments for assessment tools that will make assessments more efficient. Other stakeholders have told us that there are new tools being developed that will enable smaller-diameter pipelines to accommodate in-line inspection tools. For example, the Department of Energy is developing tiny robotic sensors that can detect flaws in plastic natural gas pipelines without interrupting the flow of gas. An industry concern about the 7-year reassessment requirement is that operators will be required to conduct reassessments starting no later than 2010, while they are still in the 10-year period (2003 through 2012) for conducting baseline assessments. Industry is concerned that this could create a spike in demand for contractor services, and operators would have to compete for the limited number of contractors to carry out both. As a result, operators might not be able to meet the reassessment requirement. The information provided by the operators that we contacted shows a marked overall increase in assessment and reassessment activity in 2010 (a 16 percent increase over 2009 activity) and then a gradual decrease of activity through 2012. (See fig. 4.) Operators expect this decrease because they plan to have completed a large number of baseline assessments between 2005 and 2007 in order to meet the statutory deadline for completing at least half of their baseline assessments by December 2007 (3 years before the predicted overlap). 
In contrast, INGAA and AGA, after polling their members in 2006, found a steady overall increase in total expected baseline assessments and reassessments during the overlap period. INGAA and AGA found that baseline assessments and reassessments would start to increase in 2009 and rise steadily through 2012. (See fig. 5.) Assessment activity would increase by 5 percent in 2010 over the 2009 level; in 2011, by 8 percent over the preceding year; and in 2012, by 10 percent over the 2011 level. The difference between our findings and those of INGAA and AGA is not easy to explain. (See fig. 6.) Both efforts reported on comparable numbers of operators (47 for us and 56 for INGAA/AGA) and total transmission pipeline miles (154,000 for us and 180,000 for INGAA/AGA). To some extent, the difference may be due to variations in the pipeline operators that responded to the two efforts. About 72 percent of the operators we polled were different from those polled by INGAA and AGA. However, even where both efforts collected information from the same operators, the information was sometimes markedly different. Another reason for the difference may be methodology. For example, we gathered our information through semistructured interviews with a systematically selected set of pipeline operators; the selection considered, among other things, both larger and smaller transmission pipeline operators and the local distribution companies with the highest proportion of pipeline miles in highly populated or frequently used areas relative to total system miles. INGAA and AGA gathered their information by sending a self-administered data collection instrument to their members and reported results based on those members who responded. In addition, INGAA and AGA asked operators for data somewhat differently than we did, which may have led to some differences in results.
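To make the INGAA/AGA projection concrete, the quoted growth figures can be compounded into an activity index (2009 = 100). This is our own back-of-the-envelope construction from the percentages above, not a calculation that appears in either study.

```python
# Our construction: compound the INGAA/AGA year-over-year growth figures.
def project(base, yearly_pct_changes):
    """Apply successive percentage changes to a base activity level."""
    levels = []
    level = base
    for pct in yearly_pct_changes:
        level = round(level * (1 + pct / 100), 1)
        levels.append(level)
    return levels

# INGAA/AGA poll: +5% (2010), +8% (2011), +10% (2012), indexed to 2009 = 100.
ingaa_aga_index = project(100, [5, 8, 10])
print(ingaa_aga_index)  # [105.0, 113.4, 124.7]
```

By 2012 the polled activity level sits roughly a quarter above the 2009 level, in contrast to the decline after 2010 that the operators we contacted expect.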
Evidence from baseline assessments, the industry’s overall safety record, the existence of accepted risk-based assessment standards, and PHMSA’s actions to inspect how operators identify corrosion threats to their pipelines and set reassessment intervals together suggest that a risk-based approach to reassessing gas transmission pipeline segments for corrosion can achieve the safety objectives of the 2002 act. Evidence gathered to date suggests that operators that have thus far performed baseline assessments are doing a good job overall of managing corrosion. Since the large majority of pipeline operators that we contacted had not systematically assessed their transmission pipelines for corrosion risks before the onset of the gas integrity management program, if corrosion were a rapidly growing problem, we would have expected a larger proportion of operators to report problems requiring immediate repairs. This was not the case. Furthermore, adopting a risk-based approach to setting reassessment intervals does not automatically allow operators to reassess their pipeline segments less frequently than under the 7-year requirement. Rather, if conditions warrant, an operator would be required to reassess a pipeline segment as frequently as needed—perhaps even more frequently than every 7 years. Finally, a risk-based reassessment requirement would be consistent with the overall approach to integrity management that the Congress put in place with the 2002 act. Safeguards are in place to ensure that gas transmission operators determine reassessment intervals competently. PHMSA regulations and industry consensus standards require that operators ensure that the persons involved have the experience and expertise to carry out the activities. Operators are also encouraged to conduct internal audits of their quality control approaches and third-party reviews of their integrity management programs.
PHMSA and the state pipeline agencies are inspecting operators’ compliance with integrity management reassessment requirements, among other things, to ensure that operators continually and appropriately assess the conditions of their gas transmission pipeline segments in highly populated or frequently used areas. In summary, the available evidence supports a conclusion that a risk-based reassessment approach based on technical data, risk factors, and engineering analyses can achieve the 2002 act’s safety objectives. Such an approach would provide for reassessments to be tailored to the corrosion threats faced by each pipeline segment and would not result in reassessments that are either too infrequent or premature. Evidence to date suggests that, based on the assessments completed and the immediate repairs that followed, gas transmission pipelines are generally in good condition, and safeguards are in place to ensure that operators determine reassessment intervals appropriately. In our view, it is not necessary to wait until baseline assessments and a round of reassessments have been completed before considering whether to retain or modify the 7-year reassessment requirement. To better align reassessments with safety risks, the Congress should consider amending section 14 of the Pipeline Safety Improvement Act of 2002 to permit pipeline operators to reassess their gas transmission pipeline segments at intervals based on technical data, risk factors, and engineering analyses. Such a revision would allow PHMSA to establish maximum reassessment intervals and to require shorter reassessment intervals as conditions warrant. We provided a draft of this report to the Departments of Transportation and Energy for their review and comment. The Department of Transportation generally agreed with the report’s findings. The Department of Energy had no comments.
We are sending copies of this report to congressional committees and subcommittees with responsibility for transportation safety issues; the Secretary of Transportation; the Secretary of Energy; the Administrator, PHMSA; the Assistant Administrator and Chief Safety Officer, PHMSA; the Deputy Secretary for Natural Gas and Petroleum Technology, Department of Energy; and the Director, Office of Management and Budget. We will also make copies available to others upon request. This report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Staff who made key contributions to this report are listed in appendix III.

As the Pipeline Safety Improvement Act of 2002 was being considered, the Interstate Natural Gas Association of America (INGAA) analyzed the possible impact of requiring assessments and periodic reassessments and found that significant disruptions in the natural gas supply and considerable price increases could occur. A more moderate impact was predicted in three subsequent analyses—(1) two reviews of the INGAA study performed for the Pipeline and Hazardous Materials Safety Administration (PHMSA) by the John A. Volpe National Transportation Systems Center and by the Department of Energy during the congressional debate over the pipeline bill, and (2) a post-act PHMSA evaluation of its implementing regulations. A waiver provision was included in the 2002 act after INGAA’s study was completed; this may serve as a safety valve if it appears that the natural gas supply may be disrupted. Finally, our discussions with 50 natural gas pipeline operators also suggest a more moderate potential impact than INGAA found.
INGAA’s study estimated that periodic assessments under integrity management could lead to a monthly reduction in natural gas supply of about 1 to 3 percent, along with price increases to customers, among others, ranging from $382 million to over $1 billion (in 2002 dollars) from 2002 through 2010, depending on the frequency of assessments. Most of this price increase would be due to supply disruption and some due to capital expenditures. INGAA considered the monthly reduction in supply to be significant because it assumed that gas transmission pipelines would be removed from service during testing and that some areas of the country would be more vulnerable to supply disruptions than others. Both Volpe’s and the Department of Energy’s 2002 reviews of the INGAA study concluded that gas transmission pipelines would not be significantly affected by periodic assessments. The reviews, however, did not attempt to quantify overall estimates of gas disruptions or price impacts. Rather, they examined the major assumptions in the INGAA study and discussed whether the study’s results seemed reasonable. PHMSA’s final regulatory evaluation, which was completed in 2004 to assess the impact of PHMSA’s regulations on implementing the 2002 act, concluded that transmission pipelines’ natural gas supply may be somewhat disrupted as a result of assessments and that cost increases may occur. However, PHMSA acknowledged that it could not estimate the impact of assessments on gas prices. In general, the reviews found that the INGAA study’s estimates of price impacts represent a worst-case scenario because of several overly pessimistic assumptions. For example, the INGAA study underestimated the ability of the pipeline network to mitigate disruptions. INGAA assumed that pipeline assessments would generally reduce pipeline capacity temporarily, thereby disrupting the supply and increasing the price of natural gas. 
Yet, both Volpe’s and the Department of Energy’s reviews found that the INGAA study did not sufficiently account for redundancies in the nation’s natural gas transmission pipeline network. Redundancies enable operators to mitigate potential disruptions during assessments by rerouting gas through the network. Operators we contacted that have higher-stress gas transmission pipelines generally indicated that their pipeline infrastructure is versatile and includes such redundancies as parallel pipelines or looping capabilities that allow gas to flow to customers while portions of the pipeline are assessed or repaired. (See fig. 7.) Operators of lower-stress pipelines reported that they typically use a set of laterals, which feed an interconnected gas distribution system and allow them to plan around disruptions. In addition, lower-stress operators can use liquid or compressed natural gas that is located at their facilities or transported by trucks to specified locations. Forty-four of the 50 natural gas operators (88 percent) that we contacted have some type of alternative gas supply, such as storage facilities and other gas suppliers, to meet customers’ short-term needs.

The study also assumed that a large amount of transmission mileage would require assessments because of over-testing. The INGAA study concluded that the number of gas transmission pipeline miles within highly populated or frequently used areas is only about 5 percent of the total mileage in the U.S. Nonetheless, the study assumed that over 80 percent of mainline interstate pipeline miles would require assessing, because the pipeline miles located within the highly populated areas are scattered throughout the pipeline system, and in-line inspection tools can be inserted and retrieved only at certain locations that may lie outside highly populated or frequently used areas.
As a result, the study assumed that operators of these pipelines would assess over 1,500 percent more miles than are within the highly populated areas. On the basis of comments from industry groups, PHMSA’s regulatory evaluation assumed that operators would assess about 625 percent more miles when using in-line inspection testing and about 25 percent more miles when using hydrostatic testing, but no over-testing when using the direct assessment method. Baseline assessment results to date seem to support the lower over-testing estimate: as of December 31, 2005, on the basis of performance reports submitted to PHMSA, operators assessed about 650 percent more miles overall than are located in highly populated or frequently used areas.

The study also assumed that only hydrostatic testing would be used on delivery laterals. The INGAA study predicted that operators would use only hydrostatic testing on lateral gas transmission pipelines because it assumed that very few laterals can accommodate in-line testing. Under hydrostatic testing, water pressure is used to test the condition of pipelines; therefore, all of the capacity of a pipeline segment must be removed from service for a period of time. Volpe’s review concluded that this particular assumption represents the worst possible impact of assessments on lateral pipelines because it does not allow for the use of in-line testing or direct assessment. Based on discussions with operators and public comments on PHMSA’s draft regulatory analysis, the PHMSA regulatory evaluation also assumed that few operators would use hydrostatic testing. INGAA’s study also did not address the development of new technologies that could allow in-line inspection of smaller diameter pipelines. As discussed earlier, new technology is being developed. Finally, operators we contacted reported that they do not plan to use hydrostatic testing extensively. As discussed earlier, only about 4 percent of the mileage will be reassessed using hydrostatic testing.
This testing will typically be over relatively small lengths of pipeline (from 0.8 to 331 miles).

The study also did not incorporate the ability of operators to obtain waivers. The INGAA study did not consider the possible impact of a waiver provision in the 2002 act on maintaining the natural gas supply. This was understandable because the waiver provision was added to the bills under consideration after the INGAA study was completed. The act allows PHMSA to waive or modify any requirement for operators to conduct reassessments when they need to maintain product supply, as long as doing so is consistent with pipeline safety. Twenty-one of the 50 natural gas operators (42 percent) that we contacted said that they would consider applying for a waiver, if needed, and 23 (46 percent) told us that they did not plan to apply for a waiver. Three of the operators were uncertain, and the remaining three operators did not provide us with a response. Fourteen of the 26 operators that either did not plan to apply for a waiver or were unsure about doing so said that it is too early to determine the need for applying for waivers.

Operators have obtained the necessary equipment to conduct assessments or have developed plans for handling potential natural gas supply disruptions. Pipeline operators we contacted told us that assessments and repairs of even their riskiest gas transmission pipelines have not significantly disrupted the natural gas supplied to customers, such as local distribution companies and power plants. These 50 natural gas transmission operators and local distribution companies had assessed about 4,100 miles of pipeline in highly populated or frequently used areas, as of December 2005 (latest data available)—or about 21 percent of the total gas transmission mileage in these areas in the nation and about 62 percent of the pipeline mileage located in frequently used or highly populated areas assessed to date.
Of the 44 operators that have begun baseline assessments, 26 (59 percent) indicated that their assessments and repairs did not require them to shut down their pipelines or reduce their operating pressure. Sixteen operators (36 percent) reported minor disruptions in their gas supply because they temporarily shut down pipelines and reduced operating pressure to conduct assessments or repairs. These operators told us that they used alternative gas sources, such as liquefied natural gas, to sustain their customers’ gas supply. The remaining two operators (5 percent) were located in regions that have limited excess gas capacity. Both operators reported that they could not meet all of the natural gas needs of their customers when their pipelines were shut down to perform assessments or repairs. Some customers, especially those with interruptible contracts, did not receive gas from the pipelines for several days, but they were able to obtain gas from alternative sources. Eleven of the 44 operators were located in regions that have limited excess gas capacity—the Northeast, the Rocky Mountains, and the Southwest—and reported minor supply disruptions. Five of the 11 operators—all of which operate lower-stress gas transmission pipelines—reported that none of these disruptions in natural gas supply were caused by assessments or repairs. Four operators reported instances in which immediate repairs caused a reduction in operating pressure; however, they maintained natural gas supply by relying on alternative gas sources. Since PHMSA does not require that operators report to it the nature of the problems, we do not know how many immediate repairs, if any, were due to corrosion. And, as previously mentioned, 2 of the 11 operators reported natural gas supply disruptions; although they had to shut down their pipelines due to assessments or repairs, customers were able to obtain natural gas from other sources.
In early 2006, INGAA and the American Gas Association (AGA) polled their members about their experiences with and plans for conducting assessments and reassessments during off-peak and peak months. Overall, INGAA and AGA found that, from 2003 to 2012, members plan to conduct 76 percent of their baseline assessments and reassessments on their gas transmission pipelines (as measured in miles) during the off-peak spring and summer months, 18 percent in the fall, and 6 percent in the winter. According to an INGAA official, most of the assessment activity that results in temporary reductions in gas supply due to repairs being made will likely affect markets regionally. If assessments occur when pipelines are constrained for capacity, an increase in delivered gas prices will occur. Overall, assessments will only affect small groups of the nation’s population, but they will have a consumer price impact in those affected areas. Our findings from these operators, while not necessarily representative of all operators, are encouraging. First, these findings do represent a sizeable proportion (61 percent) of the mileage assessed to date. Second, the segments that operators assessed were supposed to be the riskiest segments (those most susceptible to ruptures or leaks) of the gas transmission pipelines located in highly populated or frequently used areas. If so, there should be fewer repairs needed for subsequent baseline assessments of less risky segments, and hence fewer disruptions in supply. The 2006 INGAA and AGA polling of their members did not explicitly ask about the extent to which their members experienced supply disruptions because of baseline assessments or repairs. However, INGAA and AGA did ask members to identify the amount of pipeline modifications and repairs that would be necessary for conducting baseline assessments and reassessments, activities that could disrupt supply.
Overall, INGAA and AGA found that about 50,000 of the 180,000 miles of gas transmission pipelines that were reported by responding operators are scheduled for or have already undergone (1) modifications to allow in-line inspection tools to access pipeline segments, (2) repairs to eliminate major defects, or (3) monitoring for minor problems. According to a senior INGAA official, assessments and pipeline modifications can generally follow a prearranged schedule; however, pipeline repairs are unpredictable. Repairs often require pipelines to be shut down, which could have an effect on natural gas supply. However, PHMSA officials report that only the worst pipeline problems require pipelines to be shut down for repair. From 2003 to 2012, 38,000 of the 50,000 pipeline miles (76 percent) have been scheduled for modifications or repairs during the off-peak spring and summer months to mitigate supply disruptions. Officials from the Office of Oil and Gas within the Department of Energy told us that the integrity management program, including the 7-year reassessment requirement, is not likely to significantly disrupt the natural gas supply. They told us that operators have, among other things, sufficient system redundancies, such as parallel lines, to maintain product supply. The Department of Energy has completed several regional analyses of the possible effects of the disruptions in the natural gas supply caused by such events as extreme weather conditions (e.g., extended cold periods and hurricanes). It is completing other analyses as well. However, because these are being done at the regional level, their results are too broad to help inform us about more localized and subregional potential disruptions.

To understand how the findings from operators’ baseline assessments inform us about the need to reassess gas transmission pipelines at least every 7 years, we reviewed the requirements of the Pipeline Safety Improvement Act of 2002 and PHMSA’s implementing regulations.
We also reviewed information about setting reassessment intervals for gas transmission pipelines, including industry consensus standards for maximum reassessment intervals developed by the American Society of Mechanical Engineers, and documents obtained from PHMSA, industry, and other stakeholders. We discussed this issue with officials from PHMSA, other federal agencies, industry associations, companies that perform research in this area, state safety representatives, and safety advocacy groups. (These organizations are listed at the end of this appendix.) We also analyzed data from PHMSA on the number of immediate repairs reported by operators as a result of baseline assessments conducted through December 2005 (latest data available) and the number of natural gas pipeline incidents reported to PHMSA. We contacted 52 pipeline operators (50 natural gas and 2 hydrogen operators) from among the 447 operators that reported that they operate gas transmission pipelines in highly populated or frequently used areas. Forty-four of these operators have begun baseline assessments. We selected those operators for which the baseline assessments and reassessments could be expected to have the greatest impact, all else being equal: larger and smaller transmission pipelines and local distribution companies with the highest proportion of pipeline miles in highly populated or frequently used areas to total system miles. We also selected operators located in three regions of the country that several studies and our stakeholders consider to be vulnerable to energy supply disruptions: the Northeast, the Southwest, and the Rocky Mountains. The 52 operators reported that they have assessed about 4,100 of the 6,700 miles (61 percent) of pipeline segments, as of December 2005. Overall, these operators have assessed about 21 percent of the 20,000 miles of pipeline that operators have reported as being within highly populated or frequently used areas. 
Because we used a nonprobability method of selecting these operators, we cannot project our findings nationwide. Contacting a larger number of operators or selecting them through a statistical sample would not have been feasible due to resource and time constraints. Nonetheless, these 52 operators do represent a substantial portion of the miles assessed to date and of the total number of reported miles of pipeline in highly populated or frequently used areas. For these 52 operators, we conducted semistructured interviews to collect qualitative and quantitative information on the degree to which they found anomalies during the baseline assessments and, based on these results, the frequency with which they would reassess these pipeline segments under American Society of Mechanical Engineers standards for managing the system integrity of gas pipelines (ASME B31.8S-2004) if the 7-year reassessment requirement were not in place. As part of our work, we asked operators to identify the steps that they take to ensure the quality of their baseline assessments and reassessments, such as ensuring that competent persons are involved in determining reassessment intervals and conducting periodic internal or third-party reviews of their integrity management programs, as recommended by PHMSA regulations and industry standards. We relied on the operators’ professional judgment in reporting on the conditions they found during their assessments. To determine the extent to which gas transmission pipeline operators and local distribution companies will likely have the resources to reassess their pipelines at least every 7 years, we synthesized testimonial and documentary evidence obtained from our discussions with (1) 52 operators (as described above) and (2) pipeline assessment tool contractors, direct assessment vendors, and industry associations on the prospective availability of equipment, equipment operators, and data analysts to interpret results.
We synthesized the information from the 52 operators to determine the aggregate level of actual and planned assessments and reassessments through 2012. We compared our findings with the results from an INGAA/AGA data collection effort, conducted in 2006, on the same topic. We then discussed our results with INGAA and analyzed the data obtained from both efforts to try to understand any differences in results. To assess the reliability of information provided to us from PHMSA, INGAA, and AGA, we performed a number of analyses. For the information provided to us from PHMSA, we compared the number of immediate repairs operators reported to us to the number of immediate repairs they reported to PHMSA. To assess the reliability of the data provided to us from INGAA and AGA, we also compared the reported responses of operators that were included in INGAA/AGA’s and our efforts. In addition, we checked the accuracy of INGAA/AGA’s calculations. We determined that the data were sufficiently reliable for the types of analyses we present in this report. To determine the potential impact of the 7-year reassessment requirement on the nation’s natural gas supply, we contacted officials from PHMSA, the Department of Energy, industry associations, and research firms to discuss how the potential shutdown of gas transmission pipelines or operation under reduced pressure—as a result of baseline assessments, reassessments, and repairs—might affect the continued supply of natural gas. We also obtained information from the Department of Energy on the results of analyses of the overall vulnerability of natural gas supplies in several regions of the nation to extreme conditions, such as extreme cold weather. Further, we asked the 50 natural gas operators that we contacted about the vulnerability of their pipelines to supply disruption and the potential impact on customers. This included 11 operators located in the three regions of the country that have limited excess supply gas capacity. 
We also discussed how their baseline assessments and any resulting repairs have affected their customers to date. Finally, we compared operators’ experiences in performing assessments, reassessments, and repairs to the assumptions made in the 2002 INGAA study of the potential effects of the proposed integrity management program, two reviews of this study, and PHMSA’s final regulatory evaluation. The reviews were performed by the John A. Volpe National Transportation Systems Center and the Department of Energy at the request of PHMSA. In addition to the above, James Ratzenberger, Assistant Director; Timothy Bober; Anne Dilger; Seth Dykes; Timothy Guinane; Brandon Haller; Bert Japikse; and Matthew LaTour made key contributions to this report.

The Pipeline Safety Improvement Act of 2002 requires that operators (1) assess gas transmission pipeline segments in about 20,000 miles of highly populated or frequently used areas by 2012 for safety threats, such as incorrect operation and corrosion (called baseline assessments), (2) remedy defects, and (3) reassess these segments at least every 7 years. Under the Pipeline and Hazardous Materials Safety Administration's (PHMSA) regulations, operators must reassess their pipeline segments for corrosion at least every 7 years and for all safety threats at least every 10, 15, or 20 years, based on industry consensus standards--and more frequently if conditions warrant. Operators must also carry out other prevention and mitigation measures.

To meet a requirement in the 2002 act, this study addresses how the results of baseline assessments and other information inform us on the need to reassess gas transmission pipelines every 7 years and whether inspection services and tools are likely to be available to do so, among other things. In conducting its work, GAO contacted 52 operators that have carried out about two-thirds of the baseline assessments conducted to date.
Periodic reassessments of gas transmission pipelines are useful because safety threats can change. However, the 7-year requirement appears to be conservative because (1) most operators found few major problems during baseline assessments, and (2) serious pipeline incidents involving corrosion are rare, among other reasons. Through December 2005 (latest data available), 76 percent of the operators (182 of 241) that had begun baseline assessments reported to PHMSA that their pipelines required only minor repairs. These results are encouraging because operators are required to assess their riskiest segments first. Since operators are also required to repair these problems, the overall safety and condition of their pipelines should be enhanced before reassessments begin. In addition, PHMSA data suggest that serious gas transmission pipeline problems due to corrosion are rare. For example, there have been no deaths or injuries as a result of incidents due to corrosion since 2001. Of the 52 operators GAO contacted, 23 had calculated reassessment intervals, and the large majority of these (20 of 23) told GAO that, based on conditions identified during baseline assessments, they could safely reassess their pipelines for corrosion every 10, 15, or 20 years--as industry consensus standards prescribe unless pipeline conditions warrant an earlier assessment. Sufficient resources may be available for operators' reassessment activities, but some uncertainty exists. For the most part, the 52 operators that GAO contacted expect to be able to obtain the services and tools needed through 2012. However, they expressed some concern about whether enough qualified vendors for the confirmatory and direct assessment methods (above-ground inspections followed by excavations) would be available. Industry associations and GAO attempted to determine the degree to which activity would increase from 2010 to 2012, when operators begin reassessing pipelines while completing baseline assessments.
An industry effort showed an increase in assessment and reassessment activity, but GAO's showed a decrease. The reasons for the differences are not clear but may be due, in part, to differences in the operators contacted and the methodologies used in collecting this information. |
Working-age adults with disabilities may obtain cash benefits from a number of private and public programs. After the onset of a disabling condition, workers needing long-term cash benefits may receive assistance from workers’ compensation, private disability insurance, or DI. However, in 1996, only 26 percent of private sector employees had long-term disability coverage under employer-sponsored private insurance plans. Thus, the DI program is an important provider of monthly benefits to workers who are no longer able to work because of a severe long-term disability. Most Social Security disabled beneficiaries, including disabled workers and their dependents, receive benefits from the DI program. However, adult disabled children who are dependents of deceased or retired workers, and disabled workers who have reached retirement age and their dependents, receive monthly benefits from the OASI program. In 1999, about 6.5 million beneficiaries received DI cash benefits totaling about $51.3 billion, while about 38.0 million beneficiaries received OASI cash benefits totaling about $334.4 billion. Benefits for both OASI and DI beneficiaries are based on the application of the Social Security benefit formula to the worker’s average monthly lifetime earnings. The resulting monthly benefit is the amount payable to a worker who becomes entitled to disability benefits or retires at the NRA. Because monthly benefits for DI and OASI beneficiaries are based on the same benefit formula, any change in this formula, as has been proposed in some Social Security reform plans, could affect the benefits that disabled workers as well as retired workers receive. Both DI and OASI monthly benefits will also be affected by other proposed Social Security reform changes, such as decreases in the cost-of-living adjustment (COLA). However, only OASI monthly benefits are affected by proposed changes in the retirement age.
Under current law, the age at which an individual is first eligible to receive full retirement benefits, or NRA, is gradually increasing from 65 to 66 for those who turn 62 in 2005 and to 67 for those who turn 62 in 2022. Benefits retired workers take before NRA are subject to an actuarial reduction. Benefits taken by workers who postpone retirement and work between NRA and age 70 are increased through a delayed retirement credit for each month retirement is delayed. The benefit formula is weighted in favor of workers with lower earnings, so that benefits replace a larger proportion of their earnings. Benefits are adjusted each year, based on increases in the Consumer Price Index (CPI) in order to account for inflation. Auxiliary benefits are paid to eligible dependents and are 50 percent of the Social Security benefit that the disabled or retired worker receives, subject to a maximum family limit on benefits. Upon the death of an insured worker, the eligible spouse receives 100 percent of the worker’s benefit (subject to reduction for age) and the eligible surviving child receives 75 percent of the benefit. Individuals who receive low levels of DI or OASI benefits can supplement them with benefits from SSI. The SSI program, which was authorized in 1972 under title XVI of the Social Security Act, is funded through general revenues and provides monthly benefits to aged, blind, and disabled individuals who have income and resources below specified thresholds. The DI and SSI programs use the same criteria and procedures for determining disability. However, unlike DI beneficiaries, SSI recipients do not need to have a work history to qualify for benefits. The maximum federal SSI monthly benefit in 1999 was $500 for an individual. This monthly benefit level is reduced, depending on a recipient’s income and other sources of support, such as Social Security benefits. In 1999, 36 percent of SSI recipients also received Social Security benefits from either OASI or DI. 
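The weighting of the benefit formula described above can be illustrated with a short sketch. The 90/32/15 percent replacement brackets reflect the statutory structure of the primary insurance amount computation; the dollar bend points below are approximate 1999 values, and the helper function is our illustration, not an official SSA calculation (it omits indexing, the family maximum, and actuarial adjustments):

```python
def monthly_benefit(aime: float, bend1: float = 505.0, bend2: float = 3043.0) -> float:
    """Illustrative sketch of the weighted Social Security benefit formula.

    Replaces 90 percent of average indexed monthly earnings (AIME) up to the
    first bend point, 32 percent between the bend points, and 15 percent above
    the second bend point, so benefits replace a larger share of a low
    earner's wages. Bend points approximate the 1999 values.
    """
    benefit = 0.90 * min(aime, bend1)
    if aime > bend1:
        benefit += 0.32 * (min(aime, bend2) - bend1)
    if aime > bend2:
        benefit += 0.15 * (aime - bend2)
    return round(benefit, 2)

# The replacement rate falls as earnings rise -- the weighting the report notes.
print(monthly_benefit(1000) / 1000)   # replacement rate for a lower earner
print(monthly_benefit(5000) / 5000)   # lower replacement rate for a higher earner
```

A worker with $1,000 in average monthly earnings has roughly 61 percent of earnings replaced under these illustrative bend points, versus roughly 31 percent for a worker with $5,000, which is why formula changes proposed in the reform plans would affect low and high earners differently.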
The average federal monthly benefit in 1999 was $249 for the aged, $351 for the blind, and $364 for the disabled. In addition to the federal SSI benefits, some states provide supplemental benefits that are intended to reflect regional differences in living costs. Social Security is financed primarily on a pay-as-you-go basis, which means that the Social Security payroll taxes that current workers pay are used to pay for current benefits. In 1999, there were approximately 3.4 workers for every beneficiary, but this number is projected to fall to 2.1 by 2030. Because of this change in the ratio of workers to beneficiaries, and other factors, the Social Security trust funds will have a projected financial shortfall or funding gap of approximately $3 trillion over the next 75 years. According to estimates in the 2000 Trustees Report, the OASI trust fund is projected to have sufficient funds to fully finance benefits until 2039, while the DI trust fund is projected to have sufficient funds to fully finance benefits until 2023. After the trust funds are exhausted—that is, after 2039 for the OASI trust fund and 2023 for the DI trust fund—the annual tax revenues of the trust funds are expected to be sufficient to cover only about 70 percent of annual expenditures. In order to address the solvency of the trust funds, a number of Social Security reforms have been proposed. We assessed five of these proposals, some of which maintain the level of current law benefits and some of which reduce and restructure these benefits. Table 6 in appendix I lists the provisions in each proposal. Two of the proposals we studied, President Clinton’s proposal and the Archer-Shaw proposal, maintain the current level and structure of benefits. (See table 2.) Three of the reform proposals we studied—Kasich, Kolbe-Stenholm, and Gregg-Kerrey-Breaux-Grassley—both reduce and restructure current benefits. (See table 3.)
The proposals we studied vary in the degree to which they explicitly refer to disabled beneficiaries. President Clinton’s proposal refers to maintaining current-law benefits for both retired and disabled workers. The Archer-Shaw proposal implicitly refers to both disabled and retired workers when it states that beneficiaries will be guaranteed at least current-law benefits. However, it explicitly refers to disabled workers when it discusses distributions from the individual accounts (IAs). Workers can receive distributions from their IAs when they become entitled to either DI or OASI benefits. The Kasich proposal does not explicitly refer to disabled beneficiaries when discussing changes in benefits or the establishment of IAs, although disability benefits are affected by the provisions in the Kasich proposal. Rather, it emphasizes that the provisions described will not affect the benefits of retired workers or those near retirement. The discussion of the expected returns to the IAs clearly refers only to retired workers, with their longer work history. Most of the provisions in the Gregg-Kerrey-Breaux-Grassley and Kolbe-Stenholm proposals explicitly refer to disabled or retired workers. Under both proposals, the benefits of disabled workers are affected by one reduction in the PIA formula but are exempted from a second reduction. Benefits of both disabled and retired workers are affected by reductions in the COLA. However, the provision in both proposals that increases the benefit computation period amends a clause in the Social Security Act that refers only to retired workers. The provision increasing the retirement age affects only the benefits of retired workers. Under both proposals, the restrictions on IA distributions refer to receipt either at retirement age or at the attainment of a particular level of funds in the IA. Under the Gregg-Kerrey-Breaux-Grassley proposal, the insurance benefit is reduced by an offset related to the amount of contributions to the IA.
DI beneficiaries are exempt from this adjustment to the insurance benefit when benefits are first received. However, at retirement age, when they are able to gain access to the income from their IAs, insurance benefits are reduced by the appropriate offset. Estimates by SSA’s Office of the Chief Actuary indicate that all the proposals would improve the solvency of the combined DI and OASI trust funds, with the extent of the improvement varying across proposals. In addition, most of the specific provisions in the proposals, such as transfers from general funds and reductions in benefit levels, would have a positive effect on the solvency of the DI trust fund. However, a provision such as the increase in the retirement age would have a negative effect on the DI trust fund while at the same time improving the OASI trust fund balance. The reform proposals we studied had a range of effects on the trust funds’ solvency as measured by the actuarial balance. The actuarial balance as calculated by the Office of the Chief Actuary is the difference between the present value of the Social Security program’s revenues and costs over a 75-year period and is expressed as a percentage of taxable payroll. If revenues exceed costs, the actuarial balance is positive; if costs exceed revenues, the actuarial balance is negative, indicating a deficit. In 1999, under current law, the Social Security program faced an actuarial deficit equal to 2.07 percent of taxable payroll. This figure represents the amount of the payroll tax rate increase in 1999 that would establish actuarial balance in the Social Security trust funds over the subsequent 75 years. In other words, increasing the payroll tax rate from the current 12.4 percent to 14.47 percent of payroll would establish actuarial balance in the trust funds.
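Because both the payroll tax rate and the actuarial deficit are expressed as percentages of taxable payroll, the balancing rate is the sum of the two. A minimal sketch of this arithmetic (our illustration; the Office of the Chief Actuary's actual computation works from 75-year present values of revenues and costs):

```python
def balancing_tax_rate(current_rate_pct: float, actuarial_deficit_pct: float) -> float:
    """Payroll tax rate that would restore 75-year actuarial balance, given
    a deficit expressed as a percentage of taxable payroll."""
    return round(current_rate_pct + actuarial_deficit_pct, 2)

# 1999 figures from the report: a 12.4 percent payroll tax rate plus the
# 2.07 percent actuarial deficit yields the 14.47 percent balancing rate.
print(balancing_tax_rate(12.4, 2.07))  # 14.47
```

The same relationship runs in reverse for the proposals discussed below the table: a proposal that reports a positive actuarial balance has revenues exceeding costs over the 75-year valuation period.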
The Office of the Chief Actuary provides annual estimates of the actuarial balance for the combined OASI and DI trust funds under current law and, when requested by the Congress or the executive branch, estimates of the actuarial balance under reform proposals. The estimates of the actuarial balance under current law and each of the reform proposals we studied are presented in table 4. The actuaries estimated that the trust funds’ deficit of 2.07 percent of taxable payroll under current law would either be sharply reduced or become a surplus for the combined trust funds under the reform proposals we studied. A surplus in the combined trust funds could mean a surplus in one trust fund and a deficit in the other. However, a reallocation of payroll tax rates between the two funds would be expected in this case. The President’s proposal would reduce the actuarial deficit but is not expected to eliminate it. This would be achieved through general fund transfers every year from 2011 to 2050 and by allowing some limited investment in equities, which have a higher rate of return than do the government bonds in which the trust funds have traditionally been invested. Estimates for two other proposals result in a small surplus for the combined trust funds. The Kolbe-Stenholm proposal would generate its surplus through benefit cuts and general fund transfers. In the Archer-Shaw proposal, general fund transfers would finance the contributions to the IAs that the proposal would establish. The proceeds from these accounts would be transferred to the trust funds when benefits are received. The proposal also calls for reducing payroll taxes in response to the additional trust fund revenue expected to accrue from the proceeds of these IAs. The other proposals we examined would result in larger estimated actuarial surpluses for the combined trust funds. 
The Kasich proposal would accomplish this by reducing the initial level of insurance benefits and then further decreasing insurance benefits by a fixed percentage for each year of contribution to an IA, as well as by borrowing from the general fund. The Gregg-Kerrey-Breaux-Grassley proposal would achieve its surplus through a mix of benefit cuts and revenue transfers that would offset the loss of trust fund revenues resulting from the redirection of a portion of the payroll taxes to the IAs. The reform proposals we studied differ in the magnitude of the stipulated transfers from general revenue. Transfers are smaller under the Kolbe-Stenholm and Gregg-Kerrey-Breaux-Grassley proposals, which contain a number of provisions to achieve solvency by changing benefits or revenues. Under the Kolbe-Stenholm proposal, general revenue transfers range from 0.03 percent of taxable payroll in 2000 to 0.80 percent of taxable payroll in 2060. General revenue transfers under the Gregg-Kerrey-Breaux-Grassley proposal range from 0.6 percent of taxable payroll in 2000 to 1.2 percent of taxable payroll in 2060. General revenue transfers are larger under the proposals with fewer alternative provisions for attaining solvency. Under the Kasich proposal, for example, the magnitude of the transfers ranges from 1.17 percent of taxable payroll in 2000 to 1.57 percent of taxable payroll around 2030. Under the President’s proposal, transfers range from a high of 2.41 percent of taxable payroll to a low of 0.52 percent of taxable payroll between 2011 and 2050. Finally, the Archer-Shaw proposal calls for a general revenue transfer equal to 2 percent of taxable payroll beginning in 2000. Although most provisions in the proposals we examined potentially have a positive effect on the solvency of the DI trust fund, some provisions would have a negative effect. 
The President’s proposal has two provisions—the transfer of funds from general revenue to the combined OASI and DI trust funds and the investment of a portion of these funds in equities. According to the Office of the Chief Actuary, both provisions would be expected to have a positive effect on the solvency of the DI trust fund. The Archer-Shaw proposal calls for a gradual transfer of the income from the IA balances, which are financed from general revenue, to the trust funds. In the case of disabled workers, the income from the IA balances would be transferred to the DI trust fund. This provision also would have a positive effect on the DI trust fund. The Kasich proposal contains three provisions that would have a positive effect on DI trust fund solvency: the indexing of benefits to prices rather than to wages, which reduces benefits; the reduction in benefits for individuals who opt to contribute a portion of their payroll tax to an IA; and the borrowing of funds from general revenue. The loss of payroll tax revenue associated with individuals opting for IAs would increase the DI trust fund’s deficit, and the general fund loans are designed to compensate for this. Both the Kolbe-Stenholm and the Gregg-Kerrey-Breaux-Grassley proposals contain multiple provisions that would affect DI trust fund solvency. Provisions that reduce the COLA and change the PIA formula so as to reduce benefits for disabled workers lower program costs and, therefore, improve the actuarial balance for the DI program. However, provisions such as the redirection of payroll taxes to IAs and the establishment of a minimum benefit have potentially a negative effect on the DI trust fund. Redirecting payroll taxes reduces revenues to the trust fund while establishing a minimum benefit increases program costs for beneficiaries who were receiving benefits below the minimum. Even provisions that appear to be focused on retirement benefits can have an effect on the DI trust fund. 
For example, increasing the retirement age also increases the age at which disability insurance benefits are converted to retirement insurance benefits. As a result, disability beneficiaries remain on the DI program longer, increasing costs to the DI program. We were able to use the SSASIM model to estimate the effects on solvency of certain of the provisions in the reform proposals. Our estimates using the model are based on the intermediate assumptions reported in the 1999 Social Security Trustees Report because the SSA’s Office of the Chief Actuary used these assumptions to score the Social Security reform proposals we analyzed. Table 5 presents our results. Reductions in benefits have a positive effect on DI trust fund solvency. The increase in the retirement age results in the expected negative effect on solvency of the DI trust fund. Two reform proposals we studied either maintain current-law benefits— the President’s proposal—or guarantee that the beneficiary would receive at least the amount of current-law benefits—the Archer-Shaw proposal. The remaining three reform proposals—Kasich, Kolbe-Stenholm, and Gregg-Kerrey-Breaux-Grassley—would affect the levels of insurance benefits DI and OASI beneficiaries receive by changing the PIA formula for calculating initial benefits, reducing the COLA, raising the retirement age, or increasing the number of years of earnings used in computing benefits. How a beneficiary’s total benefit income (reduced insurance benefits plus IA income) under these three proposals compares with the benefits received under a maintain-benefits scenario or a maintain-tax-rates scenario depends both on the extent of the decrease in the insurance benefits and on the amount of income received from the IA. Our maintain-benefits scenario achieves solvency through increased payroll taxes while current-law benefits are maintained. 
Our maintain-tax-rates scenario achieves solvency through benefit reductions while holding current payroll tax rates at today’s levels. These two scenarios represent a range of benefit levels, with the maintenance of current-law benefits being at the upper end and the reduced benefits necessary for the maintenance of current payroll taxes being at the lower end. We compared the benefit income received under each of the three proposals with that received under the maintain-benefits scenario and the maintain-tax-rates scenario for each of three beneficiary groups with the selected characteristics that we simulated: disabled workers, dependents of disabled workers (including spouses, children younger than 18, and adult disabled children), and adult disabled children who are dependents of retired workers. We made the comparisons under each of several different assumptions about the year in which the worker was born, the worker’s earnings level, and the worker’s age when the worker first received DI benefits. We chose the ages of initial benefit receipt to reflect SSA data indicating that individuals are receiving DI benefits at younger ages. For the IAs in our analysis, we assumed that individuals would have portfolios with a smaller percentage invested in equities as they got older. We assumed the return on equities would be a constant, inflation-adjusted 7 percent per year, which reflects the long-term historical average return on equities. According to our estimates, the disabled beneficiaries with the selected characteristics we simulated would, in general, receive higher benefits under the maintain-benefits scenario than they would under the Kasich, Kolbe-Stenholm, or Gregg-Kerrey-Breaux-Grassley proposals. Figures 1 and 2 present the results for workers as well as their dependents. The workers were born in 1986 and have low or average earnings and, in the case of disabled workers, first receive DI benefits at the age of 45 and never work again. 
These reform proposals would reduce insurance benefits while providing income from the IAs. Under these proposals, it is possible that the IA income might compensate for the decline in insurance benefits resulting from other provisions. However, this is less likely for disabled-worker beneficiaries than for retired-worker beneficiaries because disabled workers are likely to have shorter work histories and thus have smaller IA balances. The reductions in benefits resulting from the decline in the COLA and the changes in the PIA formula are so great that the income from the IA would be insufficient to completely compensate for this loss for the disabled-worker beneficiaries with the selected characteristics that we examined. Disabled workers with low earnings and their dependents would receive greater benefit income under the Gregg-Kerrey-Breaux-Grassley proposal than under the maintain-benefits solvency scenario. However, this higher benefit income is largely the result of changes in the PIA formula that increase the progressivity of the benefit structure. For the proposals we examined, we included the income from the IA only in the benefit income of the disabled or retired worker, not in that of the worker’s dependents, since apportioning the IA income among family members is an individual matter and would vary by household. Consequently, benefit income for dependents of disabled or retired workers would be reduced under the Gregg-Kerrey-Breaux-Grassley, Kasich, and Kolbe-Stenholm proposals not only because of reductions in the insurance benefit but also because it does not include income from individual accounts. In addition, the insurance benefits of dependents include only the amount received during the years in which the worker on whose earnings record the benefits are payable is receiving insurance benefits. 
Contrary to the results of our comparisons with the maintain-benefits scenario, in our comparison of each of the three proposals with the maintain-tax-rates scenario, we found that in most cases the beneficiary would receive higher benefit income under the proposals than under the scenario. However, dependents of low-earner disabled workers under the Kasich proposal would receive benefit income that is less than under the maintain-tax-rates scenario. Also, adult disabled children of retired workers would receive somewhat lower benefit income under all three proposals in almost all cases. These results are presented in figures 1 and 2. The benefit income received under the three proposals would generally be greater than the benefits received under the maintain-tax-rates solvency scenario because the proposals have provisions for achieving solvency, such as general revenue transfers, in addition to reducing benefits. As a result, the insurance benefits would not have to decline as much as in the maintain-tax-rates scenarios. Further, the benefit income workers would receive under the proposals includes income from IAs. We also examined individual provisions within the three proposals to assess their contribution to the change in the level of insurance benefits received. Reductions in the COLA instituted under the Gregg-Kerrey-Breaux-Grassley and Kolbe-Stenholm proposals would decrease insurance benefits relatively little compared with the maintain-benefits scenario for both disabled workers and their beneficiaries and for adult disabled children of retired workers. Figure 3 presents the estimated effects of the decrease in the COLA on workers born in 1986 who first receive disability benefits at the age of 45 and never work again. The pattern of change in the present value of benefit income for dependents of disabled workers and for adult disabled children who are dependents of retired workers is similar to that shown in figure 3 for disabled workers. 
Changes in the PIA formula, however, generally result in large reductions in insurance benefits relative to the maintain-benefits scenario. The one exception is a provision of the Gregg-Kerrey-Breaux-Grassley proposal that would increase benefits for workers with certain levels of earnings, thereby increasing benefits for low earners and decreasing benefits by a relatively smaller amount for average earners. Figure 4 displays the effects on disabled workers of changes in the benefit calculation formula. The pattern in the present value of benefit income for the two other categories of beneficiaries is similar to that shown in figure 4 for disabled workers. The insurance benefits of adult disabled children who are dependents of retired workers would also be significantly decreased by an additional change in the PIA formula applicable only to OASI benefits under the Gregg-Kerrey-Breaux-Grassley and Kolbe-Stenholm proposals. Figure 5 displays the effects on the insurance benefits of adult disabled children resulting from this PIA change. As we stated earlier and as is shown in figure 3, reductions in the COLA result in relatively small declines in the level of current-law benefits. Consequently, the levels of insurance benefits that would be received under this provision would be greater than the benefit income received under the maintain-tax-rates scenario, in which benefits would be reduced to levels supportable by current payroll tax rates. Despite the large reductions in insurance benefits resulting from the changes in the PIA formula, most disabled beneficiaries would be better off under this provision in the proposals than under the maintain-tax-rates scenario. The exception occurs for all three types of low-earner beneficiaries under the Kasich proposal’s change in the PIA formula. This provision in the Kasich proposal indexes initial benefits to prices rather than to wages, resulting in a sharp decline in benefits. 
The effects of the PIA changes on the disabled worker are shown in figure 4. According to our estimates, the effect on the disabled worker’s benefit income of the IA provision alone is positive under the Gregg-Kerrey-Breaux-Grassley, Kasich, and Kolbe-Stenholm proposals. Benefit income would increase the most under the Kolbe-Stenholm proposal because the IA income does not reduce insurance benefits. Benefit income under the Gregg-Kerrey-Breaux-Grassley proposal would also increase but by less because the proposal reduces insurance benefits by an amount that reflects the present value of the government contributions to the IA plus the interest that would have accrued had these contributions been invested at the interest rate earned by the OASDI trust funds. The benefit income received under the Kasich proposal would also be less than that received under the Kolbe-Stenholm proposal because under the Kasich proposal insurance benefits would be reduced by a fixed percentage for each year of contributions to the IA. Figure 6 shows the effect of the IA provision for both the low-earning and the average-earning disabled worker. In our analysis, we assigned the income from the IA to the disabled or retired worker, not to the worker’s dependents, because the apportionment of the IA income among family members is an individual matter and would vary by household. Thus, our estimates reflect the most that the worker would receive from the IAs, whereas our estimates for the dependents reflect the most that their benefits would be reduced under these proposals. Accordingly, for the dependent of the disabled worker and for the adult disabled child, the IA will not increase benefit income because we assumed that these beneficiaries, unlike the worker, receive no income from the IA. Under the Kolbe-Stenholm proposal, there would be no reduction in the benefit income of dependents because changes in IA income do not affect the level of insurance benefits. 
However, the Kasich proposal would decrease the insurance benefit of the worker by a set percentage for each year of contributions to the IA. The Gregg-Kerrey-Breaux-Grassley proposal would reduce the insurance benefit of the worker by an amount that reflects the present value of the government contribution to the IA plus the interest that would have accrued had these contributions been invested at the interest rate the OASDI trust funds earn. The insurance benefit that the dependent receives is a proportion of what the worker receives. Consequently, the insurance benefit that the dependent receives would be reduced under our assumption that dependents receive no compensating income from the IA under the Gregg-Kerrey-Breaux-Grassley and Kasich proposals. (See figures 7 and 8.) The IA provision would increase benefit income for disabled workers compared with the maintain-benefits scenario. Consequently, the benefit income of the disabled workers we examined would also be greater than the benefits available under the maintain-tax-rates scenario. In our analysis, we assigned all the IA income to the disabled or retired worker and none to the worker’s dependents. As a result, under the Gregg-Kerrey-Breaux-Grassley and Kasich proposals, dependents would experience the reduction in insurance benefits related to the existence of an IA but would not receive any compensating income from the IA, under our assumptions. However, even the reduced insurance benefits that dependents would receive would be greater than the benefits they would receive under the maintain-tax-rates scenario. In the analysis presented so far, we have provided graphs showing the effect of the reform proposals on the worker who first receives DI benefits at the age of 45 and never works again. 
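The three offset rules just described can be compared in a deliberately simplified, lump-sum sketch. The rules themselves follow the proposals as described in this report (Kolbe-Stenholm applies no offset; Gregg-Kerrey-Breaux-Grassley offsets contributions compounded at the trust fund interest rate; Kasich reduces the benefit by one-third of a percent per contribution year), and the 7 percent equity return matches the report’s assumption. The contribution level, the 3 percent trust fund rate, and the treatment of lifetime amounts as single accumulated values are illustrative assumptions, not figures from the report.

```python
# Simplified comparison of how each proposal nets IA income against the
# insurance-benefit offset, with everything collapsed to lump sums.

def ia_balance(years, contribution, ret=0.07):
    """Accumulate a fixed annual contribution at a constant real return."""
    balance = 0.0
    for _ in range(years):
        balance = (balance + contribution) * (1 + ret)
    return balance

def net_ia_addition(proposal, years, contribution, lifetime_benefit,
                    trust_fund_rate=0.03):
    """IA accumulation minus the proposal's offset, all as lump sums."""
    ia = ia_balance(years, contribution)
    if proposal == "Kolbe-Stenholm":
        offset = 0.0                  # IA income does not reduce benefits
    elif proposal == "Gregg-Kerrey-Breaux-Grassley":
        # contributions plus interest at the trust fund rate
        offset = ia_balance(years, contribution, ret=trust_fund_rate)
    elif proposal == "Kasich":
        # one-third of a percent of the benefit per contribution year
        offset = (0.01 / 3) * years * lifetime_benefit
    return ia - offset

# 23 contribution years, a hypothetical $1,000 annual contribution, and a
# hypothetical $300,000 lifetime benefit:
for proposal in ("Kolbe-Stenholm", "Gregg-Kerrey-Breaux-Grassley", "Kasich"):
    print(proposal, round(net_ia_addition(proposal, 23, 1000, 300000)))
```

Because the Kolbe-Stenholm rule applies no offset, its net addition is the largest, consistent with the comparison in the text; the sketch also shows how fewer contribution years shrink the net addition for a worker disabled young.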
However, the income from the IA is affected by the number of years for which contributions are made to the IA and, therefore, by the age at which the worker leaves the labor force and begins receiving DI benefits. To see how the income received from the IA would vary by age of first receipt of DI benefits, we compared the income received from the IA by workers who began receiving DI benefits at different ages. Figures 9 and 10 provide the net addition of the IAs to benefit income—that is, the addition to benefit income after reductions are made in the insurance benefit in response to the income from the IA. Figures 9 and 10 indicate that the income received from the IA increases with the age of first receipt of DI benefits. The later that DI benefits are received, the greater the number of years in the labor force, the number of years funds are deposited in the IA, and the number of years the IAs accrue compound interest. The addition of IA income to benefit income across ages would be greatest under the Kolbe-Stenholm proposal, which would not reduce the insurance benefit in response to the income received from the IA. The Gregg-Kerrey-Breaux-Grassley proposal would reduce the insurance benefit by an amount that reflects the present value of the government contributions to the IA plus the interest that would have accrued had these contributions been invested at the interest rate earned by the OASDI trust funds. The Kasich proposal would reduce the insurance benefit by one-third of a percent for each year of participation in the IA. Some Social Security reform proposals could increase costs for the SSI program. Individuals receiving benefits from both Social Security (DI or OASI) and SSI might become eligible for larger SSI benefits if their Social Security benefits decrease as a result of reform. In addition, some Social Security beneficiaries not currently eligible for SSI might become eligible if their Social Security benefits declined as a result of reform. 
As we stated earlier, we estimated that three Social Security reform proposals—Gregg-Kerrey-Breaux-Grassley, Kasich, and Kolbe-Stenholm—would lower Social Security benefit income, which includes income from IAs, in most of the cases we studied. For DI and OASI beneficiaries who also receive SSI, the decrease in Social Security benefit income would lower their unearned income, which means that their SSI benefit would increase. This would have no effect on the number of recipients but would increase the cost to the program. For the beneficiaries who receive only Social Security and not SSI, the previously mentioned decrease in benefit income would lower unearned income, which would make some eligible for SSI benefits. This would increase both the number of beneficiaries and the cost to the program. However, the full effect on SSI would not be felt immediately because most of the individual provisions within these proposals are to be phased in over time and in many cases are not to be completely in effect until 2020. Given the complexity of the interactions between Social Security and SSI and the difficulty of projecting SSI caseloads so far into the future, it would be extremely difficult to estimate precisely what the effects of reform proposals would be on SSI program costs. In the cases we studied, our analyses indicate that most disabled beneficiaries would receive higher benefits under Social Security reform proposals than under a solvency scenario that maintained payroll tax rates while reducing benefits. However, most disabled beneficiaries with the characteristics we studied would receive lower benefits under reform than under a solvency scenario that maintained current-law benefits while raising payroll taxes. 
This reduction in benefits under reform to levels below that of current law would occur even though we assumed an optimal set of conditions for disabled beneficiaries: full-time work until receipt of DI benefits and low administrative costs and no annuitization costs for the IAs. Consequently, the typical DI beneficiary could receive lower benefits than the DI beneficiaries with the selected characteristics we studied. The proposals we studied treat DI beneficiaries similarly to OASI beneficiaries. However, the circumstances facing disabled workers differ from those facing retired workers. For example, the disabled worker’s options for alternative sources of income, especially earnings-related income, to augment the reduced benefits are likely to be more limited than are those for the retired worker. Further, DI beneficiaries are entering the program at younger ages and remaining in the program in most cases until death or retirement. Thus, disabled beneficiaries could be subject to these reductions in benefits for many years. They will also have smaller balances in their IAs because of fewer working years in which to make IA contributions and accrue compounded interest. In addition, under several proposals, disabled beneficiaries cannot gain access to income from individual accounts until they reach retirement age. These differences between disabled and retired workers suggest that Social Security reform proposals should be viewed not only in light of their effects on retired workers but also explicitly for their effect on disabled beneficiaries and their families. We provided a draft of this report to SSA. In commenting on this report, the agency noted that we addressed an important topic that has until now received little attention. 
Specifically, SSA highlighted two points in our report as being important for policy makers considering changes to Social Security: that individual accounts might not fully offset Social Security insurance benefit reductions for some beneficiaries and that SSI benefits might increase as they compensate for the decline in DI benefits resulting from Social Security reform. However, the agency had some concerns about our use of a “best case” scenario to estimate the effects of policy options and about the assumptions underlying this “best case” scenario, citing specifically earnings levels, life expectancy, and investment return assumptions that SSA thought did not reflect the actual situation of disabled beneficiaries. On the basis of these concerns, the agency suggested that we give the report balance by adding a “worst case” scenario. SSA also expressed concern regarding our focus on lifetime benefits, a measure that it believes does not adequately reflect living standards at specific points in time. Finally, SSA suggested that we include a measure reporting on money’s worth or internal rates of return in our table 1 that compares costs and benefits of Social Security reform proposals. SSA also made a number of technical comments, which we incorporated where appropriate. Our use of a “best case” scenario demonstrated that, even under the best of circumstances, Social Security reform proposals would reduce current-law benefits to DI beneficiaries—people who would find it more difficult than most nondisabled retired workers to replace lost benefits with other sources of income such as earnings. We did not examine “worst case” scenarios because the “best case” scenario demonstrates that most DI beneficiaries would be adversely affected by the reform proposals we analyzed. 
While including the “worst case” scenario SSA suggested could provide a specific lower limit to a range of possible benefit outcomes, that lower limit would be useful only if accompanied by an evaluation of the adequacy of that benefit level, which is beyond the scope of this report. In building a “best case” scenario, we used the earnings of men because they tend to have higher earnings than women do. To examine low-wage earners, we simulated workers who earn 45 percent of what average earners earn, which is the standard low level of earnings the Office of the Chief Actuary uses. Benefits declined at this earnings level as they would for workers earning even less. We assumed individuals lived until age 79 because almost one-third of individuals first receiving DI benefits at age 45 live that long, and the number of these individuals is significant enough to warrant study. With respect to SSA’s concern about our use of an equity return of 7 percent, we note that this is a figure currently used in projections, including those of the Office of the Chief Actuary. We chose not to adjust for risk because there is no one risk-adjusted measure that everyone agrees is the best measure, and we believed that our analysis would be more clearly understood with the simplifying “best case” assumptions. With respect to SSA’s concern with our focus on lifetime benefits, we acknowledge that we do not address the issue of variations across plans in living standards before retirement age resulting from differences in account access rules. This is certainly an issue on which future reports could usefully focus. As for the inclusion of money’s worth or internal rate of return measures, we agree that such analysis would be useful, but these measures are beyond the scope of this report. SSA’s written comments are printed in appendix III. We are sending copies of this report to the Commissioner of the Social Security Administration and others who are interested. 
We will also make copies available to others on request. If you or your staff have any questions concerning this report, please call me on (202) 512-7215. The major contributors to this report are Carol Dawn Petersen, Assistant Director, (202) 512-7066; Barbara A. Smith, Senior Economist; Michael Collins, Economist; and Kim Granger, Economist. Table 6 lists the provisions in the five proposals we studied. Table 7 shows that the access to the IA and the relationship between the IA and the insurance benefit vary across the proposals we studied. Under the Archer-Shaw and Kasich proposals, individuals can obtain funds from their IAs at the age of retirement or when they become eligible for Disability Insurance (DI) benefits. Under the Gregg-Kerrey-Breaux-Grassley and Kolbe-Stenholm proposals, disabled individuals are able to obtain IA income before retirement age only if the funds in the IA are sufficient to provide a monthly income that, when added to the insurance benefit, is at least equal to 1/12 of the current poverty line. According to the Social Security Administration (SSA), this threshold for account access would be virtually impossible for workers disabled at a relatively young age to meet because they would not have the time to build up an IA. In addition, insurance benefits are not affected by the presence of IA income under the Archer-Shaw and Kolbe-Stenholm proposals. Under the Gregg-Kerrey-Breaux-Grassley and Kasich proposals, there are reductions in the insurance benefit because of the existence of an IA. This scenario maintains current payroll tax rates while reducing Social Security benefits to levels supportable by these tax rates. There are many ways to reduce benefits, including waiting until the trust funds are exhausted and abruptly reducing benefits by the full amount necessary to be supported by current payroll taxes. 
We decided to follow a more gradual approach similar to that used in the “MTR (maintain tax rates) Proposal” presented in the Report of the 1994-96 Advisory Council on Social Security. The Council’s proposal reduces the 0.32 and 0.15 PIA formula factors by 0.5 percent for 1998-2011 and 1.5 percent for 2012-30. The PIA adjustments used in this report also reduce the 0.32 and 0.15 formula factors but by 2.0 percent for 2000-13 and 3.0 percent for 2014-32, which results in the percentage reductions in benefits shown in table 8. These percentage declines in benefits result in trust fund solvency through 2074 under the 1999 Trustees Report intermediate assumptions. We assume no behavioral changes in response to the decline in benefits because it is not clear how individuals will respond to the decline in benefits—whether they will continue to retire at younger ages or will postpone retirement to later ages in order to receive larger benefits. We instituted benefit reductions in the maintain-tax-rates scenario by reducing only the 0.15 and the 0.32 brackets of the PIA formula, following the approach used by the Advisory Council. (The PIA formula is described below.) This is important to take into account when comparing benefits under the maintain-tax-rates scenario with benefits for disabled beneficiaries under the Social Security reform proposals. Kasich reduces all three brackets, the 0.90 bracket as well as the 0.15 and 0.32 brackets. These reductions apply to both disabled and retired workers and their dependents. This is why benefits for lower earners under Kasich’s PIA provision are below those calculated in the maintain-tax-rates scenario. The Kolbe-Stenholm proposal, however, reduces only the upper two brackets for disabled-worker beneficiaries and does not reduce these brackets by as much as the maintain-tax-rates scenario does. 
Therefore, benefits for disabled low earners and their dependents under the Kolbe-Stenholm proposal are greater than benefits under the maintain-tax-rates scenario. The Gregg-Kerrey-Breaux-Grassley proposal creates an additional bracket and increases the 0.32 bracket to 0.70. This explains the increase in benefits for low earners above the benefits received under the maintain-benefits scenario.

The full unreduced monthly benefit amount for worker beneficiaries is determined by using the PIA formula. This formula consists of three brackets separated by two bend points. In 1999, these bend points were $505 and $3,043 for newly eligible beneficiaries. A worker’s PIA is calculated as 0.90 of the first $505 of career-average indexed monthly earnings (AIME), plus 0.32 of any AIME amount between $505 and $3,043, plus 0.15 of any AIME amount in excess of $3,043.

This scenario maintains current-law benefits while increasing payroll tax rates to levels that support those benefits. There are many ways to increase payroll tax rates, including waiting until the trust fund is exhausted and then abruptly increasing payroll tax rates to levels that would support current-law benefits. We follow an approach similar to that used in the “PL PAYGO Proposal” presented in the Report of the 1994-96 Advisory Council on Social Security, in which payroll tax rates are increased more gradually. The PL PAYGO option modifies the present-law payroll tax rate schedule, increasing the rate from 12.4 percent beginning in 1995 to 17.1 percent in 2060. The present-law payroll tax rate adjustments used for this report are in table 9. These payroll tax rates result in trust fund solvency through 2074 under the 1999 Trustees Report intermediate assumptions. Note that 85 percent of the OASDI payroll tax rate is assigned to the OASI program and 15 percent to the DI program.
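The bracketed PIA formula described above can be computed directly. The sketch below (in Python, for illustration only) uses the 1999 bend points ($505 and $3,043) and formula factors (0.90, 0.32, 0.15) given in the text; the sample AIME values are hypothetical.

```python
# PIA under the 1999 formula: three brackets separated by two bend points.
BEND1, BEND2 = 505.0, 3043.0   # 1999 bend points for newly eligible beneficiaries
FACTORS = (0.90, 0.32, 0.15)   # formula factors applied to each bracket

def pia(aime: float) -> float:
    """Full unreduced monthly benefit from average indexed monthly earnings."""
    first = min(aime, BEND1)                              # portion up to $505
    second = min(max(aime - BEND1, 0.0), BEND2 - BEND1)   # portion between bend points
    third = max(aime - BEND2, 0.0)                        # portion above $3,043
    return FACTORS[0] * first + FACTORS[1] * second + FACTORS[2] * third

print(round(pia(2000.0), 2))  # 932.9
print(round(pia(4000.0), 2))  # 1410.21
```

Bracket-factor reductions such as those in the maintain-tax-rates scenario amount to scaling the 0.32 and 0.15 entries of FACTORS downward while leaving the 0.90 factor unchanged.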
To assess how the Social Security reform proposals affect the solvency of the Social Security trust funds and the level of benefits individuals receive, we conducted a variety of simulations using the SSASIM model, developed by the Policy Simulation Group. The initial version of the model was developed under a series of contracts from SSA as part of the 1994-96 Advisory Council on Social Security’s activities. The model was subsequently enhanced with major support from the American Association of Retired Persons, the Employee Benefit Research Institute, and SSA as well as other organizations. The model can simulate a variety of policy reforms to the Social Security program, from incremental changes in the OASI and DI programs to broader structural reforms that would introduce an IA component to the Social Security system. The SSASIM model simulates the dynamic interaction of the labor force, the economy, and the Social Security programs and can be used to generate aggregate program cost and income estimates as well as estimates for the OASI and DI trust funds. Changes in program structure can be analyzed for any specified future time periods. Consistent with SSA’s annual projections, we explored the effect of such changes on OASI and DI trust fund solvency for the 75-year period 1999-2074. The implications of a reform relative to one of the alternative scenarios that achieve solvency are determined by comparing the output results from a simulation that assumes the reform policy with results from a simulation that assumes one of the two alternative scenarios. In our analysis, we made a number of assumptions. With respect to population and economic projections, we used the intermediate assumptions in the 1999 Annual Report of the Board of Trustees of the federal OASI and DI trust funds. We use the assumptions in the 1999 Trustees Report because the Office of the Chief Actuary used these assumptions to score the Social Security reform proposals we analyzed. 
(See table 10.) We analyzed how the reforms affect individuals born in 1946, 1966, and 1986 in order to assess the effects of provisions that are phased in over time. We analyzed how the reforms affect individuals with average earnings and with 45 percent of average earnings to see how the reform provisions affect workers at different earnings levels. The model contains information on earnings separately for men and women. The user can specify a gender-related earnings pattern. Our analysis uses the earnings pattern for men. These earnings are based on the national average annual earnings of covered workers with earnings. Using 1998 data from SSA, we compared our choice of earnings levels with the earnings levels of actual new beneficiaries. We did so by calculating the DI benefit corresponding to our selected earnings levels and comparing these benefit levels with the distribution of benefits actual DI beneficiaries received in 1998. We found that about 42 percent of all new beneficiaries in 1998 received benefits that correspond to earnings that are less than 45 percent of average earnings, about 38 percent of new beneficiaries received benefits corresponding to earnings that are between 45 percent of average earnings and average earnings, and about 20 percent of new beneficiaries received benefits corresponding to earnings that are greater than the average level. We analyzed how the reforms affected individuals with three different ages of first receipt of DI benefits (35, 45, and 55) to compare the experiences of people disabled at younger ages with those disabled at older ages. These three ages reflect the experiences of individuals with different lengths of time in the DI program and with different lengths of time in the labor force. According to SSA, the average age of a new male DI beneficiary in 1999 was 49.6 years, down from 51.2 years in 1980.
In 1999, 19.3 percent of men’s new benefits were awarded to individuals younger than 40, 24 percent to those in their 40s, and 40 percent to those in their 50s. DI benefits for disabled workers are terminated mostly because of the death of the beneficiary or the attainment of retirement age and conversion of benefits to the OASI program; only half of 1 percent of DI beneficiaries leave the program each year because of work. According to SSA data on awards made to DI beneficiaries in 1998, the type of disability that new DI beneficiaries claimed is somewhat associated with age. In 1998, mental disorders were the most common diagnosis for new DI awardees younger than 35, while diseases of the musculoskeletal system were the most common diagnosis for those aged 50 and older. For new awardees younger than 35, mental disorders accounted for 34 percent while diseases of the musculoskeletal system accounted for 11 percent. For new DI awardees aged 50 and older, diseases of the musculoskeletal system accounted for 27 percent while mental disorders accounted for 11 percent. We assumed that individuals enter the workforce at age 22 and work full-time until disability or retirement with no years out of the labor force. We chose these assumptions because they represent a “best case” for the disabled individual. Many disabled individuals are likely to work less than full-time and to have periods of time out of the labor force. However, little information is available on the wages, earnings histories, and periods of nonwork of the disabled. This makes it difficult to choose a “typical” earnings level and earnings pattern for them. The benefit income for our “best case” disabled individuals will clearly be greater than that for disabled individuals receiving lower earnings from intermittent and less than full-time employment.
Our results, therefore, represent a maximum level of benefit income that disabled beneficiaries could expect to receive under the Social Security reform proposals that we modeled. We also assumed that the nondisabled workers we simulated retire at age 67 and that all the individuals we simulated die at age 79. We made these assumptions so that in our simulations the retired workers and all disabled workers with a given age of first receipt of DI benefits would have the same number of years of receiving benefits. Thus, differences in benefit income across individuals would be the result of differences in reform proposals and not the result of differences in individual characteristics. Because of the possibility that actual disabled individuals might have a lower life expectancy than we assumed for our simulation, we asked SSA’s Office of the Chief Actuary to send us death rates for men who were born in 1986 and began receiving DI benefits at age 45. We then calculated the proportion who would still be alive at ages 46 to 79. According to our calculations, 49 percent of these individuals would still be alive at 70, and 31 percent would still be alive at 79. We assumed that the benefits workers and their dependents received were not affected by the application of the maximum family benefit. The maximum family benefit refers to the maximum amount that can be paid on a worker’s earnings record. In the case of retired or deceased workers, the maximum varies from 150 to 188 percent of the PIA. In the case of disabled workers, the maximum family benefit is the smaller of 85 percent of the worker’s AIME or 150 percent of the worker’s PIA. The family maximum cannot be exceeded, regardless of the number of beneficiaries entitled on that earnings record, although any benefit payable to a divorced spouse is not included. 
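For a disabled worker, the family maximum described above reduces to a one-line comparison: the smaller of 85 percent of AIME or 150 percent of PIA. The sketch below is illustrative only; it reuses the 1999 PIA bend points given earlier in this appendix, and the $3,000 AIME is a hypothetical value, not a figure from our analysis.

```python
# Family maximum on a disabled worker's earnings record,
# per the rule above: the smaller of 85% of AIME or 150% of PIA.
def pia(aime: float) -> float:
    # 1999 formula: 0.90/0.32/0.15 factors, bend points $505 and $3,043
    return (0.90 * min(aime, 505.0)
            + 0.32 * min(max(aime - 505.0, 0.0), 3043.0 - 505.0)
            + 0.15 * max(aime - 3043.0, 0.0))

def disabled_family_max(aime: float) -> float:
    return min(0.85 * aime, 1.50 * pia(aime))

aime = 3000.0  # hypothetical worker
print(round(pia(aime), 2), round(disabled_family_max(aime), 2))  # 1252.9 1879.35
```

At this earnings level the 150-percent-of-PIA cap binds; for a sufficiently low AIME the 85-percent-of-AIME cap binds instead.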
Whenever the total of the individual monthly benefits payable to all the beneficiaries entitled on one earnings record exceeds the maximum, each dependent’s or survivor’s benefit is reduced in equal proportion to bring the total within the maximum. For the analysis of IAs, we assumed that administrative costs are 0.105 percent of assets. Our estimate of administrative costs is that used in the Report of the 1994-96 Advisory Council on Social Security. The Council considered an option to create IAs alongside the Social Security system with a centralized system of recordkeeping and limited investment choices. The estimate of 0.105 percent of assets was a consensus of the Council members. We also assumed that individuals do not annuitize but, rather, draw down the balance in the IA through periodic withdrawals. Consequently, the balance in the account is not reduced by the costs associated with purchasing an annuity. We also assumed that individuals know how long they are going to live and thus determine the schedule of periodic withdrawals so as to use up the entire balance in the IA by the time they die. These assumptions result in the largest balance possible in the IAs. For the Kolbe-Stenholm, Gregg-Kerrey-Breaux-Grassley, and Kasich proposals, we used the same assumptions that SSA’s Office of the Chief Actuary used in scoring the Kolbe-Stenholm proposal. Following the approach taken in the Report of the 1994-96 Advisory Council on Social Security, we varied the percentage invested in equities according to age. We assumed that persons younger than 40 would invest 55 percent of their account in equities, with an average real return of 4.8 percent for the portfolio. We assumed that those 40 to 49 would invest 50 percent of their account in equities, with an average real return of 4.5 percent. We assumed that those 50 to 59 would invest 40 percent of their account in equities, with an average real rate of return of 4.1 percent. 
We assumed that those 60 to 69 would invest 20 percent of their accounts in equities, with an average real return of 3.1 percent. We assumed the portion not invested in equities would be invested in Treasury bonds and the return on equities would be a constant, inflation-adjusted 7 percent per year, which reflects the long-term historical average return on equities. We note that the assumption of a 7 percent return on equities in the future has been criticized by some as being optimistic. We did not adjust the rates of return on equities for risk. As we stated in a recent report, there are numerous ways to adjust for risk but no clearly best way, and there is no one risk-adjusted measure that everyone agrees is the correct measure. As a result, the returns on equity that we use are likely to be higher than the risk-adjusted returns.

There has been little analysis of how the various Social Security reform proposals might affect the Social Security Disability Insurance (DI) program.
This report assesses the potential impact of these proposals on the solvency of the DI trust fund. GAO found that most disabled beneficiaries would receive higher benefits under the various Social Security reform proposals it reviewed than under a solvency scenario that maintained payroll tax rates while reducing benefits. However, most of the disabled beneficiaries GAO studied would receive lower benefits under three of the reform proposals reviewed than under a solvency scenario that maintained current-law benefits while raising payroll taxes. The proposals GAO studied treat DI beneficiaries similarly to Old-Age and Survivors Insurance beneficiaries. However, the circumstances facing disabled workers differ from those facing retired workers. These differences suggest that Social Security reform proposals should be viewed not only in light of their effects on retired workers but also explicitly for their effects on disabled beneficiaries and their families.
FDA is responsible for overseeing the safety and effectiveness of human drugs marketed in the United States, whether they are manufactured in foreign or domestic establishments. As part of its efforts to ensure the safety and quality of imported drugs, FDA may inspect foreign establishments whose drugs are imported into the United States. The purpose of these inspections is to ensure that foreign establishments meet the same manufacturing standards for quality, purity, potency, safety, and efficacy as required of domestic establishments. Requirements governing FDA’s inspection of foreign and domestic establishments differ. Specifically, FDA is required to inspect every 2 years those domestic establishments that manufacture drugs marketed in the United States, but there is no comparable requirement for inspecting foreign establishments. However, drugs manufactured by foreign establishments that are offered for import may not enter the United States if FDA determines—through the inspection of an establishment, a physical examination of drugs offered for import, or otherwise—that there is sufficient evidence of a violation of applicable laws or regulations.

Within FDA, CDER sets standards and evaluates the safety and effectiveness of prescription and over-the-counter (OTC) drugs. Among other things, CDER requests that ORA inspect both foreign and domestic establishments to ensure that drugs are produced in conformance with federal statutes and regulations, including current GMPs. CDER requests that ORA conduct inspections of establishments that produce drugs in finished-dosage form as well as APIs used in finished drug products. These inspections are performed by investigators and, as needed, laboratory analysts.

ORA conducts two primary types of drug manufacturing establishment inspections. Preapproval inspections of domestic and foreign establishments are conducted before FDA will approve a new drug to be marketed in the United States.
These inspections occur following FDA’s receipt of a new drug application or an abbreviated new drug application and focus on the manufacture of a specific drug. Preapproval inspections are designed to verify the accuracy and authenticity of the data contained in these applications to determine that the establishment is following commitments made in the application. FDA also determines that the establishment manufacturing the finished drug product, as well as each manufacturer of an API used in the finished product, manufactures, processes, packs, and labels the drug adequately to preserve its identity, strength, quality, and purity. GMP inspections focus on an establishment’s systemwide controls for ensuring that the processes it uses to manufacture drugs marketed in the United States produce drugs that are of high quality. Systems examined during these inspections include those related to materials, quality control, production, facilities and equipment, packaging and labeling, and laboratory controls. These systems may be involved in the manufacture of multiple drugs. For the purpose of surveillance, FDA conducts GMP inspections of establishments manufacturing drugs currently marketed in the United States to determine establishments’ ongoing compliance with laws and regulations. FDA conducts for-cause GMP inspections when it receives information indicating problems in the manufacture of approved drugs, as well as when it follows up on establishments that were not in compliance with GMPs during previous inspections. FDA may conduct an inspection that combines both preapproval and GMP components during a single visit to an establishment. As the results of a GMP inspection can often be generalized to all drugs manufactured at a particular establishment, FDA can use the results of the combined inspection to make decisions in the future if the establishment is listed in another application. 
FDA uses a risk-based process to select some domestic and foreign establishments for GMP inspections to conduct surveillance of drugs currently marketed in the United States. According to an FDA report, the agency developed the process after recognizing that it did not have the resources to meet the requirement for inspecting domestic establishments every 2 years. The process uses a risk-based model to identify those establishments that, based on characteristics of the establishment and of the drug being manufactured, have the greatest public health risk potential should they experience a manufacturing defect. Through this process, CDER annually prepares a prioritized list of domestic establishments and a separate, prioritized list of foreign establishments. FDA began applying this risk-based process to domestic establishments in fiscal year 2006 and expanded it to foreign establishments in fiscal year 2007. FDA’s process for determining whether a foreign establishment complies with GMPs involves both CDER and ORA. During an inspection, ORA staff report observations of significant objectionable conditions and practices that do not conform to GMPs on the list-of-observations form, commonly referred to as an FDA Form 483. They provide this Form 483 to the establishment, along with a briefing on the inspection’s results, on the last day of the inspection. ORA staff discuss the observations on the Form 483 with the establishment’s management to ensure that they are aware of any deficiencies that were observed during the inspection and suggest that the establishment respond to FDA in writing concerning all actions taken as a result of the observations. Once ORA staff complete the inspection, they prepare an establishment inspection report to document their inspection findings. Inspection reports describe the manufacturing operations observed during the inspection and any conditions that may violate federal statutes and regulations. 
Based on its inspection findings, ORA recommends whether the establishment is acceptable to supply drug products or drug ingredients to the United States. ORA makes a recommendation regarding the classification of the inspection. All inspection reports and classification recommendations related to inspections of foreign establishments are forwarded to CDER. CDER reviews the ORA recommendation and determines the final classification and whether regulatory action is necessary.

A classification of no action indicated (NAI) means that insignificant or no deficiencies were identified during the inspection. A classification of voluntary action indicated (VAI) means that deficiencies were identified during the inspection, but the agency is not prepared to take regulatory action. Therefore, any corrective actions are left to the establishment to take voluntarily. A classification of official action indicated (OAI) means that serious deficiencies were found that warrant regulatory action. Inspections classified as OAI may result in regulatory action, such as the issuance of a warning letter.

FDA issues warning letters to those foreign establishments manufacturing drugs for the U.S. market that are in violation of the law or implementing regulations and may be subject to enforcement action if the violations are not promptly and adequately corrected. In addition, warning letters notify the establishment that FDA may refuse entry of the establishment’s drugs at the border and will recommend disapproval of any new drug applications listing the establishment until sufficient corrections are made. It is FDA policy to consider many factors in determining whether to issue a warning letter. For example, the agency is to consider corrective actions taken or promised by the establishment since the inspection, and it may decide not to issue a letter if an establishment’s corrective actions are adequate and the violations that would have supported the letter have been corrected.
Warning letters are issued after the review and approval of FDA’s Office of Chief Counsel. FDA policy states that the agency will strive to issue warning letters within 4 months of the last day of the inspection. In addition to a warning letter, FDA may take other regulatory actions if it identifies serious deficiencies during the inspection of a foreign establishment. For example, FDA may issue an import alert, which instructs FDA staff that they may detain drugs manufactured by the violative establishment that have been offered for entry into the United States. In addition, FDA may conduct regulatory meetings with the violative establishment. Regulatory meetings may be held in conjunction with the issuance of a warning letter to emphasize the significance of the deficiencies or for the purpose of obtaining prompt voluntary compliance in those instances in which the deficiencies do not warrant the issuance of a warning letter. FDA uses multiple sources of information to determine whether the actions taken by an establishment to correct violations are adequate. FDA may, for example, review documentation describing completed or proposed corrective actions; hold meetings with representatives of the establishment to discuss corrective actions; agree to consider reports of inspections conducted by private consultants; obtain inspection reports from foreign regulatory bodies; and reinspect the establishment itself, though it is not required to do so. As part of this process, agency staff may also make a recommendation for when the establishment should next receive a surveillance inspection. See figure 1 for a description of this process. FDA uses multiple databases to manage its foreign drug inspection program. The Drug Registration and Listing System (DRLS) contains information on foreign and domestic drug establishments that have registered with FDA to market their drugs in the United States. 
These establishments provide information, including company name and address and the drugs they manufacture for commercial distribution in the United States, on paper forms, which are entered into DRLS by FDA staff. The Operational and Administrative System for Import Support (OASIS) contains information on drugs and other FDA-regulated products offered for entry into the United States, including information on the establishment that manufactured the drug. The information in OASIS is automatically generated from data managed by Customs and Border Protection (CBP). The data are originally entered by customs brokers based on the information available from the importer. CBP specifies an algorithm by which customs brokers generate a manufacturer identification number from information about an establishment’s name and address. The Field Accomplishments and Compliance Tracking System (FACTS) contains information on foreign and domestic establishments inspected by ORA, the type of inspection conducted, and the outcome of those inspections. Investigators and laboratory analysts enter information into FACTS following completion of an inspection. The Office of Compliance Foreign Inspection Tracking System (OCFITS) contains information that CDER uses to track its review of foreign inspection reports submitted by ORA staff, such as information on the type of inspection conducted, CDER actions taken in connection with its review of inspection reports, and the outcome of those inspections. Information in OCFITS is entered by CDER staff. According to DRLS, in fiscal year 2007, foreign countries that had the largest number of registered establishments were China, India, Canada, France, Germany, Japan, the United Kingdom, and Italy (see fig. 2). These countries are also listed in OASIS as having the largest number of establishments offering drugs for import into the United States. 
Specifically, according to OASIS, China had more establishments manufacturing drugs that were offered for import into the United States than any other foreign country. According to OASIS, in fiscal year 2007, a wide variety of prescription and OTC drugs manufactured in China were offered for import into the United States, including pain killers, antibiotics, blood thinners, and hormones. FDA does not know how many foreign establishments are subject to inspection, and the agency’s recently announced initiatives do not fully address this weakness. The databases that FDA uses to select establishments for inspection do not contain accurate information on the number of establishments manufacturing drugs for the U.S. market. Instead of maintaining a list of establishments subject to inspection, FDA relies on information from databases that contain inaccuracies and that were not designed for this purpose. Furthermore, officials indicated that these databases cannot be electronically integrated or readily interact with one another to compare data, so some comparisons are done manually for each individual establishment. FDA has supported initiatives that could provide it with more accurate information about foreign establishments subject to inspection, but it is too early to tell if these efforts will provide the agency with an accurate count. DRLS provides FDA with some information that the agency uses to select establishments for inspection, but contains inaccuracies and does not provide a complete count of establishments subject to inspection. DRLS, established in 1991, is intended to list the registered establishments that manufacture drugs for the U.S. market. Requirements for the registration of foreign establishments were implemented in 2002. FDA expected that requiring foreign establishments to register would provide it with a comprehensive list of establishments that manufacture drugs for the U.S. market. 
In fiscal year 2007, approximately 3,000 foreign establishments that reported manufacturing human drugs, biologics, or veterinary drugs were registered with FDA; FDA was unable to determine from this database the number of registered establishments specifically manufacturing human drugs. FDA officials told us that the count of registered foreign establishments in DRLS does not reflect the actual number whose drugs are being imported into the United States for several reasons. First, although foreign establishments are required to renew their registration information annually, FDA does not enforce this requirement by deactivating the registration of establishments that do not fulfill this requirement. Agency officials told us that some foreign establishments may not report to FDA if they stop manufacturing drugs for the U.S. market or go out of business, although establishments are required to do so. Thus, these establishments may still be listed in DRLS as actively registered establishments. Second, foreign establishments may register with FDA whether or not they actually manufacture drugs for the U.S. market. FDA officials told us that this is made more likely by the fact that FDA does not charge foreign establishments a fee to register. FDA officials pointed out that some foreign establishments register because, in foreign markets, registration may erroneously convey an “approval” or endorsement by FDA. FDA officials told us that the agency does not routinely verify the information provided by establishments to ensure that it is accurate. Nor does FDA confirm that the establishment actually manufactures drugs for the U.S. market. FDA does not know how many foreign establishments are erroneously registered. In addition, DRLS does not provide the agency with a complete count of establishments subject to inspection because foreign establishments that manufacture APIs are not required to register if their products are not directly imported into the United States. 
Planned changes to DRLS could help FDA improve this database but will not provide an accurate count. In July 2008, FDA initiated a pilot of a voluntary electronic registration and listing system for establishments that manufacture drugs; the agency plans to accept only electronic registration beginning June 2009. The new system allows drug manufacturing establishments to submit registration and listing information electronically, rather than submitting it on paper forms. FDA hopes that electronic registration will result in efficiencies allowing the agency to shift resources from data entry to assuring the quality of the databases. Through this new system, FDA also plans to require establishments to update their registration information every 6 months, rather than annually, as is currently required. In addition, FDA has asked establishments to voluntarily submit a unique identification number—a Dun and Bradstreet Data Universal Numbering System (D-U-N-S®) Number—as part of their registration. An official said the agency plans to make this a requirement after it implements electronic registration in June 2009. This identification number could provide FDA with confidence regarding certain information about the establishment, such as its name and location. However, it will not prevent foreign establishments that do not manufacture drugs for the U.S. market from registering. As a result, the registration database will continue to contain inaccuracies when FDA selects establishments for inspection. FDA has also proposed, but not yet implemented, initiatives that could help improve the accuracy of information FDA maintains on registered establishments. FDA proposed a program to contract with an external organization to help manage and improve DRLS, which it describes in its proposal as fragmented and unreliable. As part of the contract, FDA states that the contractor would “establish reasonable credibility” of some of the information provided by establishments. 
However, as of June 2008, the agency had not yet solicited proposals for this program. In addition, the agency has proposed the Foreign Vendor Registration Verification Program. Through this program, FDA plans to contract with an external organization to conduct on-site verification of the registration data and product listing information of foreign establishments shipping drugs and other FDA-regulated products to the United States. FDA has solicited proposals for this contract but is still developing the specifics of the program. For example, the agency has not yet formalized the criteria it would use to determine which establishments would be visited for verification purposes or determined how many establishments it would verify annually. As of July 2008, FDA had not yet awarded this contract. Given the early stages of these proposals, it is too soon to determine whether they will improve the accuracy of the data FDA maintains on foreign drug establishments. OASIS, which FDA also uses to help it select establishments for inspection, provides an inaccurate count of foreign establishments manufacturing drugs offered for import into the United States. According to OASIS, 6,760 foreign establishments manufactured drugs that were offered for import into the United States in fiscal year 2007. However, this count is inaccurate as a result of unreliable manufacturer identification numbers generated by customs brokers when a drug is offered for import. FDA officials told us that these errors result in the creation of multiple records for a single establishment, which results in inflated counts of establishments offering drugs for import into the U.S. market. FDA officials acknowledged this problem but were unable to provide us with an estimate of the extent of these errors. In addition, the agency does not have a process for systematically identifying and correcting these errors. 
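The duplication problem can be illustrated with a toy identifier rule. The function below is a simplified, hypothetical stand-in for a broker-generated manufacturer identification number built from name and address fragments; it is not CBP's actual algorithm, and the establishment names are invented. The point is only that small keying differences produce distinct identifiers, and hence multiple records, for a single establishment.

```python
import re

def toy_mid(country: str, name: str, city: str) -> str:
    """Hypothetical identifier derived from name/address fragments,
    loosely imitating a broker-generated manufacturer ID."""
    words = re.sub(r"[^A-Z0-9 ]", "", name.upper()).split()
    # country code + first 3 letters of first two name words + first 3 of city
    return country.upper() + "".join(w[:3] for w in words[:2]) + city.upper()[:3]

# One (invented) establishment, keyed three different ways by brokers:
entries = [
    ("CN", "Shanghai Pharma Co., Ltd.", "Shanghai"),
    ("CN", "Shang Hai Pharma Co., Ltd.", "Shanghai"),
    ("CN", "Pharma Co. of Shanghai", "Shanghai"),
]
ids = {toy_mid(*e) for e in entries}
print(len(ids))  # 3 -- one establishment, counted three times
```

Any scheme in which the identifier is re-derived from free-text fields at entry time is vulnerable to this inflation, which is why the text's unique-identifier proposals assign the number once and reuse it.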
To mitigate this problem, the officials told us that FDA has provided training to brokers as a way to improve accuracy. FDA has supported a proposal with the potential to address weaknesses in OASIS, but FDA does not control the implementation of this proposed change. FDA, in conjunction with other federal agencies, is pursuing the creation of a governmentwide unique establishment identifier that could minimize duplication. Agencies currently rely on the creation and entry of an identifier at the time of import. Under this new proposal, establishments offering products, including drugs, for import into the United States would obtain a unique establishment identifier through a commercial service that would verify certain information about the establishment. This unique identifier would then be stored within the proposed Shared Establishment Data Service (SEDS) and submitted as part of import entry data when required by FDA or other government agencies. The unique identifier could thus eliminate the problems that have resulted in multiple identifiers associated with an individual establishment. The implementation of SEDS is dependent on action from multiple federal agencies, including the integration of the concept into a CBP import and export system that is under development and scheduled for implementation in 2010. In addition, once implemented by CBP, FDA and other participating federal agencies would be responsible for bearing the cost of integrating SEDS with their own operations and systems. FDA officials are not aware of a specific time line for the implementation of SEDS. The databases FDA uses to select establishments for inspection are not electronically integrated, and their integration could help reconcile data inaccuracies. 
To create a list of foreign establishments subject to inspection, the agency relies on information from databases that were not designed for that purpose and contain divergent estimates—about 3,000 and 6,760 from DRLS and OASIS, respectively. FDA officials told us that these databases are not electronically integrated and do not readily interact with one another to help reconcile the data. FDA indicated that any electronic comparison of the data in these databases is complex and the agency conducts some comparisons manually for each individual establishment. For example, for fiscal year 2007, FDA used DRLS and other data to develop a list of 3,249 foreign establishments ranked by their risk level in order to select establishments for surveillance inspection. However, due to inaccuracies in DRLS, FDA must also check OASIS to determine which of these establishments actually had imported drugs into the United States and were subject to inspection. FDA officials indicated that they had to manually compare establishments on this list with establishments in OASIS. Because these databases are not electronically integrated, DRLS and OASIS are not conducive to routine analysis to compare the data and identify errors. FDA is in the process of improving the integration of some of its current data systems, which could make it easier for the agency to establish an accurate count of foreign drug manufacturing establishments subject to inspection. The agency’s Mission Accomplishments and Regulatory Compliance Services (MARCS) is intended to help FDA electronically integrate data from multiple systems. It is specifically designed to give individual users a more complete picture of establishments but could also help the agency compare information in multiple databases to obtain an accurate count of establishments subject to inspection. 
For example, an FDA official indicated that MARCS in combination with planned improvements to the agency’s registration database will allow FDA to electronically integrate FDA’s drug registration and import data. FDA officials estimate that MARCS, which is being implemented in stages, could be fully implemented by 2011 or 2012. An FDA official told us that the agency may be able to electronically integrate its registration and import data by the end of fiscal year 2009, but this implementation has previously faced delays. FDA officials told us that implementation has been slow because the agency has been forced to shift resources away from MARCS and toward the maintenance of current systems that are still heavily used, such as FACTS and OASIS. It is too early to tell whether the implementation of MARCS will improve FDA’s management of its inspection program. FDA inspects few foreign establishments, relative to domestic establishments, each year to assess the manufacture of drugs currently marketed in the United States. The percentage of such foreign establishments that have been inspected cannot be calculated with certainty because FDA does not know how many foreign establishments manufacture drugs for the U.S. market and are thus actually subject to inspection. Of the foreign establishments that FDA inspected, few were selected to conduct surveillance of drugs currently marketed in the United States. Instead, most foreign establishments are selected for inspection as part of the agency’s review process associated with applications for approving a new drug. In each year we examined, FDA inspected fewer foreign establishments manufacturing drugs for the U.S. market than it inspected domestically. However, its lack of an accurate count of foreign establishments subject to inspection makes it difficult to exactly determine the relative size of that portion. 
Based on our review of data on inspections, FDA conducted an average of 247 foreign establishment inspections per year from fiscal years 2002 through 2007. Comparing this average number of inspections with FDA’s count of 3,249 foreign establishments that it used to prioritize its fiscal year 2007 surveillance inspections suggests that the agency inspects about 8 percent of foreign establishments in a given year. At this rate it would take FDA more than 13 years to inspect this group of establishments once, assuming that no additional establishments are subject to inspection. In contrast, from fiscal years 2002 through 2007 FDA conducted about 1,528 inspections of domestic establishments each year. FDA officials estimated that there were about 3,000 domestic establishments manufacturing drugs in fiscal year 2007. They told us that the agency inspects these domestic establishments about once every 2.7 years. FDA’s data indicate that some foreign establishments have never received an inspection, but the exact number of such establishments is unclear. Of the list of 3,249 foreign establishments, there were 2,133 foreign establishments for which the agency could not identify a previous inspection. Agency officials told us that this count included registered establishments whose drugs are being imported into the United States that have never been inspected, as well as establishments whose drugs were never imported into the United States or those who have stopped importing drugs into the United States without notifying FDA. FDA was unable to provide us with counts of how many establishments fall into each of these subcategories. Of the remaining 1,116 establishments on FDA’s list, 242 had received at least one inspection, but had not received a GMP inspection since at least fiscal year 2000. The remaining 874 establishments had received at least one GMP inspection since fiscal year 2000. 
Of these 874 establishments, 326 had last been inspected in fiscal years 2005 or 2006, 292 were last inspected in fiscal years 2003 or 2004, and the remaining 256 received their last inspection in fiscal years 2000 through 2002. FDA recently increased the number of foreign establishments it inspects, most of which are concentrated in a small number of countries. From fiscal years 2002 through 2007, the number of foreign establishment inspections FDA conducted varied from year to year, but increased overall from 220 in fiscal year 2002 to 332 in fiscal year 2007. During this period, FDA inspected establishments in a total of 51 countries. More than three quarters of the 1,479 foreign inspections the agency conducted during this period were of establishments in 10 countries, as shown in table 1. Because some establishments were inspected more than once during this time period, FDA actually inspected 1,119 unique establishments. For example, of the 94 inspections that FDA conducted of Chinese establishments, it inspected 80 unique establishments. The proportion of establishments inspected in each of these 10 countries varied. The country with the lowest proportion of establishments inspected was China, for which FDA inspected 80 of its estimated 714 establishments. In contrast, the agency inspected 43 of the estimated 61 establishments in Ireland. While FDA has recently made progress in conducting more foreign inspections, it still inspects relatively few such establishments. FDA conducted more foreign establishment inspections in fiscal year 2007 than it had in each of the 5 previous fiscal years. However, the agency still inspected less than 11 percent of the foreign establishments on the prioritized list that it used to plan its fiscal year 2007 surveillance inspections. 
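The coverage figures cited above follow from simple arithmetic; the following is a minimal sketch using the counts reported here (the variable names are ours, and small differences from the report's rounded figures reflect rounding):

```python
# Rough check of the inspection-coverage figures cited in this report.
# Counts come from the report itself.

foreign_establishments = 3249   # FDA's fiscal year 2007 prioritized list
avg_foreign_inspections = 247   # average inspections per year, FY 2002-2007

annual_coverage = avg_foreign_inspections / foreign_establishments
years_to_inspect_all = foreign_establishments / avg_foreign_inspections

print(f"Annual coverage: {annual_coverage:.1%}")           # 7.6%, i.e., about 8 percent
print(f"Years to inspect each once: {years_to_inspect_all:.1f}")  # 13.2, i.e., more than 13
```

This assumes, as the report does, that no additional establishments become subject to inspection during that period.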
In order to inspect foreign establishments biennially, as is required for domestic establishments, FDA would have to dedicate substantially more resources than it has dedicated to such inspections in the past. In fiscal year 2007, FDA dedicated about $10 million to inspections of foreign establishments. FDA estimates that, based on the time spent conducting inspections of foreign drug manufacturing establishments in fiscal year 2007, the average cost of such an inspection ranged from approximately $41,000 to $44,000. If these estimates are applied to the 3,249 foreign drug establishments on the list FDA used to plan its fiscal year 2007 surveillance inspections, it could cost the agency $67 million to $71 million each year to inspect each of those establishments biennially. Using FDA’s estimates for the cost of each inspection also suggests that it could cost the agency $15 million to $16 million each year to biennially inspect the estimated 714 drug manufacturing establishments in China, the country estimated to have the largest number of establishments. According to FDA budget documents, the agency estimates that it will dedicate a total of about $11 million in fiscal year 2008 to foreign drug inspections. Significant changes were recently made to the fiscal year 2009 budget request for FDA. The President’s original budget request to the Congress called for $2.4 billion in fiscal year 2009 for FDA, including $13 million to conduct all inspections of foreign drug establishments. However, in June 2008, the President submitted an amendment requesting an additional $275 million for fiscal year 2009, an approximately 11 percent increase over the original request. According to the submission, some of these additional funds were requested to allow FDA to conduct an additional 143 inspections of foreign drug establishments and 75 inspections of domestic drug establishments. 
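The cost estimates above can be reproduced with the same straightforward arithmetic; this is a minimal sketch, assuming that biennial inspection means covering roughly half the establishments in a given year (differences from the report's figures are rounding):

```python
# Rough check of the biennial-inspection cost estimates cited above.

cost_low, cost_high = 41_000, 44_000  # FDA's FY 2007 per-inspection cost range
foreign_establishments = 3249         # FY 2007 prioritized list
china_establishments = 714            # estimated establishments in China

# Biennial coverage: about half the establishments inspected each year.
per_year = foreign_establishments / 2
print(f"All foreign: ${per_year * cost_low / 1e6:.0f}M "
      f"to ${per_year * cost_high / 1e6:.0f}M per year")   # $67M to $71M

china_per_year = china_establishments / 2
print(f"China only: ${china_per_year * cost_low / 1e6:.0f}M "
      f"to ${china_per_year * cost_high / 1e6:.0f}M per year")  # $15M to $16M
```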
FDA is pursuing initiatives with drug regulators in foreign countries that are intended to help the agency improve its inspectional coverage. FDA has announced an initiative with the regulatory body of the European Union to pilot joint inspections of establishments that manufacture finished drug products in either the United States or the European Union and supply both of these markets. FDA indicated that these joint inspections could help it leverage resources by allowing the agency to utilize staff from the E.U. regulatory body when forming joint inspection teams. According to FDA, the joint inspections will help the agency and the E.U. regulatory body build confidence in each other’s inspections, which could allow FDA to review an inspection report completed by E.U. regulators instead of conducting its own inspection. As of July 2008, no joint inspections had been scheduled under this program, but FDA and the E.U. regulatory body were in preliminary discussions with one establishment about conducting a joint inspection. In addition, FDA has announced an initiative with the regulatory bodies of the European Union and Australia to share their plans for and results of inspections of API manufacturing establishments in these and other countries. For example, FDA could receive the results of inspections conducted by these regulatory bodies and then determine if regulatory action or a follow-up inspection is necessary. FDA contends that prospectively sharing information about inspection plans will allow these regulatory bodies to more efficiently use their resources by minimizing the overlap in their plans. FDA and the other regulatory bodies held initial discussions in July 2008 and plan to further discuss the program in September 2008. While both initiatives are intended to improve FDA’s knowledge of foreign establishments, both were recently announced and their impact will depend on the extent to which FDA effectively utilizes the information that it receives from the other regulatory bodies.
FDA selected few foreign establishments for inspection in order to examine the manufacturing of drugs currently marketed in the United States. We reported in 1998 that 20 percent of the agency’s foreign inspections were for the purpose of routine surveillance. For fiscal years 2002 through 2007, we found that about 13 percent of foreign inspections were GMP inspections conducted to examine the manufacturing of drugs currently marketed in the United States, rather than to inspect an establishment listed in a new drug application. (See fig. 3.) In comparison, for fiscal years 2002 through 2007, about 85 percent of FDA’s inspections of domestic establishments were GMP inspections conducted to examine the manufacturing of drugs currently marketed in the United States. FDA conducts a similar number of preapproval inspections in domestic and foreign establishments each year, but many more domestic GMP inspections. Agency officials said that preapproval inspections are driven by specific goals for the timely review of new drug applications, which may necessitate the inspection of establishments referenced in those applications. FDA often included a systemwide GMP inspection when it visited a foreign establishment for a preapproval inspection. From fiscal years 2002 through 2007, the majority of FDA’s foreign inspections combined a preapproval inspection with a broader GMP inspection. According to FDA officials, because foreign establishments are inspected infrequently, it is expedient for the agency to conduct preapproval inspections and GMP inspections during the same visit to a foreign establishment. Relatively few of the foreign establishments identified through CDER’s risk-based process are selected for surveillance inspections of drugs currently marketed in the United States.
In fiscal year 2007, after using this process to rank the 3,249 establishments by their potential risk level, CDER forwarded to ORA a list of 104 foreign establishments that it considered to be a high priority for inspection and requested that ORA complete surveillance inspections of 25 of them. FDA officials indicated that 29 such inspections were actually completed in fiscal year 2007. In fiscal year 2008, CDER submitted a list of 110 foreign establishments to ORA, with a target of at least 50 inspections. Though FDA oversight resulted in foreign establishments taking actions to address serious deficiencies identified during inspections, FDA’s subsequent inspections of these establishments were not always timely. FDA identified deficiencies during most of its inspections of foreign establishments. However, determining the number of inspections during which FDA identified serious deficiencies is hindered by inconsistent data on inspection classifications. FDA issued 15 warning letters to foreign drug establishments found to be out of compliance with GMPs. To determine the adequacy of an establishment’s corrective actions, FDA often relied on information provided by the establishment, rather than information obtained from another FDA inspection. Although FDA verified these corrective actions during subsequent inspections, FDA inspections to determine establishments’ continued compliance were not always timely and identified additional deficiencies. FDA identified deficiencies during most of its inspections of foreign establishments. Based on our review of classification data in FACTS, FDA identified deficiencies necessitating a classification of VAI or the more serious OAI in about 62 percent of foreign inspections conducted from fiscal years 2002 through 2006, compared to about 51 percent of inspections of domestic establishments. 
However, we determined that FDA’s data did not provide reliable information about the number of foreign inspections with serious deficiencies classified specifically as OAI. Determining the number of inspections during which FDA identified serious deficiencies is hindered by inconsistencies in databases used by FDA to track inspections. FDA uses two databases to track information about foreign inspections—FACTS, which is accessible to ORA staff and staff in CDER and other FDA centers, and OCFITS, which is only accessible to CDER staff who review foreign inspection reports. In comparing inspection classification information for foreign inspections conducted from fiscal years 2002 through 2006, we found that of the inspections that could be identified in both databases, 92 percent were consistently classified. However, for inspections that identified serious deficiencies, this rate was much lower. Of inspections classified as OAI in FACTS, 53 percent were identified in OCFITS as receiving the less serious classification of VAI. CDER officials told us that the final inspection classification should be the same in both FACTS and OCFITS. FDA officials suggested that inconsistencies between FACTS and OCFITS may result when changes in inspection classifications are not appropriately updated by FDA staff during the review process. Following an inspection of a foreign establishment, ORA staff enter classification recommendations into FACTS. However, CDER makes the final classification decision, which may be either more or less serious than ORA’s recommendation. CDER officials enter this final classification into OCFITS and, according to FDA policy, should also update this information in FACTS. However, FDA officials indicated that CDER staff may not always update FACTS. FACTS is the database used by ORA investigators and staff in other FDA centers to check establishments’ compliance history. 
When FACTS is not updated, consistent information on foreign establishments may not be readily accessible to FDA staff responsible for overseeing the foreign establishments that manufacture drugs marketed in the United States. FDA issued warning letters to establishments at which it identified serious deficiencies. Of the 1,479 inspections of foreign drug establishments that FDA conducted from fiscal years 2002 through 2007, the agency issued a warning letter following 15 inspections in which serious deficiencies were identified (see table 2). The rate of warning letters issued to foreign establishments was similar to that for domestic establishments. Foreign establishments that received warning letters were located in 10 countries. For establishments listed in 4 of the 15 warning letters, in addition to issuing a warning letter, FDA also issued import alerts authorizing detention of the establishments’ drugs if they were offered for import into the United States. When issuing the other 11 warning letters, FDA did not restrict importation of the establishments’ drugs, but notified the establishments that failure to correct the identified deficiencies could result in the agency denying entry of their drugs when they were offered for import into the United States. During the inspections that resulted in these 15 warning letters, FDA identified various deficiencies, including those related to laboratory controls, such as lack of an adequate impurity profile; documentation and records, such as records that did not include complete and accurate information relating to the production of each batch of drug produced; and facilities and equipment, such as an “unknown soft, yet flaking, black residue” inside a piece of equipment. FDA generally met its internal goal for the timely issuance of warning letters, and establishments usually began responding to deficiencies identified on the Form 483 prior to receiving the warning letter.
FDA issued 9 of the 15 warning letters within 4 months of completing its inspection—as is FDA’s policy—and issued 3 other letters in just over 4 months. While FDA was reviewing the results of the inspection and drafting the warning letters, inspected establishments generally responded in writing to deficiencies identified on the Form 483, which establishments receive on the last day of an inspection. In all but one instance, the establishments responded in writing to Form 483 observations within 5 weeks following the completion of the inspection. These written responses described the establishments’ proposed, completed, or soon-to-be-implemented corrective actions to address the deficiencies identified during the FDA inspection. In more than half of the cases, FDA noted that more comprehensive corrective actions were needed than those outlined in the establishments’ responses or that the responses lacked sufficient details, explanation, or documentation. The agency proceeded to issue the warning letters after finding the establishments did not provide sufficient written responses to the deficiencies identified during the inspection. Most of the foreign drug establishments to which FDA issued the 15 warning letters had previously been found by the agency to be out of compliance with GMPs. FDA had previously inspected establishments named in 12 of the 15 warning letters. These previous inspections had been conducted 1 to 7 years prior to the inspection that resulted in the issuance of the warning letter, with 9 of the 12 previous inspections occurring within 4 years of the warning letter inspection. FDA identified deficiencies in almost all of the 12 previous inspections, classifying 10 as VAI and 1 as OAI, but did not issue any warning letters. For 7 of these inspections, the deficiencies FDA identified at these establishments were again identified during the inspection that led to the issuance of a warning letter.
FDA often identified the warning letter deficiencies, which relate to the manufacture of a currently marketed drug, when it inspected the establishment as part of its review of a new drug application. In 7 of the 15 cases, FDA selected the establishment for inspection as part of its review of a drug application. In 3 cases, FDA conducted the inspection for surveillance purposes. In 3 other cases, FDA conducted the inspections following the receipt of information from an informant, such as allegations of insanitary conditions. In the 2 remaining cases, FDA conducted the inspection to follow up on a previous inspection performed by FDA or a foreign government that identified deficiencies. FDA oversight resulted in establishments taking actions to correct serious deficiencies, but the agency has not always conducted timely subsequent inspections to determine whether establishments continued to comply with agency requirements. FDA often relied on information provided by the establishment, rather than obtained from an FDA inspection, to determine the adequacy of an establishment’s corrective actions. As of July 2008, FDA had determined that the corrective actions taken by establishments referenced in 11 of the 15 warning letters were adequate. (See fig. 4.) For 7 of these 11 establishments, FDA relied on information provided by the establishment to make this determination. For example, establishments provided FDA with an outline of corrective actions to be taken. In some of these cases, FDA also met with officials from the establishments or held telephone conferences to discuss the corrective actions. This process often involved multiple communications between FDA and the establishment. FDA typically notified these establishments that their corrective actions were adequate within 4 months of issuing the warning letter. In this notification, the agency generally stated that it would verify the corrective actions taken at the time of the next inspection. 
For the other four establishments whose corrective actions it deemed adequate, FDA either conducted an inspection itself or used the results of an inspection conducted by a private consultant to make that determination. FDA inspected three of these establishments between 8 and 21 months after the issuance of the warning letter. Based on these inspections and other documentation, FDA determined that the deficiencies that led to the warning letter had been corrected. In two of those three inspections, FDA also found additional deficiencies that led to a classification of VAI. For one establishment, instead of waiting for FDA to conduct an inspection to determine the adequacy of its corrective actions, FDA agreed that the establishment could arrange for an inspection by a private consultant. The consultant found that the establishment had made the corrective actions requested by FDA. The agency stated that it would verify the corrective actions during its next inspection. FDA inspections to determine establishments’ continued compliance were not always timely. As of June 2008, FDA had subsequently inspected 4 of the 11 establishments it determined had taken adequate corrective actions in response to the warning letters. For 3 establishments, FDA had previously determined the adequacy of their corrective actions by reviewing information provided by the establishment. Although CDER staff had recommended that they be inspected within 1 year, these 3 establishments were inspected about 4 to 5 years after the inspection that resulted in the warning letter. However, FDA officials told us that dates recommended by CDER staff for subsequent inspections are only regarded as suggestions and scheduling inspections must be considered in light of other priorities. They noted that the selection of foreign establishments for inspection is driven by the drug approval process.
We found that, in these 3 cases, FDA next selected the establishment for inspection as part of processing an application for a new drug, rather than for the purpose of surveillance. For the fourth establishment, FDA had previously determined the adequacy of the establishment’s corrective actions by reviewing an audit report from a private consultant’s inspection. CDER staff had recommended that this establishment be inspected within 2 years and the agency met this recommendation by conducting a surveillance inspection. In three of these four subsequent inspections, FDA verified the corrective actions it had earlier deemed adequate, but it also identified additional deficiencies. The agency found that the three establishments had taken the corrective actions indicated in their response to the warning letters. However, FDA found other deficiencies requiring correction at those establishments. FDA classified all four of these inspections as VAI and none resulted in the issuance of a warning letter. Inspections of foreign drug establishments pose unique challenges to FDA—in both human resources and logistics—that influence the manner in which such inspections are conducted. For example, FDA does not have a dedicated staff devoted to conducting foreign inspections and relies on staff to volunteer. In addition, unlike domestic surveillance inspections, foreign surveillance inspections are announced in advance and inspections cannot be easily extended due to travel itineraries that involve more than one establishment. Other factors, such as language barriers, can also add complexity to the challenge of completing foreign establishment inspections. FDA has recently announced proposals to address some of the challenges unique to conducting foreign inspections, but it is unclear if these proposals will address all of these challenges.
Human resource and logistical challenges unique to foreign inspections influence the manner in which FDA conducts those inspections. According to FDA officials, the agency does not have a dedicated staff to conduct foreign inspections. Instead FDA relies on investigators and laboratory analysts to volunteer to conduct foreign inspections. Officials explained that the same investigators and laboratory analysts are responsible for conducting both foreign and domestic inspections. These staff members must meet certain criteria in terms of their experience and training in order to conduct inspections of foreign establishments. For example, they are required to take certain training courses and must have at least 3 years of experience conducting domestic inspections before they are considered qualified to conduct a foreign inspection. FDA reported that in fiscal year 2007 it had approximately 335 employees who were qualified to conduct foreign inspections of drug manufacturing establishments. Approximately 250 of these employees were investigators and 85 were laboratory analysts. FDA officials told us that it is difficult to recruit investigators and laboratory analysts to voluntarily travel to certain countries and FDA does not mandate that they do so. However, officials noted that the agency provides various incentives to recruit employees for foreign inspection assignments. For example, employees receive a $300 bonus for each 3-week foreign inspection trip completed, when their inspection reports are submitted within established time frames. FDA indicated that if the agency could not find an individual to volunteer for a foreign inspection trip, it would mandate that travel. However, FDA has not typically sent investigators and laboratory analysts to countries for which the Department of State has issued a travel warning. We found that 49 foreign establishments registered as manufacturers of drugs for the U.S. 
market were located in 10 countries that had travel warnings posted as of October 2007. However, FDA officials told us that they have conducted inspections in countries with travel warnings. They also provided us with one example in which an establishment in a country with a travel warning hired security through the Department of State to protect the inspection team. FDA also faces several logistical challenges in conducting inspections of foreign drug manufacturing establishments. FDA guidance states that inspections of foreign establishments are to be approached in the same manner as domestic inspections. However, the guidance notes that logistics pose a significant challenge to the inspection team abroad. For example, FDA is unable to conduct unannounced inspections of foreign drug establishments, as it does with domestic establishments. FDA policy states that the agency, with few exceptions, initiates inspections of establishments without prior notification to the specific establishment or its management so that the inspection team can observe the establishment under conditions that represent normal day-to-day activities. However, prior notification is routinely provided to foreign establishments. FDA officials noted that the time and expense associated with foreign travel require them to ensure that managers of the foreign establishments are available and that the production line being inspected is operational during the inspection. In addition, FDA often needs the permission of the foreign government prior to the inspection. FDA officials explained that in some cases investigators and laboratory analysts may need to obtain a visa or letters of invitation to enter the country in which the establishment is located. 
Furthermore, FDA does not have the same flexibility to extend the length of foreign inspection trips if problems are encountered as it does with domestic inspections because of the need to maintain the inspection schedule, which FDA officials told us typically involves inspections of multiple establishments in the same country. In our review of FDA inspection reports, we identified instances in which FDA was unable to fully complete inspections of foreign establishments in the allotted time. For example, in one instance, FDA staff could not finish an inspection because of a commitment to travel to another city to inspect another establishment; an unexpected cancellation during that same trip allowed the staff to return to the establishment at a later date to complete the inspection. FDA officials also told us that language barriers can make foreign inspections more difficult to complete than domestic inspections. The agency does not generally provide translators in foreign countries, nor does it require that foreign establishments provide independent interpreters. Instead, FDA staff may have to rely on an English-speaking employee of the foreign establishment being inspected, who may not be a translator by training. In our review of FDA inspection reports, we identified instances in which the translation support provided by an establishment created challenges. For example, an FDA investigator noted that during one inspection it was difficult to get an interpreter provided by the establishment to translate employee statements verbatim. FDA officials told us that while the presence of a translator is helpful, it is not necessary. They also pointed out that for inspections related to the review of a drug application, the establishment is required to submit its documentation in English.
FDA has recently announced proposals to address some of the challenges unique to conducting foreign inspections, but the extent to which these proposals will improve FDA’s program is unclear. FDA is exploring the creation of a cadre of investigators who would be dedicated to conducting foreign inspections. FDA officials indicated that the agency plans to begin a pilot of the foreign cadre in early fiscal year 2009. As of July 2008, FDA had not yet begun recruiting investigators to participate in the foreign cadre, but officials expected the pilot group to consist of 15 investigators specializing in the inspection of drug establishments. An FDA official told us, however, that the agency may recruit investigators specializing in other FDA-regulated products, such as food or medical devices, if it is unable to recruit 15 drug investigators. The official also stated that the foreign cadre will be composed of investigators who have experience conducting foreign inspections. FDA has indicated that it would take approximately 4 years before a newly hired investigator would be able to complete independent inspections of foreign drug manufacturing establishments. According to FDA, the full size of the foreign cadre will be determined in fiscal year 2010, taking lessons learned from the fiscal year 2009 pilot and resources into consideration. FDA also recently announced plans to establish a permanent foreign presence overseas, although little information about these plans is available. Through an initiative known as “Beyond our Borders,” FDA intends to establish foreign offices to improve cooperation and information exchange with foreign regulatory bodies, improve procedures for expanded inspections, allow it to inspect facilities quickly in an emergency, and facilitate work with private and government agencies to assure standards for quality.
FDA’s proposed foreign offices are intended to expand the agency’s capacity for regulating, among other things, drugs, medical devices, and food. The extent to which the activities conducted by foreign offices are relevant to FDA’s foreign drug inspection program is uncertain. Initially, FDA plans to establish a foreign office in China with three locations—Beijing, Shanghai, and Guangzhou—composed of a total of eight FDA employees and five Chinese nationals. The Beijing office, which the agency expects will be partially staffed by the end of 2008, will be responsible for coordination between FDA and the Chinese regulatory agencies. FDA staff located in Shanghai and Guangzhou, who the agency announced it will hire in 2009, will be focused on conducting inspections and working with Chinese inspectors to provide training as necessary. FDA has noted that the Chinese nationals will primarily provide support to FDA staff including translation and interpretation. The agency also plans to begin staffing offices in Central America, Europe, and India by the end of 2008 and in the Middle East in 2009. While the establishment of both a foreign inspection cadre and offices overseas has the potential for improving FDA’s oversight of foreign establishments and providing the agency with better data on foreign establishments, it is too early to tell whether these steps will be effective or will increase the number of foreign drug inspections. Agreements with foreign governments, such as one recently reached with China’s State Food and Drug Administration as part of Beyond our Borders, may help the agency address certain logistical issues unique to conducting inspections of foreign establishments. We have noted that one challenge facing FDA involved the need for its staff to obtain a visa or letter of invitation to enter a foreign country to conduct an inspection. 
However, FDA officials told us that the agency’s agreement with China recently helped FDA expedite this process when it learned of the adverse events associated with a Chinese heparin manufacturing establishment. According to these officials, the agreement with China greatly facilitated FDA’s inspection of this establishment by helping the agency send investigators much more quickly than was previously possible. Americans depend on FDA to ensure the safety and effectiveness of drugs marketed in the United States. More than 10 years ago we reported that FDA needed to make improvements in its foreign drug inspection program. Our current work indicates that flaws we identified at that time persist. The recent incident involving contaminated heparin sodium also underscores the need for FDA to obtain more information about foreign drug establishments, conduct more inspections overseas, and improve its overall management of this critical program. FDA recently announced initiatives that represent important steps for the agency and, if fully implemented, could address some of the concerns we identified in 1998 and reiterated in recent testimonies. However, given the growth in foreign drug manufacturing for the U.S. market and the large gaps in FDA’s foreign drug inspection program, significant challenges—such as improving its data systems and increasing the rate of inspection—remain. FDA’s oversight of its foreign inspection program is hampered by inaccurate and inconsistent data on foreign establishments. An important component of selecting establishments for inspection is an accurate list of establishments subject to inspection, which currently is not readily available to the agency. To reduce the creation of duplicate counts in its import database, FDA supports the establishment of a unique governmentwide identifier for foreign establishments. 
Such an identifier has the potential to improve the accuracy of the data that FDA maintains on foreign drug manufacturing establishments, and FDA’s continued exploration of this option is an important step toward improving the accuracy of its data. However, the establishment and utilization of a unique governmentwide identifier would be dependent on the actions of multiple agencies and would not provide an immediate solution to correcting the inaccuracies in FDA’s databases. In addition, the agency’s plan to institute electronic registration may provide FDA with a more efficient way to maintain information on each establishment, but on its own it is unlikely to prevent erroneous registrations or to provide an accurate count of establishments subject to inspection. Enforcing the requirement that establishments update their registration annually or, as planned, at 6-month intervals is an important step toward keeping this database up to date. However, it is also important that FDA verify the information provided by establishments at the time of registration to ensure that establishments are appropriately registered. In addition, inconsistencies in databases that FDA uses to track inspections of foreign drug manufacturing establishments provide it with unreliable data on those establishments for which it identified serious manufacturing deficiencies. As a result, the different FDA staff responsible for oversight of these foreign establishments may not have ready access to accurate information on their compliance history when carrying out regulatory responsibilities. Conducting additional surveillance inspections of foreign establishments manufacturing drugs currently marketed in the United States is vital, but FDA’s selection of foreign establishments for inspection has instead been driven by the need to inspect establishments named in an application for a new drug.
While these preapproval inspections are an important component of FDA oversight, without additional surveillance inspections FDA has little opportunity to monitor the ongoing compliance of establishments manufacturing drugs currently marketed in the United States. In addition, FDA has not utilized its risk-based process to select foreign establishments for inspection to the extent it has for selecting domestic establishments. However, both FDA’s inspection classifications and issuance of warning letters indicate that deficiencies, including serious GMP deficiencies, are found in foreign establishments at least as often as in domestic ones. Therefore, it is important that FDA inspect foreign and domestic establishments with similar characteristics at comparable frequencies. A reassessment of FDA’s inspection priorities could help the agency to ensure that it is frequently inspecting those establishments, foreign or domestic, that pose the greatest potential risk to public health should they experience a manufacturing defect. Although foreign establishments have been responsive to FDA warning letters, the agency’s subsequent inspections have often identified additional deficiencies. This points to the need for FDA to promptly inspect establishments with a history of serious deficiencies so problems do not go undetected for extended periods. FDA’s plans to establish overseas offices and a cadre of investigators dedicated to foreign inspections are promising and have the potential to address many of the challenges unique to conducting foreign inspections. However, it is too early to tell whether these steps will be effective in improving the agency’s foreign drug inspection program. To address weaknesses in FDA’s oversight of foreign establishments manufacturing drugs for the U.S. market, we recommend that the Commissioner of FDA take the following five actions: Enforce the requirement that establishments manufacturing drugs for the U.S. 
market update their registration annually; establish mechanisms for verifying information provided by establishments at the time of registration; ensure that information on the classification of inspections with serious deficiencies is accurate in all FDA databases; conduct more inspections to ensure that foreign establishments manufacturing drugs currently marketed in the United States are inspected at a frequency comparable to that of domestic establishments with similar characteristics; and conduct timely inspections of foreign establishments that have received warning letters to determine continued compliance. HHS reviewed a draft of this report and provided comments, which are reprinted in appendix II. HHS also provided technical comments, which we incorporated as appropriate. HHS commented on one of our recommendations and agreed that FDA should conduct more inspections of foreign establishments. It did not comment on the other four recommendations we made. HHS also stated that our report raises some important issues regarding FDA’s foreign drug inspection program and noted that FDA has made efforts to improve this program. HHS agreed that additional inspections are needed to strengthen its foreign drug inspection program. The agency did not provide a specific plan or time frame for conducting additional foreign inspections. HHS noted that these inspections represent only one component of its overall strategy to enhance oversight of imported drugs. HHS also said that conducting foreign inspections based on the same criteria as domestic inspections is problematic because of challenges associated with foreign inspections. As we noted in our draft report, we recognize that inspections of foreign establishments pose unique challenges to FDA. Nevertheless, foreign and domestic establishments with characteristics that pose similar potential risks to public health need to be inspected at comparable frequencies.
As we noted, FDA finds serious GMP deficiencies in foreign establishments at least as often as in domestic ones. Therefore, we believe that it is important for the agency to use its resources, in coordination with its other initiatives, to prioritize for inspection those establishments, whether they are located in the United States or a foreign country, that have the greatest potential to negatively impact public health. HHS also elaborated on some of the initiatives to improve FDA’s foreign drug inspection program that were discussed in our report—such as initiatives to improve FDA databases, establish foreign offices, and collaborate with foreign governments. In particular, HHS noted that as FDA implements electronic registration, it also plans to require establishments to update their registration at 6-month intervals, which is more frequent than is currently required. We have revised our report to reflect this proposed change. While requiring establishments to update their registration more often could enhance the accuracy of FDA’s registration information, we remain concerned about the agency’s enforcement of this provision. There is already a requirement for establishments to update this information annually, but FDA has not enforced it. FDA’s proposal to direct establishments to update their registration information at more frequent intervals will only be meaningful if the agency takes steps to actively enforce this requirement. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies of this report to the Commissioner of FDA and appropriate congressional committees. We will also make copies available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. 
If you or your staff have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. To address our reporting objectives, we interviewed officials from several components of the Food and Drug Administration (FDA), including the Center for Drug Evaluation and Research (CDER) and the Office of Regulatory Affairs (ORA). We also reviewed pertinent statutes and regulations as well as agency documents that provide guidance on conducting inspections and provide the basis for FDA’s assessment of an establishment’s compliance with current good manufacturing practice regulations (GMP). These documents included FDA’s Compliance Program Guidance Manuals; Guide to Inspection of Foreign Pharmaceutical Manufacturers; Investigations Operations Manual 2008; Regulatory Procedures Manual, March 2008; and Field Management Directives. To obtain perspectives from relevant stakeholders, we also interviewed officials from the Generic Pharmaceutical Association, Pharmaceutical Research and Manufacturers of America, and Synthetic Organic Chemical Manufacturers Association. To examine the extent to which FDA has accurate data on the number of foreign manufacturing establishments subject to inspection, we obtained information from FDA databases on establishments whose drugs have been imported into the United States. Specifically, we obtained data from CDER’s Drug Registration and Listing System (DRLS) and ORA’s Operational and Administrative System for Import Support (OASIS). From DRLS, we obtained counts of establishments registered with FDA in fiscal year 2007 to market drugs in the United States.
We assessed the reliability of these data by (1) reviewing existing information about the data and the databases that produced them and (2) interviewing agency officials knowledgeable about the data. We found that DRLS was reliable for our purposes, to the extent that it accurately reflects information provided by foreign establishments that register to market drugs in the United States. However, we determined that these data do not necessarily reflect all foreign establishments whose drugs are imported into the United States. From OASIS, we obtained counts of establishments that offered drugs for import into the United States in fiscal year 2007. We also obtained fiscal year 2007 data from OASIS to determine the types of drugs manufactured in China and offered for import into the United States. We assessed the reliability of these data by (1) reviewing existing information about the data and the databases that produced them, (2) interviewing agency officials knowledgeable about the data, and (3) performing electronic testing of data elements. We found that while OASIS is likely to overestimate the number of foreign establishments involved in the manufacture of those drugs because of uncorrected errors in the data, it provides sufficiently reliable information about the types of drugs offered for import into the United States. Therefore, we present information from both DRLS and OASIS to illustrate the variability in information that FDA’s databases provide to agency officials on this topic. This represents the best information available and is what FDA relies on to manage its foreign drug inspection activities. We examined FDA’s plans to improve these and other databases. We also obtained information from the Center for Devices and Radiological Health to learn about changes to one of its databases that address problems similar to CDER’s problems with DRLS. 
To examine the frequency of foreign inspections and factors influencing the selection of such establishments for inspection, we obtained data on foreign and domestic inspections from ORA’s Field Accomplishments and Compliance Tracking System (FACTS). Our analysis includes all foreign and domestic inspections that were identified in FACTS as being either related to the drug application approval process or GMP. Our November 2007 testimony included the number of inspections from FACTS as of September 26, 2007. Therefore, we obtained FACTS data that contained information on fiscal year 2007 inspections conducted or entered into this database since September 26, 2007, to update the data presented in our November 2007 testimony. We assessed the reliability of these data by (1) reviewing existing information about the data and the databases that produced them, (2) interviewing agency officials knowledgeable about the data, and (3) performing electronic testing of data elements. We found these data from the FACTS database reliable for our purposes. In addition, we examined methods used by FDA to help it select foreign and domestic establishments for inspection, including its risk-based site selection process. To examine FDA’s response to serious deficiencies identified during inspections of foreign manufacturing establishments and FDA’s monitoring of establishments’ corrective actions and continued compliance, we examined data in two sources, FACTS and CDER’s Office of Compliance Foreign Inspection Tracking System, which each contain information on how the agency classified establishments’ compliance with agency requirements. We assessed the reliability of these data by interviewing agency officials knowledgeable about the data and performing electronic testing to compare the data from each of these databases. We found that these databases sometimes presented inconsistent information about the final classification of foreign inspections. 
Therefore, we present data from these databases on inspection classification to illustrate the variability in information that FDA’s databases provide to agency officials on this topic. We also reviewed case files provided by FDA that relate to inspections of foreign establishments conducted from fiscal years 2002 through 2007, during which FDA identified serious deficiencies and subsequently issued warning letters. The case files contained information about these establishments, their inspections, and their correspondence with FDA. To examine issues unique to conducting foreign inspections, we reviewed FDA practices and policies related to the conduct of foreign inspections and interviewed FDA officials about these topics. We also obtained information about recent or proposed FDA initiatives that may have the potential to improve the agency’s foreign drug inspection programs. We conducted the work for this report from September 2007 through September 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Marcia Crosse, (202) 512-7114, [email protected]. In addition to the contact named above, Geraldine Redican-Bigott, Assistant Director; Katherine Clark; Andrew Fitch; William Hadley; Cathleen Hamann; Julian Klazkin; Daniel Ries; and Monique B. Williams made key contributions to this report. Medical Devices: FDA Faces Challenges in Conducting Inspections of Foreign Manufacturing Establishments. GAO-08-780T. Washington, D.C.: May 14, 2008. Drug Safety: Preliminary Findings Suggest Recent FDA Initiatives Have Potential, but Do Not Fully Address Weaknesses in Its Foreign Drug Inspection Program. GAO-08-701T. 
Washington, D.C.: April 22, 2008. Medical Devices: Challenges for FDA in Conducting Manufacturer Inspections. GAO-08-428T. Washington, D.C.: January 29, 2008. Drug Safety: Preliminary Findings Suggest Weaknesses in FDA’s Program for Inspecting Foreign Drug Manufacturers. GAO-08-224T. Washington, D.C.: November 1, 2007. Food and Drug Administration: Improvements Needed in the Foreign Drug Inspection Program. GAO/HEHS-98-21. Washington, D.C.: March 17, 1998.

The Food and Drug Administration (FDA), an agency within the Department of Health and Human Services (HHS), oversees the safety and effectiveness of human drugs marketed in the United States, including those manufactured in foreign establishments. FDA inspects foreign establishments in order to ensure that the quality of drugs is not jeopardized by poor manufacturing processes. This report examines (1) the extent to which FDA has accurate data on the number of foreign establishments subject to inspection, (2) the frequency of foreign inspections, and (3) oversight by FDA to ensure that foreign establishments correct serious problems identified during inspections. GAO analyzed information from FDA databases, reviewed inspection reports which identified serious deficiencies, and interviewed FDA officials. FDA databases contain inaccurate information on foreign establishments subject to inspection. FDA uses information from a database of establishments registered to market drugs in the United States and a database of establishments that shipped drugs to the United States to compile a list of establishments subject to inspection, but these databases contain divergent estimates--about 3,000 and 6,800, respectively. FDA's registration database contains information about establishments not subject to FDA inspection. Although annual reregistration is required, FDA does not deactivate in its database establishments that do not fulfill this requirement.
The agency also does not routinely verify that a registered establishment manufactures a drug for the U.S. market. The accuracy of this information is important in FDA's identification of foreign establishments subject to inspection. FDA inspects relatively few foreign establishments each year to assess the manufacturing of drugs currently marketed in the United States. FDA inspected 1,479 foreign drug manufacturing establishments from fiscal years 2002 through 2007. Because FDA does not know the number of establishments subject to inspection, the percentage of those inspected cannot be calculated with certainty. However, using a list FDA developed to prioritize foreign establishments for inspection in fiscal year 2007, GAO estimated that FDA may inspect about 8 percent of foreign establishments in a given year. At this rate, it would take the agency more than 13 years to inspect these establishments once. In contrast, FDA estimates that it inspects domestic establishments about once every 2.7 years. Unlike domestic establishments, foreign establishments were generally only inspected if they were named in an application for a new drug. While FDA made progress in fiscal year 2007 in conducting more foreign inspections, GAO estimated it still inspected less than 11 percent of such establishments. As FDA plans additional inspections, it is important that it ensure that foreign and domestic establishments with similar characteristics are inspected at a similar frequency. FDA's identification of serious deficiencies has led foreign establishments to take corrective actions, but inspections to determine continued compliance are not always timely. FDA identified deficiencies during most foreign inspections, but determining how the agency classified the results of a specific inspection is hindered by inconsistencies in its databases, particularly on the classification of inspections with serious deficiencies.
From fiscal years 2002 through 2007, FDA issued 15 warning letters to foreign establishments at which it identified serious deficiencies. FDA generally determined the adequacy of actions taken in response to these letters by reviewing information provided by the establishments. FDA's subsequent inspections to determine establishments' continued compliance were not always timely. Of establishments named in the 15 warning letters, FDA subsequently inspected 4 establishments 2 to 5 years later, generally because these establishments were named in a new drug application. At 3 of these 4 inspections, FDA verified that corrective actions had been taken but identified additional deficiencies. |
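The cycle-length estimates above reduce to simple arithmetic: dividing the number of establishments subject to inspection by the number inspected per year gives the years needed to cover them all once. A minimal sketch; the establishment and inspection counts below are illustrative assumptions, not FDA figures:

```python
def years_per_cycle(establishments: int, inspections_per_year: int) -> float:
    """Years needed to inspect every establishment once at the current pace."""
    return establishments / inspections_per_year

# An annual inspection rate of roughly 8 percent implies a cycle of about
# 1 / 0.08 = 12.5 years; a rate slightly below 8 percent pushes the cycle
# past the 13 years GAO estimated for foreign establishments.
foreign_cycle = years_per_cycle(3250, 247)  # illustrative counts only
print(round(foreign_cycle, 1))              # about 13.2 years

# By comparison, FDA estimates it inspects domestic establishments
# about once every 2.7 years.
assert foreign_cycle > 13
```

Under these assumed counts, the implied foreign inspection cycle is nearly five times the 2.7-year domestic cycle, which is the comparison the report draws.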
DOD defines a UAS as “a system whose components include the necessary equipment, networks, and personnel to control an unmanned aircraft”—that is, an aircraft that does not carry a human operator and is capable of flight under remote control or autonomous programming. DOD classifies its UAS into five groups that are based on attributes of weight and capabilities, including vehicle airspeed and operating altitude. For example, group 1 UAS weigh 20 pounds or less, whereas group 5 UAS weigh more than 1,320 pounds. Servicemembers who operate the larger and more capable UAS, in group 3 or above, are either manned-aircraft pilots or pilots specializing in flying UAS and are to receive 4 or more months of training to prepare them to fly UAS. In contrast, personnel who operate the less capable UAS that are classified in groups 1 and 2 generally operate UAS as an additional duty. Service headquarters officials stated that personnel who operate UAS in group 1 receive about 2 weeks of training and personnel who operate UAS in group 2 receive anywhere from 2 weeks to 3 months of training. Each of the services flies various types of large UAS in groups 3, 4, and 5. The Air Force flies the MQ-1 (Predator), the MQ-9 (Reaper), and the larger RQ-4 (Global Hawk). The Army flies the RQ-7 (Shadow), the MQ-5 (Hunter), and the MQ-1C (Gray Eagle). The Marine Corps flies the RQ-7B (Shadow) and the RQ-21A (Black Jack). Finally, the Navy flies the MQ-4C (Triton) and the MQ-8 (Fire Scout). Each service uses a different term to refer to the UAS pilot position and a different strategy to assign personnel to this position. For example, the Air Force uses the term remotely piloted aircraft (RPA) pilot and assigns officers to this position.
Specifically, the Air Force assigns various types of officers to serve in these positions, including (1) temporarily re-assigned manned-aircraft pilots, (2) manned-aircraft pilots and other Air Force aviation officers who have converted to this career permanently, (3) graduates of manned-aircraft pilot training on their first assignment, and (4) pilots who specialize in flying UAS with limited manned-aircraft experience. The Army uses the term unmanned aircraft system operator and assigns enlisted personnel to this position, who receive no manned-aircraft flight training. See table 1 for a summary of the terms and the staffing strategies each service uses. The services are responsible for providing three types of individual training to UAS pilots: initial qualification, mission, and continuation training. Each of the services is responsible for providing initial qualification training to UAS pilots in two phases. In the first phase, pilots are taught the fundamentals of aviation, and in the second phase pilots learn to fly a particular UAS. Each of the services uses similar but slightly different approaches to train their UAS pilots. Army. The first phase of training consists of an 8-week common core course for all UAS pilots. During this phase, the Army is responsible for teaching its pilots the fundamentals of aerodynamics, flight safety, and navigation. During the second phase of training, the Army is responsible for teaching its UAS pilots to fly one of the Army’s three UAS. This training lasts between 12 and 25 weeks depending on the UAS that is the focus of the course. During this phase, the Army teaches its pilots to launch and recover a UAS, conduct reconnaissance and surveillance, and participate in a field training exercise. In addition, all Army UAS pilots are trained as sensor operators in the aircrew of a UAS. Thus, pilots learn to operate UAS sensors during their initial qualification training. Air Force.
During the first phase of training, UAS pilots who specialize in flying a UAS attend 5 months of training called undergraduate UAS training. This training consists of three courses: first, these pilots learn to fly a small manned aircraft for 39 hours; second, they use a simulator to learn to fly a manned aircraft using instruments; and third, they learn about the fundamentals of flying a UAS in a classroom setting. Air Force UAS pilots whom the Air Force re-assigns from its manned-aircraft pilot ranks do not attend this first phase of training because they received flight training as manned-aircraft pilots. During the second phase of training, all UAS pilots attend a 4-month course at a formal training unit to learn to fly one of the Air Force’s three UAS platforms. Most active duty Air Force pilots attend the formal training unit at Holloman Air Force Base to learn to fly the Air Force’s MQ-1 Predator or MQ-9 Reaper. Marine Corps. During the first phase of training, UAS pilots who specialize in flying a UAS attend 5 months of training with the Air Force called undergraduate UAS training. This training consists of three courses: first, these pilots learn to fly a small manned aircraft for 39 hours; second, they use a simulator to learn to fly a manned aircraft using instruments; and third, they learn about the fundamentals of flying a UAS in a classroom setting. During the second phase of training, Marine Corps UAS pilots attend the Army’s 8-week UAS pilot common course and 10-week UAS pilot training courses at Fort Huachuca to become familiar with flying the RQ-7 Shadow, which the Marine Corps flies. Navy. In January 2015, the Navy began providing a 7- to 8-week UAS initial qualification course in San Diego, California, to its pilots of the MQ-8 Fire Scout, which is a rotary-wing UAS.
The Navy assigns manned-helicopter pilots who receive manned-helicopter training and have served, or are serving, in an assignment in a manned-helicopter squadron prior to attending this course. As of March 2015, the Navy is developing plans for its initial qualification course for its MQ-4C Triton, which is a fixed-wing UAS. The services also provide mission and continuation training to their UAS pilots. Mission qualification training includes all training that takes place once a servicemember reaches an operational unit but before that servicemember is designated as qualified to perform the unit’s missions. Continuation training includes all training that takes place once a servicemember finishes mission qualification training and is designed to maintain and improve UAS piloting skills. A March 2015 Army review showed that pilots in most Army Shadow units did not complete training in their units in fiscal year 2014, which we corroborated through both discussions with pilots in our focus groups and unit responses to our questionnaires. One of the core characteristics of a strategic training and development process calls for agency leaders and managers to consistently demonstrate that they support and value continuous learning. However, the Army’s Training and Doctrine Command conducted a review from January 2015 through March 2015 and found that 61 of the Army’s 65 Shadow units that were not deployed had completed an average of only 150 hours of flight training. Further, the Army assessed that these units were at the lowest levels of unit training proficiency in the Army’s readiness reporting system. Army Training and Doctrine Command officials stated that in January 2015, the Chief of Staff of the Army directed the Army Training and Doctrine Command to evaluate unit training for Army UAS units to determine if training was a factor that caused UAS mishaps in combat. 
These officials stated that in response to the Chief of Staff’s direction they evaluated the total flight hours completed to conduct training by 65 Shadow units that were not deployed, 13 deployed Shadow units, and 2 Shadow units at the UAS initial qualification school at Fort Huachuca. Training and Doctrine Command assessed the level of unit readiness associated with the amount of training these units completed, using the Army’s unit training proficiency system specified in Army Pamphlet 220-1, Defense Readiness Reporting System-Army Procedures. This system includes a four-tiered rating scale ranging from T-1 to T-4. In this system, a T-1 rating indicates the highest level of unit training proficiency, whereas T-3 and T-4 ratings indicate that the unit is untrained on one or more of the mission essential tasks that the unit was designed to perform in an operational environment. Using this system to assess the 65 Shadow units that were not deployed, Training and Doctrine Command found that 1 unit was rated T-1, 3 units were rated T-2, and 61 units were rated T-3 or T-4. In addition, Training and Doctrine Command found that 11 of the 13 deployed units were rated T-1, the other 2 deployed Shadow units were rated T-2, and both of the units at the UAS training school were rated T-1 (see table 2). Army Training and Doctrine Command found that a number of factors led to UAS pilots in Army Shadow units not completing training in their units in fiscal year 2014. For example, the review found that UAS units organized under infantry brigades have a particular challenge completing training because the unit commanders and leadership overseeing these brigades may not be fully aware of the UAS units’ training requirements. In addition, the review found that a number of warrant officers were not qualified and current on the units’ aircraft that they were assigned to oversee. 
The review included recommendations that the Army plans to implement to increase emphasis on training in UAS units, to train unit commanders on UAS training, and to establish a system to report UAS training readiness on periodic unit status reports. However, as of April 2015, the Army had not yet taken actions to implement these recommendations, and Army Training and Doctrine Command officials were unable to provide a time frame for implementing them. Similarly, focus groups we conducted with Army UAS pilots and responses to questionnaires we administered indicated that Army UAS pilots face challenges in completing training in their units. In particular, pilots in all eight of the focus groups we conducted with Army UAS pilots stated that they cannot complete training in their units. For example, a pilot in one of our focus groups stated that during his 3 years as a UAS pilot, he had been regularly tasked to complete non-training-related activities, and as a result he completed a total of only 36 training flight hours even though the requirement is 24 flight hours per year. Further, we administered a questionnaire to various offices within each military service, and five of the six Army UAS units that responded indicated that they faced challenges completing training in their units. For example, one unit respondent stated that Army UAS units rarely have the time to meet their training requirements. A second unit respondent stated that Army UAS units are taxed trying to maintain proper training in units and that pilots have little time to become proficient due to training equipment and resource constraints. A third respondent stated that training in units is very limited due to competing priorities, including being consistently tasked by Army Forces Command to train other units, which prevents the unit from training its own UAS operators. 
In addition, four of the six Army UAS units that responded stated that the Army provides too little funding for the training that takes place in units to help ensure that this training achieves the Army’s goals. Further, focus groups we conducted with Army UAS pilots and some Army officials indicated that leadership of the larger non-aviation units that oversee Army UAS units may not fully understand the training needs of Army UAS pilots. Specifically, pilots in seven of the eight focus groups that we conducted with Army UAS pilots stated that the leadership of the larger non-aviation units that oversee their UAS units does not understand UAS pilot training. Moreover, four of the six units that responded to our questionnaire indicated that leadership of the larger non-aviation units that oversee Army UAS units lacks understanding of UAS unit training needs. For example, a unit official who responded to our questionnaire stated that Army headquarters leadership provides very limited support for UAS continuation training. Another unit official who responded to our questionnaire stated that “unit leadership has a fundamental lack of understanding of our training requirements.” In addition, officials at Army Forces Command and an official who oversees Army UAS assignments at Army Human Resources Command stated that infantry commanders at the battalion and brigade level who oversee UAS units do not understand the aviation training requirements for Army UAS pilots. Further, Army UAS pilots in all of the focus groups we conducted stated that they had difficulty completing UAS pilot training in units because they spend a significant amount of time performing additional duties such as lawn care, janitorial services, and guard duty. 
While the Army review and our analysis show that most Army UAS pilots are not completing training in their units, the high-level interest expressed by the Chief of Staff of the Army and Army Training and Doctrine Command’s review and associated recommendations, if effectively implemented, could help address the Army’s training shortfalls. The Army does not have visibility over the amount of training that pilots in some Army UAS units have completed. Another core characteristic of a strategic training framework highlights the importance of quality data regarding training. However, we found that the Army does not have access to data that would allow it to measure the amount of training that UAS pilots have completed in Army UAS units. The Army’s Unmanned Aircraft System Commander’s Guide and Aircrew Training Manual establishes three readiness levels for Army UAS pilots. Readiness level training begins with the development of individual proficiency at readiness level three and progresses through crew proficiency to collective proficiency at readiness levels two and one. The Army assigns readiness level designations to UAS pilots to identify the training that UAS pilots have completed and the training that they need to complete to progress to the next level of readiness. According to Army Forces Command officials, Army Forces Command identifies the Army UAS units that are ready to deploy, and it needs information about the readiness level of pilots in UAS units to determine if a unit is ready to deploy and perform its mission. These officials stated that they currently review Army unit status reports to determine if a unit is prepared to deploy. These reports provide information on a variety of factors related to a unit’s readiness to perform its mission, including the unit’s materiel, personnel staffing levels, and an assessment of the unit’s training. 
However, officials from Army headquarters, Army Forces Command, and the Army Aviation Center of Excellence stated that these reports do not provide any information on the readiness levels of the UAS pilots in UAS units because the Army does not require these reports to include this information. In addition, the organizational structure of many Army UAS units is an impediment to visibility over the training completed in these units. Specifically, the Army’s RQ-7B Shadow units are organized under larger units. According to Forces Command officials, these larger units oversee multiple smaller units, including UAS units and other units that have different functions, such as intelligence. However, these officials also stated that the readiness information for these UAS units is combined with training information from other, non-UAS units in the unit status reports because unit status reports do not provide details below the level of the larger unit. Officials at Forces Command stated that, using these reports, they have designated units as available for deployment and later learned that a significant portion of the pilots in those units had not completed their readiness level training. Without requiring information on the readiness of pilots in UAS units as part of unit status reports, Army Forces Command will continue to lack visibility over the amount of training that UAS pilots have completed in units. Air Force officials stated that Air Force UAS pilots do not complete the majority of their required continuation training, even though an Air Force memorandum allows pilots to credit operational flights towards meeting training requirements. Another core characteristic of a strategic training framework is that agency leaders and managers consistently demonstrate that they support and value continuous learning. 
However, in December 2014, the commanding general of Air Combat Command wrote in a memo to the Chief of Staff of the Air Force that since 2007, Air Force UAS units have conducted “virtually no continuation training” because the Air Force has continuously surged to support combatant command requirements. Additionally, Air Force officials at a number of locations stated that Air Force UAS pilots rarely conduct continuation training for any of their unit’s missions. These officials included officials at Air Combat Command headquarters, the Vice Wing Commander and multiple squadron commanders at Creech Air Force Base, and the Wing and Operations Group Commanders at Holloman Air Force Base. In addition, a nongeneralizable sample of training records for seven Air Force UAS units that we reviewed showed that, on average, 35 percent of the pilots in these units completed the continuation training for all seven of their required missions in fiscal year 2014. This situation occurred despite an Air Combat Command memorandum that allows pilots to credit flights taken on operational missions towards continuation training requirements, provided that the flights meet certain conditions. This memorandum also requires UAS pilots to conduct specified numbers of training flights associated with each of the missions that MQ-1 Predator and MQ-9 Reaper units perform. We found that 91 percent or more of the pilots in the seven units completed continuation training for one of the seven missions, specifically the intelligence, surveillance, and reconnaissance mission, which involves obtaining information about the activities and resources of an enemy. In contrast, an average of 26 percent of pilots in these seven units completed the continuation training for another of the seven missions, the air interdiction mission, which involves diverting or destroying the enemy’s military potential. 
Air Force officials stated that operational flights do not provide an ideal environment for training because pilots are not able to perform all of the tasks needed for a training flight during operational missions. Moreover, Creech Air Force Base officials stated that UAS pilots at Creech Air Force Base conduct continuation training on less than 2 percent of all the hours that they currently fly. According to Air Force officials, some Air Force UAS pilots have not completed their continuation training because they spend most of their time conducting operational missions due to shortages of UAS pilots and high workloads. In addition, Creech Air Force Base officials stated that UAS pilots perform one to two of their required missions regularly based on operational needs, which also allows them to fulfill training requirements for those missions. However, due to shortages of UAS pilots and high workloads, some pilots do not complete training requirements for their other five to six missions. As of March 2015, the Air Force had staffed the UAS pilot career field at 83 percent of the total number of UAS pilots that the Air Force believes is necessary to sustain current UAS operations and training. Pilots in all seven of the focus groups we conducted with Air Force UAS pilots stated that they could not conduct continuation training because their units were understaffed. In addition, Air Force headquarters officials stated that they think the current number of UAS pilots that the Air Force has approved for its UAS units is not enough to accomplish the workload of UAS units. As a result, workloads for Air Force UAS units are high, and in January 2015, the Secretary of the Air Force stated that on average Air Force UAS pilots fly 6 days in a row and work 13- to 14-hour days. In April 2014, we found that the Air Force had shortages of UAS pilots, and we made multiple recommendations to address these shortages. 
In particular, we found that the Air Force had operated below its optimum crew ratio, which is a metric used to determine the personnel needs of Air Force aviation units, and that the Air Force had not tailored its recruiting and retention strategy to align with the specific needs and challenges of UAS pilots. We made four recommendations related to these findings, including that the Air Force update crew ratios for UAS units to help ensure that it establishes a more accurate understanding of the number of UAS pilots needed in its units, and that the Air Force develop a recruiting and retention strategy tailored to the specific needs and challenges of UAS pilots to help ensure that it can meet and retain the staffing levels required for its mission. The Air Force concurred with these recommendations and has taken some actions but has not yet fully implemented them. Specifically, a headquarters Air Force official stated that, in February 2015, the Air Force completed the first phase of a three-phase personnel requirements study designed to update the UAS unit crew ratio. The headquarters official also stated that Air Force senior leaders are reviewing the results of the first phase of the study and expect to update the UAS unit crew ratio by summer 2015. In addition, in fiscal year 2014, the Air Force began using a new process that provides it with greater flexibility in assigning cadets who are preparing to join the Air Force to various Air Force careers; this process enabled the Air Force to meet its quota for the number of cadets who graduate from Air Force officer schools and agree to serve as UAS pilots. Further, in January 2015, the Air Force more than doubled the Assignment Incentive Pay for UAS pilots who are reaching the end of their 6-year service commitment, to $1,500 a month. 
As noted above, the Air Force continues to face a shortage of UAS pilots, but fully implementing our April 2014 recommendations would better position the Air Force to address these shortages. See additional information on these recommendations and the Air Force’s actions to date in appendix I. The Army has taken action to increase the number of UAS pilot instructors, but in doing so, it is using less experienced instructors, which could affect the quality of the training provided to UAS pilots. The Army has significantly increased the number of UAS units and UAS pilots in recent years, and as a result many of its UAS pilots lack the experience and proficiency needed to be an instructor, according to officials from the Army Aviation Center of Excellence. To address this shortage and accommodate the need for more instructors, the Army began to waive course prerequisites for the UAS instructor course, enabling these less experienced and less proficient UAS pilots to become instructors, according to officials from the Army Aviation Center of Excellence. Army Aviation Center of Excellence officials also stated that the instructor course prerequisites are important because they help ensure that the UAS pilots the Army trains to become instructors are the most experienced and most proficient pilots and can successfully train other UAS pilots. One of the officials also stated that the Army would prefer not to grant waivers to any UAS pilot attending the course so that the pilots who become instructors would be experienced and able to share their experiences with the pilots they train. In contrast, pilots with less experience may not be able to draw on as wide a range of experiences when instructing UAS pilots and thus may not be as prepared to successfully train other UAS pilots to perform at the highest levels of proficiency. 
The instructor course prerequisites include a minimum rank, a minimum number of hours flying a UAS, completion of readiness level training, and recent completion of certain flying tasks, known as currency. For example, the Army course prerequisites specify that pilots attending the course to become an instructor for the MQ-1C Gray Eagle should (1) hold the enlisted rank of sergeant (E-5), (2) have flown a UAS for a minimum of 200 hours, (3) be designated at readiness level one, and (4) be current in their experience, specifically by having flown a UAS within the last 60 days. According to an official from the Aviation Center of Excellence, a pilot’s battalion commander and the commander of the UAS school are responsible for approving requests to waive these course prerequisites. Following their approval, a pilot’s unit commander assesses the pilot’s potential to successfully complete the instructor training and fulfill the duties required of instructors. The Army waived one or more of the instructor course prerequisites for about 40 percent of the UAS pilots attending the course from the beginning of fiscal year 2013 through February 2015. Specifically, the Army waived one or more of these course prerequisites for 38 percent of the pilots who attended the course in fiscal year 2013, 48 percent of the pilots who attended the course in fiscal year 2014, and 23 percent of the pilots who attended the course from October 2014 through February 2015 (see table 3). The Army has taken some steps to mitigate the potential risks of using less proficient instructors. Specifically, Army Aviation Center of Excellence officials stated that in fiscal year 2015, the Army stopped waiving the instructor course prerequisites that UAS pilots be designated at readiness level one and that pilots be current in their experience flying a UAS. 
In addition, prior to fiscal year 2015, the Army had provided remedial training to pilots who had not met these two course prerequisites. The Army’s action to stop allowing waivers for these two course prerequisites helps to ensure that the pilots it allows to become instructors meet the minimum UAS flying proficiency course prerequisites. However, the Army has not fully addressed the potential risks of using less proficient and less experienced instructors. Although the Army reduced the number of waivers granted so far in fiscal year 2015 by no longer waiving the course prerequisites related to minimum proficiency, the Army can continue to grant waivers for the course prerequisites related to experience, including that UAS pilots have a minimum number of flying hours in a UAS and hold the minimum enlisted rank of sergeant. In addition, the Army has not provided additional preparation to address the gap in experience for the instructors who have completed the instructor course, nor does it have plans to address this gap for the pilots who will attend the course in the future. Within a strategic training and development process, one core practice calls for agencies to provide appropriate resources for their training programs and for agency leaders to consistently demonstrate that they support and value continuous learning. Army officials have stated that experienced instructors are central to providing successful continuous learning to the Army’s UAS pilots and in that regard are important resources in training programs. However, the Army faces the risk that, by training with less experienced instructors, Army UAS pilots may not be receiving the highest caliber of training needed to prepare them to successfully perform UAS missions in the future. 
In addition, though the Army expects to face shortages of experienced and proficient UAS pilots through fiscal year 2019, it has not fully addressed the potential risks of training with less experienced pilots, such as by providing additional preparation for current and future instructors who do not meet one or more course prerequisites related to experience to enhance their ability to successfully provide training. The Air Force has taken action to address shortages of instructors at its UAS formal training unit at Holloman Air Force Base. The second major phase of the Air Force’s initial qualification training occurs in the formal training unit, and all of the Air Force’s active duty UAS pilots are to attend this training to learn to operate the UAS that they will fly in their operational units. As we noted earlier, a core characteristic of a strategic training framework is that agencies should provide appropriate resources for their training programs. However, we found that as of March 2015, the Air Force had staffed its UAS training squadrons at Holloman Air Force Base at 63 percent of their planned staffing levels. In December 2014, the commanding general of Air Combat Command stated that the Air Force has not fully staffed the formal training unit due to shortages of UAS pilots across the Air Force and that as a result “pilot production has been decimated.” An Air Force headquarters official stated that shortages of instructors at the formal training unit are a key reason that the Air Force has shortages of UAS pilots across the Air Force. This official also stated that the Air Force is taking action to address these shortages. Specifically, the Air Force is studying the personnel requirements for the formal training unit and expects to report the results of this study by spring 2016. 
The Air Force official also stated that the results of that study will likely show that the Air Force UAS pilot formal training unit should have additional instructor positions. Although the Air Force formal training unit faces a shortage of UAS instructor pilots, fully implementing our April 2014 recommendations should better position the Air Force to address these shortages. The Office of the Deputy Assistant Secretary of Defense (Readiness) and the military services coordinate on UAS pilot training in some distinct areas; however, there are potential benefits from enhanced coordination on training UAS pilots. According to key practices, federal agencies can enhance and sustain their collaborative efforts by defining a common outcome and establishing joint strategies. Collaborating agencies should also assess their relative strengths and limitations to identify opportunities to leverage each other’s resources. Further, agencies should establish compatible standards, policies, procedures, and data systems to enable a cohesive working relationship. During our review, in January 2015, the Acting Deputy Assistant Secretary of Defense (Readiness) stated that the services should coordinate and collaborate with one another regarding their efforts to train UAS pilots. He stated that in coordinating with one another the services should share best practices to help the department as a whole train its UAS pilots more effectively and efficiently. Further, the Acting Deputy Assistant Secretary stated that because the services fly similar UAS, they may be able to train their pilots more effectively and efficiently by taking advantage of the lessons learned that they may have acquired as they have trained their pilots separately. He cited similarities between the Air Force’s Predator and the Army’s Gray Eagle and acknowledged similarities between the Air Force’s Global Hawk and the Navy’s Triton (see fig. 1). 
In this review of UAS pilot training, we found that the Office of the Deputy Assistant Secretary of Defense (Readiness) and the services have taken some actions to coordinate on UAS training, and these actions are consistent with the key practices that can enhance and sustain federal agency coordination. For example, the Air Force and the Army train all Marine Corps UAS pilots, which is consistent with the practice of identifying and addressing needs by leveraging resources to initiate or sustain a collaborative effort. In addition, the Air Force and the Army have published UAS strategies that outline their services’ plans to develop, organize, and incorporate the use of UAS into their missions, which is consistent with the practice of reinforcing agency accountability for collaborative efforts with plans and reports. However, the Air Force and Army strategies do not address if or how the services will coordinate with one another on UAS pilot training. Further, we also found that the actions that the Office of the Deputy Assistant Secretary of Defense (Readiness) and the services had taken were not fully consistent with these key practices. See table 4 for a description of these key practices, a description of DOD actions, and our assessment. In addition, officials from three of the four military service headquarters offices who responded to our questionnaire expressed limited support for further coordination with the other services on UAS pilot training. For example, headquarters officials from the Army and the Air Force stated that they did not anticipate any additional benefit from coordinating with the other services on UAS pilot training. Further, headquarters officials from the Army and the Navy stated that they did not foresee any other benefits from coordinating with the other services on UAS pilot training because their services fly different UAS with different missions. 
Moreover, DOD has not yet issued a UAS training strategy that addresses if and how the services should coordinate with one another to share information on training UAS pilots. In 2010, we found that DOD had commenced initiatives to address training challenges but had not developed a results-oriented strategy to prioritize and synchronize these efforts. We recommended that DOD establish a UAS training strategy to comprehensively resolve challenges that affect the ability of the Air Force and the Army to train personnel for UAS operations, and DOD concurred with our recommendation. The Office of the Deputy Assistant Secretary of Defense (Readiness) engaged the RAND Corporation to draft a UAS training strategy and provided RAND with guidelines about the content and purpose of the strategy. However, these guidelines do not discuss if or how the services should coordinate on UAS pilot training. In September 2014, RAND provided a draft of a UAS training strategy to the Office of the Deputy Assistant Secretary of Defense (Readiness), but the draft also did not discuss coordination on UAS pilot training. As of April 2015, the draft training strategy had not been updated to include this information, and officials from the Office of the Deputy Assistant Secretary of Defense (Readiness) were unable to provide a time frame for completing the strategy. Until DOD issues a UAS training strategy that addresses if and how the services should coordinate with one another to share information on training UAS pilots, the services may miss opportunities to improve the effectiveness and efficiency of this training. In response to our questionnaire, 6 of the 11 units stated that potential benefits may exist from coordinating with other services on UAS pilot training. 
For example, 1 Army UAS unit stated that coordinating training with other services could help shorten the amount of time its personnel spend acclimating to other services once deployed and would allow for an easier transition to working together during missions. Additionally, another Army UAS unit stated that it was unable to train because of a poorly written certificate of authorization, which is a document that the Federal Aviation Administration must approve before the services can fly their UAS in the National Airspace System. Further, the unit stated that it could have avoided a temporary halt in training and benefited from reaching out to the Air Force for guidance on this process rather than spending time developing its own approach. Without taking steps to address coordination among the services, the Office of the Deputy Assistant Secretary of Defense (Readiness) and the services may waste scarce funds on training UAS pilots and may limit the efficiency and effectiveness of these training efforts. DOD’s UAS portfolio has grown over the years to rival its traditional manned systems. In its Unmanned Systems Integrated Roadmap FY2013-2038 report, DOD highlighted the importance of developing a comprehensive UAS training strategy to guide the myriad DOD UAS training efforts across all systems and to help ensure effective and efficient training of UAS pilots. However, without amending unit status reports to require information on the readiness level of pilots in UAS units, Army Forces Command will continue to lack visibility over the amount of training that UAS pilots have completed in units and will not be able to ensure that all Army UAS units being considered for deployment have completed their required training. 
In addition, without taking additional steps to mitigate the potential risks of using less experienced instructors, the Army may be unable to ensure that the training these instructors provide will result in highly skilled future UAS pilots. Finally, it is important that DOD identify ways to achieve its missions more efficiently and effectively. It is encouraging that the Office of the Deputy Assistant Secretary of Defense (Readiness) and the services coordinate on UAS pilot training in some areas, such as the Air Force and the Army training all Marine Corps UAS pilots and publishing UAS strategies. However, without addressing how the services can enhance their coordination efforts on training UAS pilots in DOD’s forthcoming UAS training strategy, the services may not be able to achieve additional benefits to the efficiency and effectiveness of UAS pilot training across the department. We are making three recommendations to the Secretary of Defense: To provide leaders responsible for deployment decisions with greater visibility over the extent to which Army UAS units have completed required training, we recommend that the Secretary of Defense direct the Secretary of the Army to require unit status reports to include information on the readiness levels of UAS pilots in UAS units. To help ensure that Army UAS pilots receive the highest caliber of training to prepare them to successfully accomplish UAS missions, we recommend that the Secretary of Defense direct the Secretary of the Army to take additional steps to mitigate the potential risks posed by its waiver of course prerequisites for less experienced UAS pilots attending the course to become instructors, such as by providing additional preparation for current and future instructors who do not meet one or more course prerequisites to enhance their ability to successfully provide training. 
To increase opportunities to improve the effectiveness and efficiency of UAS pilot training across DOD, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Personnel and Readiness to address how the services should coordinate with one another in the strategy on UAS pilot training that the Office of the Under Secretary of Defense for Personnel and Readiness is currently drafting. We provided a draft of this report to DOD for comment. In written comments, DOD concurred with each of our three recommendations. DOD stated that it will review the implementation status of each of the recommendations within six months. DOD’s comments are reprinted in their entirety in appendix III. DOD also provided technical comments that we have incorporated into this report where applicable. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, and the Secretaries of the Air Force, the Army, and the Navy. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3604 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. In April 2014, we found that the Air Force had shortages of pilots of unmanned aircraft systems (UAS). In particular, we found that the Air Force (1) has operated below its optimum crew ratio, which is a metric used to determine the personnel needs for Air Force aviation units; (2) has not developed a minimum crew ratio; (3) has not tailored its recruiting and retention strategy to align with the specific needs and challenges of UAS pilots; and (4) has not considered the viability of using personnel other than officers, such as enlisted personnel or civilians, as UAS pilots. 
We made four recommendations related to these findings. Since we issued our report in April 2014, the Air Force has taken some actions, but has not yet fully implemented the recommendations. In the committee report accompanying a bill for the National Defense Authorization Act for Fiscal Year 2015, the Senate Committee on Armed Services directed that the Air Force report to the committee by September 30, 2014, on its efforts to implement three recommendations from our 2014 report related to staffing levels of Air Force UAS pilots. On September 22, 2014, the Air Force reported on the status of its efforts to implement these recommendations. We focused on three types of individual training that the services provide to pilots of unmanned aerial systems (UAS): initial qualification, mission, and continuation training. In addition, we focused our review on initial qualification instructors and instructors within the units. We focused our review on Army and Air Force UAS pilot training programs in our first two objectives, assessing the extent to which the Army and the Air Force face challenges in ensuring that their UAS pilots complete their required training and have a sufficient number of UAS pilot instructors. We focused this part of our review on the Army and the Air Force because these services have significantly more UAS pilots than the Navy and the Marine Corps. However, for our third objective, we assessed the coordination that occurs among the Office of the Deputy Assistant Secretary of Defense (Readiness) and all four of the military services because we determined that there may be benefits to collaboration among the Office of the Deputy Assistant Secretary of Defense (Readiness) and all of the services regardless of the maturity and size of their current UAS training programs. 
We assessed the reliability of the data we used to support findings in this report by reviewing documentation of the data and interviewing agency officials knowledgeable about the data and the way they are maintained. Specifically, we assessed the reliability of the Air Force’s data on fiscal year 2014 continuation training flights completed by seven UAS units at Creech Air Force Base to fulfill requirements laid out in the Air Combat Command Ready Aircrew Program Tasking Memorandum; the Army’s fiscal year 2013 to February 2015 data on waivers granted to UAS pilots attending the UAS school to become instructors; and March 2015 data on the Air Force’s UAS pilot staffing levels and staffing levels at the formal training unit. We selected these dates because they are the most recent years for which the data were available. We determined that these data were sufficiently reliable for the purposes of this report, such as the discussion of the percentage of Army UAS pilots that required a waiver to become a UAS instructor by fiscal year; the overall staffing levels of Air Force UAS pilots; the staffing levels of Air Force UAS instructor pilots at the formal training unit; and the completion of continuation training by a nongeneralizable sample of seven UAS units at Creech Air Force Base. To evaluate the extent to which the Army and the Air Force face challenges, if any, in ensuring that their UAS pilots complete their required training, we reviewed documents that outline training requirements for UAS pilots in the Army and the Air Force, including the Army’s UAS Commander’s Guide and Aircrew Training Manual and the Air Force Air Combat Command Ready Aircrew Program Tasking Memorandum. 
We also reviewed reports that we previously issued that address topics related to UAS pilot training including a 2014 report on the personnel challenges that Air Force UAS pilots face and a 2010 report on challenges that the Air Force and the Army faced training personnel for UAS operations. We assessed the services’ UAS pilot training programs using a set of core characteristics that we previously developed in 2004. In 2004, we found that agencies must continue to build their fundamental management capabilities in order to effectively address the nation’s most pressing priorities. To help agencies build their management capabilities, we developed a framework that includes principles and key questions that federal agencies can use to ensure that their training investments are targeted strategically. In developing this framework, we concluded that there is a set of certain core characteristics that constitute a strategic training and development process. These characteristics include leadership commitment and communication; effective resource allocation; and continuous performance improvement. To develop these characteristics in 2004, we consulted government officials and experts in the private sector, academia, and nonprofit organizations; examined laws and regulations related to training and development in the federal government; and reviewed the sizeable body of literature on training and development issues, including previous GAO products on a range of human capital topics. To identify the extent to which the military services applied these principles in their training programs, we developed a questionnaire based on these characteristics and on the services’ UAS training programs. We adapted these core characteristics by modifying the language of some of the criteria that we used in our questionnaire, to more appropriately apply to UAS pilot training. 
We reviewed our adaptation with officials from the Office of the Deputy Assistant Secretary of Defense for Readiness as well as officials from the headquarters of each of the military services. These officials agreed that the framework was relevant to our review and provided feedback on the questions we included in our questionnaire. We distributed the questionnaire to each of the services’ headquarters, training commands, and operational commands. To include diverse UAS unit perspectives, we also randomly selected a nongeneralizable sample of 14 UAS units in each of the services based on factors including aircraft types flown in the UAS unit and geographical location of the unit. We distributed the questionnaire to the commanders of the selected units. We attained an 85 percent response rate for the questionnaires. We analyzed responses we obtained from each of the questionnaires and compared the perspectives and documentation we collected to the GAO criteria. We reviewed a March 2015 Army Training and Doctrine Command review that evaluated continuation training for Army UAS units. The results of this review are not generalizable. We also reviewed continuation training requirements included in the Air Force’s 2014 Ready Aircrew Program Tasking Memorandum. We compared these requirements to fiscal year 2014 training data for all seven of Creech Air Force Base’s MQ-1 Predator and MQ-9 Reaper units that have the same mission requirements outlined in this memorandum. Fiscal year 2014 is the most recent year for which the data were available. The results of these data are not generalizable to other UAS units or fiscal years. 
We also interviewed Air Force officials at Headquarters, Air Combat Command, Air Education and Training Command, as well as the Vice Wing Commander and multiple UAS unit commanders at Creech Air Force Base and the Wing Commander and Operations Group and multiple UAS unit commanders at Holloman Air Force Base to determine Air Force UAS pilots’ training completion rates; the Air Force’s UAS manning levels; and metrics that the Air Force has in place to determine aviation personnel requirements. To determine the extent to which the Army and the Air Force have a sufficient number of qualified UAS pilot instructors, we identified and analyzed criteria included in the Army’s course prerequisite requirements that provide the minimum requirements for rank, the number of hours a pilot has flown, the readiness level of a pilot, and whether that pilot is current, which measures whether the pilot has recently completed certain flying tasks. We compared these course prerequisites to the most recent Army documentation on UAS operators who attended the Army school to become an instructor in fiscal year 2013, fiscal year 2014, and October 2014 to February 2015, to determine the number of instructors who met these course prerequisites. We also interviewed the Director of Training at the Army’s initial qualification school at Fort Huachuca, and officials at Army Headquarters to get their views about whether the Army school and units have adequate numbers of instructors. We also compared Air Force documentation on the actual numbers of Air Force UAS pilots in Air Force UAS assignments to the Air Force’s planned number of positions for UAS pilots. In addition, we compared the actual numbers of Air Force UAS instructor pilots at the formal training unit at Holloman Air Force Base to the Air Force’s planned number of positions at the formal training unit. 
The formal training unit is the organization that provides training for the second major phase of the Air Force’s initial qualification training, and all of the Air Force’s active duty UAS pilots are to attend this training to learn to fly the MQ-1 Predator or MQ-9 Reaper. We also interviewed the Wing Commander and Operations Group and multiple UAS unit commanders at the Air Force’s formal training unit at Holloman Air Force Base to get their views about whether the formal training units have a sufficient number of instructors. We visited UAS units at five bases: Ft. Huachuca, AZ; Ft. Hood, TX; Holloman Air Force Base, NM; Creech Air Force Base, NV; and Marine Corps Air Station Cherry Point, NC. We chose units at these locations to get a perspective on a variety of UAS operations and selected the locations on the basis of several factors including the type and size of UAS flown in the unit; missions of the unit; whether or not the unit is deployed (we did not meet with units that were deployed); number of UAS pilots in the unit; the major command of the unit; and location of the unit. At each installation, we met with unit commanders and other leaders to discuss their views about training UAS pilots. We also conducted 18 focus groups with active-duty UAS pilots at these locations to gain their perspectives on their services’ UAS training efforts. We met with eight Army focus groups, seven Air Force focus groups, and three Marine Corps focus groups for 90 minutes each. To select specific UAS pilots to participate in our focus groups, we worked with officials at each of the installations to develop a diverse group of active-duty UAS pilots. To obtain a variety of perspectives, we selected UAS pilots with various amounts of experience flying UASs and additional duties in their units. To help ensure an open discussion in the groups, we organized them by rank and met with groups of similar rank. We also met with some groups of instructor pilots separately. 
These groups typically consisted of six to nine UAS pilots. We used content analysis to analyze detailed notes from each focus group to identify themes that participants expressed across all or most of the groups. To do this, two GAO analysts analyzed an initial set of the records and individually developed themes. Then, they convened to discuss and agree on a set of themes to perform the coding. The analysts then analyzed our records and made coding decisions based on these themes. Following the initial analysis by one analyst, a second analyst reviewed all of the coding decisions that the first analyst made for each of the records. Where there were discrepancies, the analysts reviewed one another’s coding and rationale for their coding decisions and reached a consensus on which codes should be used. The results of our analyses of the opinions of UAS pilots we obtained during our focus groups are not generalizable to the populations of all UAS pilots in the Army, Air Force, and Marine Corps. To evaluate the extent to which DOD and the military services coordinate and collaborate with one another to train their UAS pilots, we used criteria for enhancing and sustaining collaboration among federal agencies that we previously developed. We assessed the department’s actions using seven of the eight key practices from our prior report. We excluded one key practice related to reinforcing individual accountability for collaborative efforts through performance management systems. Evaluating this practice involves assessing the extent to which agencies set expectations for senior executives for collaboration within and across organizational boundaries in their individual performance plans. We did not include this key practice in our review because many of the officials who oversee UAS pilot training in the services are military members and the military does not establish individual performance plans for its servicemembers. 
We reviewed our adaptation with officials from the Office of the Deputy Assistant Secretary of Defense for Readiness. These officials agreed that these practices were relevant to our review. The seven key practices that we assessed in our review were: (1) defining and articulating a common outcome; (2) establishing mutually reinforcing or joint strategies; (3) identifying and addressing needs by leveraging resources; (4) agreeing on roles and responsibilities; (5) establishing compatible policies, procedures, and other means to operate across agency boundaries; (6) developing mechanisms to monitor, evaluate, and report on results; and (7) reinforcing agency accountability for collaborative efforts through agency plans and reports. To identify the extent to which the DOD organizations applied these practices, we analyzed documentation related to coordination on UAS pilot training that we obtained from a variety of DOD offices. For example, we analyzed guidelines for a UAS training strategy that the Office of the Deputy Assistant Secretary of Defense for Readiness provided to the RAND Corporation; a draft UAS training strategy developed by the RAND Corporation; UAS strategies that the Army and Air Force issued; and documentation that shows that the Air Force and Army train all Marine Corps UAS pilots. In addition, we analyzed responses we obtained from each of the questionnaires we administered and focused on questions related to coordination among the services. Further, we collected additional information in interviews with officials from the Office of the Deputy Assistant Secretary of Defense for Readiness, the UAS Task Force, and knowledgeable officials within each military service. 
We then compared the information we collected from these sources to the key practices that help enhance and sustain coordination that we previously developed to determine the extent to which the Office of the Deputy Assistant Secretary of Defense for Readiness and the military services coordinate to train UAS pilots. We conducted this performance audit from July 2014 to May 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Lori Atkinson (Assistant Director), James P. Klein, Leigh Ann Sennette, Michael Silver, Paola Tena, Alex Welsh, Erik Wilkins-McKee, and Michael Willems made key contributions to this report.

The Department of Defense's (DOD) UAS portfolio has grown over the years to rival traditional manned systems, and, as of July 2013, DOD had acquired over 10,000 UAS, according to a 2013 DOD report. Training DOD UAS pilots, most of whom are in the Army or the Air Force, is an integral part of DOD's strategy to accomplish its mission. Senate Report 113-176 included a provision that GAO review DOD's efforts to train UAS pilots. This report examines, among other things, the extent to which the Army and the Air Force (1) face challenges ensuring that their UAS pilots complete required training and (2) have taken steps to ensure they have sufficient numbers of UAS instructors. GAO analyzed DOD guidance on training UAS pilots, distributed a questionnaire to Army and Air Force headquarters and units, examined nongeneralizable training records of seven Air Force UAS units selected because they have the same mission requirements, and interviewed DOD officials. 
GAO also conducted 18 focus groups with active duty UAS pilots who were selected based on rank and other factors. The results of the questionnaire and focus groups are not generalizable. The Army and the Air Force face challenges ensuring that the pilots who remotely operate their unmanned aerial systems (UAS) complete their required training. Specifically, a March 2015 Army review showed that most pilots in certain Army units did not complete fundamental training tasks in fiscal year 2014—a finding that GAO corroborated through discussions with pilots in focus groups and unit responses to questionnaires. In addition, Army unit status reports do not require UAS pilot training information, and as a result, the Army does not know the full extent to which pilots have been trained and are therefore ready to be deployed. In addition, Air Force training records from a nongeneralizable sample of seven UAS units showed that, on average, 35 percent of the pilots in these units completed the training for all of their required missions. Pilots in all of the seven focus groups GAO conducted with Air Force UAS pilots stated that they could not conduct training in units because their units had shortages of UAS pilots. GAO found similar shortages of UAS pilots in April 2014, and in particular, GAO found that the Air Force operated below its optimum crew ratio, which is a metric used to determine the number of pilots needed in units. At that time, GAO made four recommendations, including that the Air Force update its crew ratio. The Air Force concurred with these recommendations and has taken actions, or has actions underway. For example, an Air Force Headquarters official stated that, in February 2015, the Air Force completed the first phase of a three-phase personnel requirements study on the crew ratio and expects to update the crew ratio in 2015. However, at this time, the Air Force has not fully implemented any of the recommendations. 
The Army and the Air Force are taking actions to increase the number of UAS instructors, but the Army has not fully addressed the risks associated with using less experienced instructors and the Air Force faces instructor shortages. In order to increase the number of its instructors in response to an increase in the number of UAS units, the Army waived course prerequisites for about 40 percent of the UAS pilots attending the course to become instructor pilots from the beginning of fiscal year 2013 through February 2015. The Army originally established these prerequisites—such as a minimum number of flight hours—for UAS pilots volunteering to become instructors to help ensure that instructors were fully trained and ready to instruct UAS pilots. The Army has taken some steps to mitigate the potential risks of using less proficient UAS instructors. For example, beginning in fiscal year 2015, the Army no longer grants waivers for course prerequisites related to proficiency. However, the Army can continue to grant waivers for additional course prerequisites related to experience. As a result, the Army risks that its UAS pilots may not be receiving the highest caliber of training needed to prepare them to successfully perform UAS missions. Furthermore, as of March 2015, the Air Force had staffed its UAS training squadrons at Holloman Air Force Base at 63 percent of its planned staffing levels. This shortage is a key reason that the Air Force has shortages of UAS pilots across the Air Force, according to an Air Force headquarters official. The Air Force is studying the personnel requirements for its school and expects to report the results of this study by spring 2016. 
GAO recommends, among other things, that the Army require unit status reports to include information on the readiness levels of UAS pilots; and the Army take additional steps to mitigate potential risks posed by its waiver of course prerequisites related to experience for pilots attending the course to become instructors. DOD concurred with each of GAO's recommendations.
Twelve years ago, in September 1993, the National Performance Review called for an overhaul of DOD’s temporary duty (TDY) travel system. In response, DOD created the DOD Task Force to Reengineer Travel to examine the process. In January 1995, the task force issued the Report of the Department of Defense Task Force to Reengineer Travel. The Task Force’s report pinpointed three principal causes for DOD’s inefficient travel system: (1) travel policies and programs were focused on compliance with rigid rules rather than mission performance, (2) travel practices did not keep pace with travel management improvements implemented by industry, and (3) the travel system was not integrated. On December 13, 1995, the Under Secretary of Defense for Acquisition, Technology, and Logistics and the Under Secretary of Defense (Comptroller)/Chief Financial Officer issued a memorandum, “Reengineering Travel Initiative,” establishing the PMO-DTS to acquire travel services that would be used DOD-wide. Additionally, in a 1997 report to the Congress, the DOD Comptroller pointed out that the existing DOD TDY travel system was never designed to be an integrated system. Furthermore, the report stated that because there was no centralized focus on the department’s travel practices, the travel policies were issued by different offices and the process had become fragmented and “stovepiped.” The report further noted that there was no vehicle in the current structure to overcome these deficiencies, as no one individual within the department had specific responsibility for management control of the TDY travel system. To address these concerns and after the use of competitive procedures, the department awarded a firm fixed-price, performance-based services contract to BDM International, Inc. (BDM) in May 1998. In September 1998, we upheld the department’s selection of BDM. 
Under the terms of the contract, the contractor was to start deploying a travel system and to begin providing travel services for approximately 11,000 sites worldwide, within 120 days of the effective date of the contract, completing deployment approximately 38 months later. The contract specified that, upon DTS’s achieving initial operational capability (IOC), BDM was to be paid a one-time deployment fee of $20 for each user and a transaction fee of $5.27 for each travel voucher processed. The estimated cost for the contract was approximately $264 million. Prior to commencing the work, BDM was acquired by TRW Inc. (TRW), which became the contractor of record. The operational assessment of DTS at Whiteman Air Force Base, Missouri, from October through December 2000, disclosed serious failures. For example, the system’s response time was slower than anticipated, the result being that it took longer than expected to process a travel order/voucher. Because of the severity of the problems, in January 2001, a joint memorandum was issued by the Under Secretary of Defense (Comptroller) and the Deputy Under Secretary of Defense (Acquisition, Technology & Logistics) directing a functional and technical assessment of DTS. The memorandum also directed that a determination be made of any future contract actions that would be necessary, based on the assessment results. In July 2001, the Under Secretary of Defense (Comptroller) and the Under Secretary of Defense (Acquisition, Technology & Logistics) approved proceeding with the DTS program and restructuring the contract with TRW. The TRW contract was restructured through a series of contract modifications which were finalized on March 29, 2002. The Government agreed to provide TRW consideration in the amount of about $44 million for restructure of the contract. 
TRW agreed to release and discharge the Government from liability and agreed to waive any and all liabilities, obligations, claims and demands related to or arising from its early performance efforts under the original contract. Northrop Grumman subsequently acquired TRW in December 2002, and, as such, is now the contractor of record. The first deployment of DTS was at Ellsworth Air Force Base, South Dakota, in February 2002. As of September 2005, DTS has been deployed to approximately 5,600 locations. The department currently estimates that DTS will be fully deployed to all 11,000 locations by the end of fiscal year 2006, with an estimated total development and production cost of approximately $474 million. Of this amount, the contract for the design, development, and deployment of DTS, as restructured is worth approximately $264 million—the same amount as specified in the original contract that was agreed to with BDM. The remaining costs are DOD internal costs associated with areas such as the operation of the program management office, the voucher payment process, and management of the numerous CTO contractors. Over the past several years, we have reported pervasive weaknesses in DOD’s travel program. These weaknesses have hindered the department’s operational efficiencies and have left it vulnerable to fraud, waste, and abuse. These weaknesses are highlighted below. On the basis of statistical sampling, we estimated that 72 percent of the over 68,000 premium class airline tickets DOD purchased for fiscal years 2001 and 2002 were not properly authorized and that 73 percent were not properly justified. During fiscal years 2001 and 2002, DOD spent almost $124 million on airline tickets that included at least one leg of the trip in premium class—usually business class. 
Because each premium class ticket costs the government up to thousands of dollars more than a coach class ticket, unauthorized premium class travel resulted in millions of dollars of unnecessary costs annually. Because of control breakdowns, DOD paid for airline tickets that were neither used nor processed for refund—amounting to about 58,000 tickets totaling more than $21 million for fiscal years 2001 and 2002. DOD was not aware of this problem before our audit and did not maintain any data on unused tickets. Based on limited data provided by the airlines, it is possible that the unused value of the fully and partially unused tickets that DOD purchased from fiscal year 1997 through fiscal year 2003 with DOD’s CBA could be at least $100 million. We found that DOD sometimes paid twice for the same airline ticket—first to the Bank of America for the monthly DOD credit card bill, and second to the traveler, who was reimbursed for the same ticket. Based on our mining of limited data, the potential magnitude of the improper payments was 27,000 transactions for over $8 million. For example, DOD paid a Navy GS-15 civilian employee approximately $10,000 for 13 airline tickets he had not purchased. DTS development and implementation have been problematic, especially in the area of requirements and testing key functionality to ensure that the system would perform as intended. Given the lack of adherence to such a key practice, it is not surprising that critical flaws have been identified after deployment, resulting in significant schedule slippages. As originally envisioned, the initial deployment of DTS was to commence 120 days after the effective date of the contract award in September 1998, with complete deployment to approximately 11,000 locations by April 2002. However, that date has been changed to September 2006—a slippage of over 4 years. 
Our recent analysis of selected requirements disclosed that the testing of DTS is not always adequate prior to updated software being released for use by DOD personnel. System testing is a critical process utilized by organizations to improve an entity’s confidence that the system will satisfy the requirements of the end user and will operate as intended. Additionally, an efficient and effective system testing program is one of the critical elements that need to be in place in order to have reasonable assurance that an organization has implemented the disciplined processes necessary to reduce project risks to acceptable levels in software development. In one key area, our results to date have identified instances in which the testing of DTS was inadequate, which precluded DOD from having reasonable assurance that DTS displayed the proper flights and airfares. This occurred because the PMO-DTS failed to ensure that the appropriate system interfaces were tested. Additionally, because a system requirement covering this had never been defined, there was not reasonable assurance that DTS displayed the accurate number of flights and related airfares within a given flight window. As a result of these two weaknesses, DOD travelers might not have received accurate information on available flights and airfares, which could have resulted in higher travel costs. Specific details on these two weaknesses are discussed below. The DOD tests for determining whether DTS displayed the proper flights and airfares did not provide reasonable assurance that the proper (1) flights were displayed and (2) airfares for those flights were displayed. DTS uses a commercial product to obtain information from the database that contains the applicable flight and airfare information (commonly referred to as a Global Distribution System, or GDS). 
In testing whether DTS displayed the proper flights and airfares, the information returned from the commercial product was compared with the information displayed in DTS and was found to be in agreement. However, the commercial product did not provide all of the appropriate flights or airfares to DTS that were contained in the GDS. Since the PMO-DTS neither performed an end-to-end test nor made sure that the information returned from this commercial product was in agreement with the information contained in the GDS, it did not have reasonable assurance that DTS was displaying the proper flights and airfares information to the users. According to DOD officials, this system weakness was detected by users complaining that DTS did not display the proper flights and airfares. DOD officials stated that prior to the August 2005 system update, DTS should have displayed 12 flights, if that many flights were available, within a flight window. DTS program officials and Northrop Grumman personnel acknowledged that this particular system requirement had never been tested because DOD failed to document the requirement until January 2005. Therefore, DOD did not have reasonable assurance that DTS displayed the required number of flights and related airfare information. The inability to ensure that the proper number of flights was displayed could have caused DOD to incur unnecessary travel cost. As we have noted in previous reports, requirements that are not defined are unlikely to be tested. PMO-DTS officials acknowledged that these two problems have been ongoing since the initial implementation of DTS. PMO-DTS officials have stated that the two problems were corrected as part of the August 2005 DTS system update. We are in the process of verifying whether the actions taken by DOD will correct the problems. Of the four previously reported DOD travel problems, DTS has corrected one of the problems while the others remain. 
However, the remaining problems are not necessarily within the purview of DTS and may take departmentwide action to fully address. While DOD has taken actions to improve existing guidance and controls related to premium class travel, including system changes in DTS, we identified instances in which unauthorized premium class travel continues. In November 2003, the Under Secretary of Defense (Personnel and Readiness) formed a task force to address our prior recommendations, which focused on three major areas: (1) policy and controls of travel authorization, (2) ticket issuance and reporting, and (3) internal control and oversight. Subsequently, several policy changes were made to improve the control and accountability over premium class travel. For example, the approval level for first class travel was elevated to a three-star general and for business class travel to a two-star general or civilian equivalent. Other changes strengthened the description of circumstances in which premium class travel may be used, to make clear that it is an exceptional circumstance and not a common practice. In all cases, approving officials must have their own premium class travel approved at the next level. These changes also set a broad policy that CTOs are not to issue premium class tickets without proper authorization. In September 2004, the PMO-DTS made system changes to DTS that blocked seven fare codes considered to be premium class fare codes from being displayed or selected by the traveler through DTS. According to the PMO-DTS, the airline industry does not have standardized fare code indicators to identify first class, business class, and economy class. Subsequently, DOD found that some economy class fares were also being blocked under the seven codes and, in May 2005, reduced the list to three codes. 
Despite these various changes in policy and to DTS, we continue to identify instances in which premium class travel occurs without the proper authorization. To date, our preliminary analysis disclosed at least 68 cases that involved improperly approved premium class travel. In one case, we found that a Department of the Army civilian employee (GS-12) flew from Columbia, South Carolina, via Atlanta, Georgia, to Gulfport, Mississippi, to attend a conference. On the return trip, one leg included first class accommodations. From our review and analysis of Bank of America data and the travel voucher, DOD paid $1,107 for the airfare. The cost of a GSA city pair round trip airfare was $770. According to information provided by the Army, the traveler stated that he was meeting another traveler at the destination to share a rental car, that no seats were available on the flight the other traveler had booked, and that he therefore selected a flight arriving as close as possible to the other traveler's arrival time. This is not a valid justification, and the premium class fare was not approved by the appropriate official. Additionally, the premium class fare occurred on the return flight. Furthermore, based upon our review to date, none of the 68 cases that involved improper premium class travel had the required approval. DTS still does not have the capability to determine whether a traveler does not use all or a portion of an airline ticket. To address this problem, DOD directed that all new CTO contract solicitations require CTOs to prepare unused ticket reports that identify tickets not used within a specified time period, usually 30 days past the trip date, so that they can be cancelled and processed for refund. Additionally, the various DOD components were directed to modify existing CTO contracts to require the CTOs to process refunds for unused airline tickets. 
At the five locations we visited, we found that the Army and Air Force CTOs prepared daily and monthly reports. The Navy CTOs produced the unused ticket report on a weekly basis, and the Marine Corps CTOs prepared the report monthly. However, according to DOD officials, this requirement has not yet been implemented in all the existing CTO contracts. Our preliminary observations indicate that DTS was designed to ensure that tickets purchased through the CBA cannot be claimed on the individual's travel voucher as a reimbursement to the traveler. As part of our statistical sample discussed later, we found 14 travel vouchers in which an airline ticket purchased with the CBA was included on the voucher; however, the traveler did not receive reimbursement for the claim. DFAS has previously reported problems with the accuracy of DTS travel payments. For the first quarter of fiscal year 2004, DFAS reported a 14 percent inaccuracy rate in DTS travel payments for airfare, lodging, and meals and incidental expenses. Our preliminary analysis of 170 travel vouchers disclosed that for the two attributes directly related to the operation of the DTS system—computation of lodging reimbursement and of meals and incidental expenses (per diem)—the DTS calculations were correct in all instances on the basis of the information provided by the traveler. However, we continue to identify numerous instances in which employee errors led to inaccurate reimbursements. In some cases, errors occurred because incorrect data were entered into DTS by the traveler. In other cases, the reviews by the AOs were inadequate. In regard to the AO reviews, our preliminary analysis indicates that approximately 66 travel vouchers (39 percent) were paid even though there was not reasonable assurance that the amount of the reimbursement was accurate. More specifically, 49 of the 66 travel vouchers lacked adequate receipts for the amounts claimed. 
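The sample proportions cited above can be reproduced with straightforward arithmetic (the counts are taken from the preliminary analysis of the 170-voucher sample):

```python
# Figures from the preliminary analysis of the 170-voucher sample.
sample_size = 170
unsupported = 66        # vouchers paid without reasonable assurance of accuracy
missing_receipts = 49   # of the 66, vouchers lacking adequate receipts

pct_unsupported = round(100 * unsupported / sample_size)        # 39 percent
pct_missing_receipts = round(100 * missing_receipts / unsupported)
```

This yields the 39 percent figure cited above; roughly three-quarters of the unsupported vouchers lacked adequate receipts.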
Receipts are required for all expenses of $75 or more and for lodging, regardless of the amount. However, for the 49 vouchers, we saw no evidence that the AO was provided with the appropriate receipts by the traveler. In one case, the traveler was reimbursed for expenses claimed in excess of $500, even though none of the required receipts were available for review and approval by the AO. According to DOD regulations, "the AO's signature on the expense report certifies that the travel was taken, that the charges are reasonable…and that the payment of the authorized expenses is approved." While the signature of the AO signifies that the payment is approved, it falls short of ensuring that amounts claimed are reasonable in cases in which receipts for airfare and lodging are not provided. Until the overall review process is improved, travel payment problems will continue to occur. DOD's goal of making DTS the standard travel system within the department depends upon the development, testing, and implementation of system interfaces with a myriad of related DOD systems, as well as private-sector systems such as the system used by the credit card company that provides DOD military and civilian employees with travel cards. While DOD has developed 32 interfaces, the PMO-DTS is aware of at least 17 additional DOD business systems for which interfaces must be developed. To date, the development and testing of the interfaces have reportedly cost DOD over $30 million. Developing the interfaces is time consuming and costly. Additionally, the underutilization of DTS at the sites where it has been deployed is hindering the department's efforts to have a standard travel system throughout the department. Furthermore, the underutilization reduces the estimated savings that are to be derived from the use of DTS departmentwide. One of DOD's long-standing problems has been the lack of integrated systems. 
To address this issue and minimize the manual entry of data, interfaces between existing systems must be developed to provide the exchange of data that is critical for day-to-day operations. For example, before permitting the authorization of travel, DTS needs to know that sufficient funds are available to pay for the travel—information that comes from a non-DTS system—and once the travel has been authorized, another system needs to know this so that it can record an obligation and provide management and other systems with information on the funds that remain available. Interfaces are also needed with private-sector systems, such as the credit card company that provides DOD personnel with travel cards. Figure 1 illustrates the numerous DTS system interfaces that have already been developed and implemented with the department's business systems. Figure 2 shows the DTS system interfaces that must be developed in the future with the department's business systems. While DOD was able to develop and implement the interfaces with the 32 systems, the development of each remaining interface will present the PMO-DTS with challenges. For example, the detailed requirements for each of the remaining interfaces have not yet been defined. Such requirements would define (1) what information will be exchanged and (2) how the data exchange will be conducted. This is understandable in some cases, such as the Army General Fund Financial enterprise resource planning (ERP) system, which is a relatively new endeavor within the department; it will be some time before DOD is in a position to start development of that interface. Additionally, the development of the DTS interfaces depends on other system owners' achieving their time frames for implementation. For example, the Navy ERP is one of the DOD systems with which DTS is to interface and exchange data. 
Any difficulties with the Navy's ERP implementation schedule could adversely affect DTS's interface testing and thereby result in a slippage in the interface being implemented. These two factors also affect DTS's ability to develop reliable cost estimates for the future interfaces. Another challenge for DTS in achieving its goal of a standard travel system within DOD is the continued use of the existing legacy travel systems, which are owned and operated by the various DOD components. Currently, at least 31 legacy travel systems continue to be operated within the department. As we have previously reported, because each DOD component receives its own funding for the operation, maintenance, and modernization of its own systems, there is no incentive for DOD components to eliminate duplicative travel systems. We recognize that some of the existing travel systems, such as the Integrated Automated Travel System version 6.0, cannot be completely eliminated because they perform other functions, such as processing permanent change of station travel claims, which DTS cannot handle. However, in other cases, the department is spending funds on duplicative systems that perform the same function as DTS. The funding of multiple systems that perform the same function is one of the reasons why the department has 4,150 business systems. Since these legacy systems are not owned and operated by DTS, the PMO-DTS does not have the authority to discontinue their operation. This is an issue that must be addressed from a departmentwide perspective. Because of the continued operation of the legacy systems at locations where DTS has been fully deployed, DOD components pay DFAS higher fees for processing manual travel vouchers than they would pay for processing the travel vouchers electronically through DTS. 
According to an April 13, 2005, memorandum from the Assistant Secretary of the Army (Financial Management and Comptroller), DFAS was charging the Army $34 for each travel voucher processed manually and $2.22 for each travel voucher processed electronically—a difference of $31.78. The memorandum further noted that for the period October 1, 2004, to February 28, 2005, at locations where DTS had been deployed, the Army paid DFAS approximately $6 million to process 177,000 travel vouchers manually—$34 per travel voucher, versus about $186,000 to process 84,000 travel vouchers electronically—$2.22 per voucher. Overall, for this 5-month period, the Army reported that it spent about $5.6 million more to process these travel vouchers manually as opposed to electronically using DTS. The military services have recognized the importance of utilizing DTS to the fullest extent possible. The Army issued a memorandum in September 2004 directing each Army installation to fully disseminate DTS to all travelers within 90 to 180 days after IOC at each installation. The memorandum included a list of sites that should be fully disseminated and the types of vouchers that must be processed through DTS. Furthermore, the memorandum noted that travel vouchers that could be processed in DTS should not be sent to DFAS for processing. In a similar manner, in February 2005, the Marine Corps directed that upon declaration of DTS's IOC at each location, commands will have DTS fully fielded within 90 days and will stop using other travel processes that have the capabilities of DTS. The Air Force issued a memorandum in November 2004 that stressed the importance of using DTS when implemented at an installation. The Navy has not issued a similar directive. Despite these messages, DTS remains underutilized by the military services. 
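The Army's reported figures are internally consistent, as a quick check shows (the rates and voucher counts are from the April 2005 memorandum; the rounding matches the report's figures):

```python
manual_rate = 34.00       # DFAS fee per manually processed voucher
electronic_rate = 2.22    # DFAS fee per voucher processed through DTS

manual_vouchers = 177_000
electronic_vouchers = 84_000

manual_cost = manual_vouchers * manual_rate              # about $6.0 million
electronic_cost = electronic_vouchers * electronic_rate  # about $186,000

# Excess paid by processing the 177,000 vouchers manually instead of through DTS.
excess = manual_cost - manual_vouchers * electronic_rate  # about $5.6 million
```

The excess of roughly $5.6 million over 5 months illustrates the savings forgone at locations where legacy processing continued alongside DTS.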
Some of the military services, in particular the Army, have taken steps to monitor DTS usage, but others, such as the Marine Corps, do not capture the data necessary to assess the extent to which DTS is being underutilized. The lack of pertinent data hinders management's ability to monitor its progress toward the DOD vision of DTS as the standard TDY system. Overhauling the financial management and business operations of DOD—one of the largest and most complex organizations in the world—represents a daunting challenge. DTS, intended to be the department's end-to-end travel management system, illustrates some of the obstacles that must be overcome by DOD's array of transformation efforts. With over 3.3 million military and civilian personnel as potential travel system users, the sheer size and complexity of the undertaking overshadows any such project in the private sector. Nonetheless, standardized business systems across the department will be the key to achieving billions of dollars of annual savings through successful DOD transformation. As we have previously reported, because each DOD component receives its own funding for the operation, maintenance, and modernization of its own systems, nonintegrated, parochial business systems have proliferated—4,150 business systems throughout the department by a recent count. The elimination of "stove-piped" legacy systems and cheaper electronic processing, which could be achieved with the successful implementation of DTS, are critical to realizing the anticipated savings. In closing, we commend the Subcommittee for holding this hearing as a catalyst for improving the department's travel management practices. We also would like to reiterate that following this testimony, we plan to issue a report that will include recommendations to the Secretary of Defense aimed at improving the department's implementation of DTS. Mr. Chairman and Members of the Subcommittee, this concludes our prepared statement. 
We would be pleased to respond to any questions you may have. DOD has taken several steps to address its needs for the use of intellectual and tangible property in the DTS, but it has not yet completed the exercise of the rights it determined necessary for long-term development and implementation of the DTS. While the original contract awarded to BDM did not specifically address intellectual property rights, TRW, as the successor to BDM, acquired in 2001 perpetual rights to use three key commercial software programs to accommodate technology decisions that necessitated modifying some software for use in DTS. When DOD and TRW agreed to restructure the DTS contract, they modified the contract to include several key provisions that provided DOD with rights to various categories of intellectual and tangible property. As set out below, DOD officials told us that they have yet to complete the exercise of some of DOD's intellectual property rights and to secure title to hardware necessary to meet its long-term acquisition needs, but those steps are in progress. The original DTS contract awarded in 1998 did not specifically address the Government's intellectual property rights because the contract was structured primarily as a fixed-priced travel services contract rather than as a government-funded development effort. As such, the contractor was responsible for securing the necessary intellectual property rights in the commercial software and other products being used, except for those pertaining to existing DOD systems or used by DOD under other agreements. The fixed price for the services would include the cost to the contractor to obtain or develop the necessary software, hardware, and technical data in order to provide the required travel services to DOD. According to DOD officials, DOD and TRW determined in 2001 that three key commercial software programs used in DTS would not meet DOD's requirements without modification. 
Accordingly, in September 2001, TRW executed a license agreement with the firm holding the copyright to the software programs for TRW to use in developing and deploying DTS within DOD. The firm charged TRW a one-time fee for the rights under the agreement. Under the license agreement, TRW obtained a perpetual and exclusive license to use the three software programs and related software documentation to develop and deploy software and services for use in the DTS. This license includes the authority to modify the source code to one of the software programs. The license agreement authorizes the assignment of TRW's rights under the agreement to DOD for the DTS project. The license agreement does not expressly condition such an assignment on payment of a fee. According to DOD officials, DOD has approached Northrop Grumman Space & Mission Systems Corp. (Northrop Grumman), as the successor to TRW, requesting assignment of those rights to DOD. In a September 22, 2005, letter to the DTS contracting officer, Northrop Grumman represented that it would assign its rights under the license agreement to DOD at the conclusion of the contract, if requested. The license agreement also provides that Northrop Grumman may sublicense its rights under the agreement to other entities in support of DTS. DOD officials told us that they believe Northrop Grumman's assignment of these rights to DOD would include the authority for DOD to sublicense the rights to other DOD contractors for use in providing services related to DTS. The DOD officials noted that they are in the process of modernizing the DTS application to include a potential complete replacement of the licensed software with custom developed software. The officials stated that they are still evaluating whether an assignment of rights and issuance of any sublicenses actually would be needed in light of these changes. 
In the restructuring of the DTS contract, DOD and TRW agreed to address a number of intellectual and tangible property categories under the contract that DOD officials told us would satisfy DOD's long-term DTS development and implementation plans. The restructured contract incorporated several standard DOD intellectual property rights clauses, but DOD is still evaluating ownership rights related to key hardware used in the DTS. The restructured contract incorporates standard DOD intellectual property rights clauses for a system being developed at government expense, and it specifically gives DOD perpetual rights to DTS software. The perpetual rights for different categories of intellectual property generally depend upon the source of the funding of their development. In particular, the contract requires Northrop Grumman to "provide a perpetual license for DOD use worldwide for DTS software" in accordance with certain standard clauses or in accordance with standard commercial terms for commercial software. Also, the contract incorporates a clause that requires Northrop Grumman to grant or obtain for the government royalty-free, worldwide, nonexclusive, irrevocable license rights in technical data. Further, these clauses include provisions that permit Northrop Grumman to assert restrictions on the government's use, release, or disclosure of technical data and computer software, depending upon the funding of their development. For commercial software used in the DTS, Northrop Grumman has asserted restrictions applicable to commercial software licenses. Some of the licenses Northrop Grumman obtained for use of commercial software may be neither perpetual nor assignable to DOD, but DOD officials told us that this does not pose a risk to the project, since alternative methods are available to acquire similar licenses. Table 1 sets out DOD's rights in these categories. 
Finally, the contract incorporated a standard clause governing restrictions DOD may place on information it provides to Northrop Grumman for use under the contract. The restructured contract requires Northrop Grumman to provide all hardware (and other equipment) necessary to deliver services under the contract, but DOD officials told us that they are discussing delivery schedules and ownership rights to hardware items, principally configuration items. In a September 23, 2005, letter to the DTS contracting officer, Northrop Grumman represented that it would assign title to certain hardware at the conclusion of the contract, if requested. In addition, DOD has leased some hardware items necessary to interface with the airline Global Distribution Systems and will need to evaluate the terms of those leases. To determine if the Department of Defense (DOD) effectively tested key Defense Travel System (DTS) functionality associated with flights and airfares, we reviewed the applicable requirements and the related testing prior to the August 2005 release to determine if the desired functionality was effectively implemented. To determine if DTS will correct the problems previously identified with DOD travel, we analyzed past GAO reports and testimonies, selected Defense Finance and Accounting Service (DFAS) reports, and DOD congressional testimonies. In this regard, we focused on how DTS addresses issues related to premium class travel, unused tickets, and centrally billed accounts. We also randomly sampled 170 travel vouchers to ascertain if some of the problems previously reported upon by DFAS have been resolved. To be included within the selected sample, the travel vouchers had to be for trips that were in DTS and for travel started on or after October 1, 2004, and ended on or before December 31, 2004. We have not yet finalized our projections for the sample. 
To assess the use of premium class travel, we obtained databases from Bank of America and the Project Management Office-Defense Travel System (PMO-DTS), which provided information on the actual travel transactions and traveler information for the period October-December 2004. The Bank of America’s database contained all DOD transactions for the first quarter of fiscal year 2005, and the PMO-DTS database contained all vouchers processed by DTS for the same time period. We removed all transactions that were not specifically airline charges, such as rail charges and commercial travel office fees, and then selected all fare codes that corresponded to the potential issuance of a premium class ticket. This resulted in 419 instances in which a premium class ticket could have been issued. We have not finalized our analysis. To identify some of the challenges confronting the department in making DTS the department’s standard travel system, we discussed with PMO-DTS officials their implementation strategy and reviewed past GAO reports and testimonies related to the department’s efforts to improve the accuracy and reliability of the information in its business systems. We briefed DOD officials on the contents of this testimony. We assessed the reliability of the DOD data we used for our preliminary evaluation by (1) performing electronic testing of required data elements, (2) reviewing existing information about the data and the system that produced them, and (3) interviewing agency officials knowledgeable about the data. We determined that the data were sufficiently reliable for the purpose of this testimony. We performed our audit work from October 2004 through September 2005, in accordance with U.S. generally accepted government auditing standards. 
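The screening described above can be sketched as two successive filters over the transaction data. The record layout and the fare codes shown below are hypothetical; as the report notes, the airline industry has no standardized premium fare code indicators, so the codes here are illustrative only.

```python
# Illustrative transaction records -- not actual Bank of America data.
transactions = [
    {"type": "air",  "fare_code": "Y", "amount": 770.00},   # economy fare
    {"type": "air",  "fare_code": "F", "amount": 1107.00},  # possible first class
    {"type": "rail", "fare_code": None, "amount": 120.00},  # rail charge, excluded
    {"type": "fee",  "fare_code": None, "amount": 25.00},   # CTO fee, excluded
]

# Hypothetical fare codes that could indicate a premium class ticket.
PREMIUM_FARE_CODES = {"F", "A", "C"}

# First filter: keep only airline charges.
airline_only = [t for t in transactions if t["type"] == "air"]

# Second filter: keep fare codes that could correspond to a premium ticket.
potential_premium = [t for t in airline_only
                     if t["fare_code"] in PREMIUM_FARE_CODES]
```

Applied to the full first-quarter data, a filter of this kind produced the 419 instances in which a premium class ticket could have been issued.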
To describe DOD's property rights in the DTS we reviewed the DTS contract, applicable acquisition regulations, DOD intellectual property guidance, key DTS license agreements, and written responses from PMO-DTS to our questions, and we met with PMO-DTS and contracting officials and with their legal counsel. For further information about this testimony, please contact McCoy Williams at (202) 512-6906 or [email protected] or Keith A. Rhodes at (202) 512-6412 or [email protected]. Our contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. In addition to the above contacts, the following individuals made key contributions to this testimony: Darby Smith, Assistant Director; J. Christopher Martin, Senior Level Technologist; Beatrice Alff; Francine DelVecchio; Francis Dymond; Thomas Hackney; Gloria Hernandezsaunders; Wilfred Holloway; Jason Kelly; Sheila Miller; Robert Sharpe; Patrick Tobo; and Adam Vodraska. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

The Department of Defense (DOD) has been working to develop and implement a standard end-to-end travel system for the last 10 years. Congress has been at the forefront in addressing issues related to DOD's travel management practices with the hearing today being another example of its oversight efforts. Because of widespread congressional interest in the Defense Travel System (DTS), GAO's current audit is being performed under the statutory authority given to the Comptroller General of the United States. 
GAO's testimony is based on the preliminary results of that audit and focuses on the following three key questions: (1) Has DOD effectively tested key functionality in DTS related to flights and fare information? (2) Will DTS correct the problems related to DOD travel previously identified by GAO and others? and (3) What challenges remain in ensuring that DTS achieves its goal as DOD's standard travel system? In addition, the Subcommittee asked that GAO provide a description of the intellectual property rights of DOD in DTS. Subsequent to this testimony, GAO plans to issue a report that will include recommendations to the Secretary of Defense aimed at improving the department's implementation of DTS. DTS development and implementation have been problematic, especially in the area of testing key functionality to ensure that the system will perform as intended. Consequently, critical flaws have been identified after deployment, resulting in significant schedule slippages. GAO's recent analysis of selected requirements disclosed that system testing was ineffective in ensuring that the promised capability has been delivered as intended. For example, GAO found that DOD did not have reasonable assurance that DTS properly displayed flight and airfare information. This problem was not detected prior to deployment, since DOD failed to properly test system interfaces. Accordingly, DOD travelers might not have received accurate information, which could have resulted in higher travel costs. DTS has corrected some of the previously reported travel problems but others remain. Specifically, DTS has resolved the problem related to duplicate payment for airline tickets purchased with the centrally billed accounts. However, problems remain related to improper premium class travel, unused tickets that are not refunded, and accuracy of traveler's claims. These remaining problems cannot be resolved solely within DTS and will take departmentwide action to address. 
GAO identified two key challenges facing DTS in becoming DOD's standard travel system: (1) developing needed interfaces and (2) underutilization of DTS at sites where it has been deployed. While DTS has developed 32 interfaces with various DOD business systems, it will have to develop interfaces with at least 17 additional systems--not a trivial task. Furthermore, the continued use of the existing legacy travel systems results in underutilization of DTS and affects the savings that DTS was planned to achieve. Components incur additional costs by operating two systems with the same function--the legacy system and DTS--and by paying higher processing fees for manual travel vouchers as opposed to processing the travel vouchers electronically through DTS.
Congress enacted SCRA in December 2003 as a modernized version of the Soldiers' and Sailors' Civil Relief Act of 1940. In addition to providing protections related to residential mortgages, the act covers other types of loans (such as credit card and automobile) and other financial contracts, products, and proceedings, such as rental agreements, eviction, installment contracts, civil judicial and administrative proceedings, motor vehicle leases, life insurance, health insurance, and income tax payments. SCRA provides the following mortgage-related protections to servicemembers:

Interest rate cap. Servicemembers who obtain mortgages prior to serving on active duty status are eligible to have their interest rate and fees capped at 6 percent. The servicer is to forgive interest and any fees above 6 percent per year. Servicemembers must provide written notice to their servicer of their active duty status to avail themselves of this provision.

Foreclosure proceedings. A servicer cannot sell, foreclose, or seize the property of a servicemember for breach of a preservice obligation unless a court order is issued prior to the foreclosure or unless the servicemember executes a valid waiver. If the servicer files an action in court to enforce the terms of the mortgage, the court may stay any proceedings or adjust the obligation.

Fines and penalties. A court may reduce or waive a fine or penalty incurred by a servicemember who fails to perform a contractual obligation and incurs the penalty as a result, if the servicemember was in military service at the time the fine or penalty was incurred and the servicemember's ability to perform the obligation was materially affected by his or her military service. Federal authorities have applied this provision to prepayment penalties incurred by servicemembers who relocate due to permanent change-of-station orders and consequently sell their homes and pay off mortgages early.

Adverse credit reporting. 
A servicer may not report adverse credit information to a credit reporting agency solely because servicemembers exercise their SCRA rights, including requests to have their mortgage interest rates and fees capped at 6 percent. Both servicemembers and servicers have responsibility for activating or applying SCRA protections. For example, to receive the interest-rate benefit, servicemembers must identify themselves as active duty military and provide a copy of their military orders to their financial institution. However, the responsibility of extending SCRA foreclosure protections to eligible servicemembers often falls to mortgage servicers. The burden is on the financial institution to ensure that borrowers are not active duty military before conducting foreclosure proceedings. Eligible servicemembers are protected even if they do not tell their financial institution about their active duty status. One of the primary tools mortgage servicers use to comply with SCRA is a website operated by DOD’s Defense Manpower Data Center (DMDC) that allows mortgage servicers and others to query DMDC’s database to determine the active duty status of a servicemember. Under SCRA, the Secretaries of each military service and the Secretary of Homeland Security have the primary responsibility for ensuring that servicemembers receive information on their SCRA rights and protections. Typically, legal assistance attorneys on military installations provide servicemembers with information on SCRA during routine briefings, in handouts, and during one-on-one sessions. Additionally, DOD has established public and private partnerships to assist in the financial education of servicemembers. The limited data we obtained from four financial institutions showed that a small fraction of their borrowers qualified for SCRA protections. 
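The 6 percent interest rate cap described above works out to straightforward arithmetic. The sketch below uses hypothetical loan terms and simple annual interest; an actual servicer would apply the cap through the loan's amortization schedule.

```python
principal = 200_000      # hypothetical preservice mortgage balance
note_rate = 0.08         # hypothetical rate agreed to before active duty
scra_cap = 0.06          # SCRA interest rate cap

interest_at_note_rate = principal * note_rate  # $16,000 per year
interest_at_cap = principal * scra_cap         # $12,000 per year

# Interest above 6 percent that the servicer is to forgive, not defer.
forgiven_per_year = interest_at_note_rate - interest_at_cap  # $4,000 per year
```

On these illustrative terms, a servicemember who provides the required written notice would see roughly $4,000 per year of interest forgiven while the cap applies.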
Our analysis suggests that SCRA-protected borrowers generally had higher rates of delinquency, although this pattern was not consistent across the institutions in our sample and cannot be generalized. However, SCRA protections may benefit some servicemembers. SCRA-protected borrowers at two of the three institutions from which we had usable data were more likely to cure their mortgage delinquencies than other military borrowers. Some servicemembers also appeared to have benefitted from the SCRA interest rate cap. Financial institutions we contacted could not provide sufficient data to assess the impact of different protection periods, but our analysis indicates that mortgage delinquencies appeared to increase in the first year after active duty. Based on our interviews and the data sources we reviewed, the number of servicemembers with mortgages eligible for SCRA protections is not known because servicers have not systematically collected this information, although limited data are available. Federal banking regulators do not generally require financial institutions to report information on SCRA-eligible loans or on the number and size of loans that they service for servicemembers. SCRA compliance requires that financial institutions check whether a borrower is an active duty servicemember and therefore eligible for protection under SCRA before initiating a foreclosure proceeding. However, institutions are not required to conduct these checks on loans in the rest of their portfolio, and two told us that they do not routinely check a borrower’s military status unless the borrower is delinquent on the mortgage. Consequently, the number of SCRA-eligible loans that these two institutions reported to us only includes delinquent borrowers and those who reported their SCRA eligibility to the financial institution. 
Two other institutions were able to more comprehensively report the number of SCRA-eligible loans in their portfolio because they routinely check their portfolio against the DMDC database. Additionally, only one of the financial institutions we contacted was able to produce historical data on the total number of known SCRA-eligible loans in its portfolio. Although exact information on the total number of servicemembers eligible for the mortgage protections under SCRA is not known, DOD data provide some context for approximating the population of servicemembers who are homeowners with mortgage payments and who therefore might be eligible for SCRA protections. According to DOD data, in 2012 there were approximately 1.4 million active duty servicemembers and an additional 848,000 National Guard and Reserve members, of which approximately 104,000 were deployed. While DOD does not maintain data on the number of servicemembers who are homeowners, DOD’s 2012 SOF survey indicated that approximately 30 percent of active duty military made mortgage payments. For reservists, DOD’s most recent survey of homeownership in June 2009 indicated that 53 percent of reservists made mortgage payments. According to DOD officials, industry trade group representatives, SCRA experts, and military service organizations, the servicemembers most likely to be eligible for SCRA mortgage protections are members of the Reserve components because they were more likely to have had mortgages before entering active duty service. Although comprehensive data on the number of servicemembers eligible for SCRA are not available, four financial institutions provided us with some data on the servicemembers they have identified in their portfolios in 2012. According to these data, a small percentage of the financial institutions’ total loan portfolios were identified as being eligible for SCRA protections.
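A back-of-the-envelope calculation from the DOD headcounts and survey percentages cited above gives a rough sense of the population of servicemembers with mortgage payments. This is our own illustration, not a GAO or DOD estimate.

```python
# 2012 DOD headcounts and survey-based mortgage shares cited above
active_duty = 1_400_000
reserve_and_guard = 848_000
share_active_with_mortgage = 0.30   # 2012 SOF survey
share_reserve_with_mortgage = 0.53  # June 2009 homeownership survey

estimated_military_mortgage_holders = (
    active_duty * share_active_with_mortgage
    + reserve_and_guard * share_reserve_with_mortgage
)
# Roughly 420,000 active duty plus about 449,000 Guard/Reserve members,
# on the order of 870,000 servicemembers making mortgage payments.
```

Only a subset of these borrowers would actually be SCRA-eligible, since eligibility requires that the mortgage predate active duty service.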
Table 1 details the number of loans held by each of the institutions from which we obtained data, including the estimated number of loans belonging to servicemembers and the number of loans the institutions identified as SCRA-eligible. Collectively, we estimate that the financial institutions from which we received usable data service approximately 27-29 percent of the mortgages held by servicemembers. This estimate is based on information from DOD’s SOF results on the estimated percentage of active duty servicemembers and reservists who make mortgage payments and the reported and estimated number of military borrowers that each of these institutions reported in their portfolios. Representatives with three of the financial institutions told us they have made changes to their data systems over the past 2 years to help better identify whether mortgage holders were active duty military and eligible for SCRA protections. They attributed these changes, in part, to DOD’s April 2012 upgrade of the DMDC database to allow financial institutions to check on the active duty status of up to 250,000 borrowers at once, as opposed to checking one individual at a time. Since then, some of the institutions had made changes to their systems to use the DMDC database to routinely check the military status of borrowers, thereby improving their available data on SCRA-eligible borrowers. Of the financial institutions we contacted, representatives with two told us that they now regularly check their entire loan portfolio against the DMDC database. Representatives with the other institutions said that they only check the military status of delinquent borrowers. To illustrate the extent to which these changes could improve the accuracy of the data on SCRA-eligible borrowers, representatives of one financial institution told us they used to rely on postal codes to help identify borrowers on or near military bases to determine whether they were likely servicemembers.
This institution has since switched to a data system that allows a check of its entire portfolio against the DMDC database so that the institution can more accurately identify which borrowers are also servicemembers. Our analysis of data from three financial institutions suggests that SCRA- protected borrowers were substantially more likely to experience delinquency at any time than their non-SCRA-protected military counterparts, with one exception. The institutions provided us data with substantial inherent limitations that prevented us from fully analyzing the repayment practices of their military borrowers. However, the limited data allowed us to conduct some analyses of borrowers’ delinquency rates and the rates at which delinquent borrowers became current on their mortgages. At two servicers, we found that SCRA-protected borrowers had delinquency rates from 16 to 20 percent. In contrast, non-SCRA- protected military borrowers had delinquency rates that ranged from 4 to 8 percent. These rates also varied across time within an institution. However, delinquency rates for the large credit union we analyzed were significantly smaller, and its SCRA-protected borrowers were less likely to be delinquent. For example, in the fourth quarter of 2012, 0.01 percent of SCRA-protected borrowers at this institution were delinquent on their loans, while 0.56 percent of the remaining borrowers in its loan portfolio were delinquent. The variation in delinquency rates among these financial institutions indicates that factors in addition to SCRA protection likely influence an institution’s delinquency rates, including differences among each institution’s lending standards and policies or borrower characteristics, such as income and marital status. 
Although it should be interpreted with caution because the results were not consistent at all three institutions for which we could conduct the analysis, our data analysis also suggests that borrowers protected by SCRA may have a better chance of curing their mortgage delinquency—making payments sufficient to restore their loan to current status—than those without the protections. The summary loan data we obtained from one institution show that its SCRA-protected military borrowers who were 90 or more days delinquent were almost twice as likely to cure their delinquency within a year than civilian borrowers and almost five times as likely as other military borrowers who were not SCRA-protected. Our analysis of loan-level data from another institution also suggested that its SCRA-protected borrowers had a higher likelihood of curing their mortgage delinquency than military borrowers not SCRA-protected, although their chances of curing the delinquency declined after leaving active duty. However, our analysis of data provided by a third institution suggested that cure rates for active duty SCRA-protected servicemembers were substantially lower than their noneligible active duty counterparts. Again, these differences in cure rates among the three institutions could reflect differences in institution policies or borrower characteristics. Our data analysis also indicates that at least some servicemembers have benefitted from the SCRA interest rate cap. As discussed earlier, servicemembers must provide written notice to their servicer of their active duty status to avail themselves of this provision. Analysis of one institution’s data showed that approximately 32 percent of identified SCRA-eligible borrowers had a loan with an interest rate above 6 percent at origination.
According to data provided by this institution—which included the initial interest rate and a current interest rate for 9 consecutive months in 2013—some SCRA-eligible borrowers saw their interest rates reduced to 6 percent or less, but almost 82 percent of the loans for those eligible for such a reduction retained rates above 6 percent. However, SCRA-eligible borrowers with interest rates higher than 6 percent had a larger average drop in interest rates from origination through the first 9 months of 2013 than non-SCRA-eligible military borrowers or SCRA-protected borrowers with initial rates below 6 percent. We cannot determine how many rate reductions resulted from the application of SCRA protections; other potential reasons for rate decreases include refinancing or a rate reset on adjustable-rate loans. Several financial institutions told us that more servicemembers could benefit from the rate cap protection if they provided proof of their active duty status to their mortgage servicer. For example, representatives from one financial institution told us that they receive military documentation (orders, commanding officer letters, etc.) on 31 percent of their SCRA-eligible borrowers—as a result, up to 69 percent may not be receiving the full financial benefit that SCRA affords. The data financial institutions we contacted were able to provide were generally not sufficient to assess the impact of the various protection periods in effect since the enactment of SCRA: 90 days, 9 months, and 1 year. Because most of the institutions we interviewed reported that they made enhancements to their data systems in 2012 to better identify SCRA-eligible borrowers, they were unable to provide data for both SCRA-eligible borrowers and a comparison group of other military borrowers before the end of 2011, when the protection periods were shorter.
Furthermore, none of our data that included SCRA-eligible borrowers and a comparison group of non-SCRA-eligible borrowers covered more than a 1-year span. As a result, the data were insufficient to evaluate the effectiveness of SCRA in enhancing the longer-term financial well-being of the servicemember leaving active duty or over the life of the mortgage. Finally, our measures of financial well-being—likelihood of becoming delinquent, curing a delinquency, and obtaining a reduction in the mortgage interest rate—are not comprehensive measures of financial well-being, but were the best measures available to us in the data. Our analysis of one servicer’s data suggests that all military borrowers—SCRA-protected or not—had a higher likelihood of becoming delinquent in the first year after they left active duty than when in the military. For example, in the loan-level data from an institution that used the DMDC database to check the military status of its entire loan portfolio, all of its military borrowers had a higher likelihood of becoming delinquent in the first year after they left active duty than when in service, with that risk declining somewhat over the course of the year for non-SCRA-protected military borrowers. Although not generalizable, these findings are consistent with concerns, described below, that servicemembers may face financial vulnerability after separating from service. Those who were SCRA-protected had a smaller increase in delinquency rates in the first year after leaving active duty than other military borrowers, but this may be due to SCRA-protected borrowers having their loans become delinquent at higher rates before leaving active duty and not to a protective effect of SCRA. Although we were generally unable to obtain data to analyze the impact of the varying protection periods, data from one institution provided some indication of a positive effect of SCRA protection for servicemembers receiving up to a year of protection.
Analyzable data from one institution on the mortgage status of all its military borrowers for a 9-month period in 2013, including those who had left active duty service within the last year, indicated that SCRA-protected borrowers who were within the 1-year protection period after leaving active duty service had a higher chance of curing their delinquencies than did the institution’s other military borrowers who had left active duty service. We found this effect despite this being the same institution where we found that SCRA-eligible borrowers were less likely to cure their mortgage delinquencies when still on active duty (compared with non-SCRA-eligible borrowers). Overall, the findings from our data analysis on delinquencies and cure rates were consistent with our interviews and past work showing that the first year after servicemembers leave active duty can be a time of financial vulnerability. We previously reported that while the overall unemployment rate for military veterans was comparable to that of non- veterans, the unemployment rate for veterans more recently separated from the military was higher than for civilians and other veterans. Additionally, representatives from the National Guard and Army Reserve said that Guard and Reserve members may return to jobs in the civilian sector that could be lower paying or less stable than their previous military work. Based on a June 2012 DOD SOF survey of Reserve component members, an estimated 40 percent of reservists considered reemployment, returning to work, or financial stability as their biggest concern about returning from their most recent activation or deployment. As we reported in 2012, some financial institutions extended SCRA protections beyond those stated in the act, as a result of identified SCRA violations and investigations in 2011. 
For example, three mortgage servicers we included in this review noted that they had reduced the interest rate charged on servicemembers’ mortgages to 4 percent—below the 6 percent required in SCRA. Additionally, the National Mortgage Settlement in February 2012 required five mortgage servicers to extend foreclosure protections to any servicemember—regardless of whether their mortgage was obtained prior to active duty status—who receives Hostile Fire/Imminent Danger Pay or is serving at a location more than 750 miles away from his or her home. As a result, any servicemember meeting these conditions may not be foreclosed upon without a court order. Two financial institutions we interviewed extended SCRA foreclosure protections to all active duty servicemembers. One of the financial institutions told us that they have made SCRA foreclosure protections available to all active duty servicemembers for the loans that they own and service (thus, about 16 percent of their mortgage portfolio receives SCRA protection). However, officials at this institution said that they were bound by investor guidelines for the loans they service for other investors, such as Fannie Mae, the Department of Housing and Urban Development, and private investors. The officials said that many of the large investors have not revised their rules to extend SCRA protections; as a result, the institution has been unable to extend SCRA protections to all noneligible borrowers whose loans are owned by these entities. None of the financial institutions we interviewed advocated for a change in the length of time that servicemembers received SCRA protection. Officials at one institution told us that they considered a 1-year period a reasonable amount of time for servicemembers to gain financial stability after leaving active duty and that they implemented the 1-year protection period before it became law.
One attorney we interviewed who has a significant SCRA-related practice supported the extension of the SCRA foreclosure protection to 1 year because the revised timeframe matches the mortgage interest-rate protection period, which has remained at 1 year since 2008, when mortgages were added to the SCRA provision that limits interest rates to 6 percent. In contrast, a representative of one of the military support organizations we interviewed noted that, based on his interactions with servicemembers, the effect of extending the foreclosure protection from 9 months to 1 year has been negligible, although he also said that the extension was a positive development. DOD has entered into partnerships with many federal agencies and nonprofit organizations to help provide financial education to servicemembers, but limited information on the effectiveness of these efforts exists. Under SCRA, the Secretaries of the individual services and the Secretary of Homeland Security have the primary responsibility for ensuring that servicemembers receive information on SCRA rights and protections. Servicemembers are informed of their SCRA rights in a variety of ways. For example, briefings are provided on military bases and during deployment activities; legal assistance attorneys provide counseling; and a number of outreach media, such as publications and websites, are aimed at informing servicemembers of their SCRA rights. DOD also has entered into partnerships with many other federal agencies and nonprofit organizations to help provide financial education to servicemembers. These efforts include promoting awareness of personal finances, helping servicemembers and their families increase savings and reduce debt, and educating them about predatory lending practices. As shown in fig. 1, the external partners that worked with DOD have included financial regulators and nonprofit organizations.
According to DOD officials, these external partners primarily focus on promoting general financial fitness and well-being as part of DOD’s Financial Readiness Campaign. For example, partners including the Consumer Federation of America, the Better Business Bureau Military Line, and the Financial Industry Regulatory Authority’s Investor Education Foundation provide financial education resources free of charge to servicemembers. DOD and the Consumer Federation of America also conduct the Military Saves Campaign every year, a social marketing campaign to persuade, motivate, and encourage military families to save money every month and to convince leaders and organizations to aggressively promote automatic savings. DOD has partnerships with the Department of the Treasury and the Federal Trade Commission to address consumer awareness, identity theft, and insurance scams targeted at servicemembers and their families. In addition, DOD officials noted that some partners provide SCRA outreach and support to servicemembers. For example, the Bureau of Consumer Financial Protection has an Office of Servicemember Affairs that provides SCRA outreach to servicemembers and mortgage servicers responsible for complying with the act. This agency also works directly with servicemembers by collecting consumer complaints against depository institutions and coordinating those complaints with depository institutions to get a response from them and, if necessary, appropriate legal assistance offices. Similarly, nonprofit partners including the National Military Family Association, the Association of Military Banks of America, and the National Association of Federal Credit Unions provide information on SCRA protections to their members. But DOD officials also noted that partners are not required by DOD to provide SCRA education, and that such education may represent a rather small component of the partnership efforts.
DOD established its financial education partnerships by signing memorandums of understanding (MOU) with the federal agencies and nonprofit organizations engaged in its Financial Readiness Campaign. The MOUs include the organizations’ pledges to support the efforts of military personnel responsible for providing financial education and financial counseling to servicemembers and their families as well as additional responsibilities of the individual partners. According to the program manager of DOD’s Financial Readiness Program (in the Office of Family Policy, Children and Youth, which collaborates with the partners), there are no formal expectations that any of the partners provide education about SCRA protections. She noted that such a requirement would not make sense for some partners, including those that do not interact directly with servicemembers but instead provide educational materials about financial well-being. The manager said that it was important that all of DOD’s partners be aware of the SCRA protections, and she planned to remind each of them about the SCRA protections in an upcoming partners meeting. The program manager noted that although her office has not conducted any formal evaluations of the partnerships to determine how effective the partners have been in fulfilling the educational responsibilities outlined in their MOUs, she believes that they have functioned well. According to personal financial managers in the individual services (who work with the personal financial advisors who provide financial education to servicemembers at military installations) and representatives from a military association, the education partnerships have been working well overall. But they also told us that obtaining additional information about the educational resources available through the partnerships and their performance would be helpful. 
For example, one association noted that it could benefit from a central website to serve as a clearinghouse for educational information from the various financial education partners. Staff from another organization said that DOD should regularly review all of these partners to ensure they were fulfilling their responsibilities. DOD officials told us they would likely discuss these suggestions at upcoming meetings with their financial education partners. The program manager of the Personal Financial Readiness Program also noted that to manage the partnerships, she regularly communicates with the partners to stay informed of their activities. In addition, she said that the Office of Family Policy, Children and Youth has been encouraging individual installation commanders to enter into agreements with local nonprofit organizations. The local partners would provide education assistance more tailored to servicemembers’ situations than the more global information the DOD partners provided. As we noted in our 2012 report, DOD has surveyed servicemembers on whether they had received training on SCRA protections, but had not assessed the effectiveness of its educational methods. To assess servicemembers’ awareness of SCRA protections, in 2008 DOD asked in its SOF surveys if active duty servicemembers and members of the Reserve components had received SCRA training. Forty-seven percent of members of the Reserve components—including those activated in 2008—reported that they had received SCRA training and 35 percent of regular active duty servicemembers reported that they had received training. Without an assessment of the effectiveness of its educational methods (for example, by using focus groups of servicemembers or results of testing to reinforce retention of SCRA information), we noted that DOD might not be able to ensure it reached servicemembers in the most effective manner. 
We recommended that DOD assess the effectiveness of its efforts to educate servicemembers on SCRA and determine better ways for making servicemembers aware of their SCRA rights and benefits, including improving the ways in which reservists obtain such information. In response to our recommendation, as of December 2013, DOD was reviewing the results of its recent surveys on the overall financial well-being of military families. The surveys have been administered to three groups: servicemembers, military financial counselors, and military legal assistance attorneys. While the surveys are not focused solely on SCRA, they take into account all financial products, including mortgages and student loans, covered by SCRA. DOD officials explained that they would use the results, including any recommendations from legal assistance attorneys, to adjust training and education on SCRA benefits, should such issues be identified. Our findings for this report—that many servicemembers appeared not to have taken advantage of their ability to reduce their mortgage interest rates as entitled—appear to reaffirm that DOD’s SCRA education efforts could be improved and that an assessment of the effectiveness of these efforts is still warranted. We provided a draft of this report to the Department of Defense, the Board of Governors of the Federal Reserve System, the Office of the Comptroller of the Currency, and the Bureau of Consumer Financial Protection for comment. The Department of Defense and the Office of the Comptroller of the Currency provided technical comments that were incorporated, as appropriate. We are sending copies of this report to interested congressional committees. We will also send copies to the Chairman of the Board of Governors of the Federal Reserve System, the Secretary of Defense, the Comptroller of the Currency, and the Director of the Consumer Financial Protection Bureau.
In addition, this report will be available at no charge on the GAO web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. This report examines (1) available information on changes in the financial well-being of servicemembers who received foreclosure-prevention and mortgage-related interest rate protections under SCRA, including the extent to which servicemembers became delinquent on their mortgages after leaving active duty and the impact of protection periods; and (2) the Department of Defense’s (DOD) partnerships with public- and private-sector entities to provide financial education and counseling about SCRA mortgage protections to servicemembers and views on the effectiveness of these partnerships. To assess changes in the financial well-being of servicemembers who receive SCRA mortgage protections, including the extent to which servicemembers became delinquent on their mortgages after leaving active duty and the impact of protection periods, we analyzed legislation and reviewed our prior work on SCRA. We obtained and analyzed loan-level data, institution-specific summary data, or both, from four financial institutions (three large single-family mortgage servicers and a large credit union). A fifth institution (a large single-family servicer) we contacted was unable to provide us with data for inclusion in our review. We did not identify the financial institutions by name in order to protect the privacy of individual borrower data. Table 2 provides a summary of the data we obtained.
We conducted a quantitative analysis of the data, which included information on (1) loan history, including loan status and total fees; (2) loan details such as the loan-to-value ratio and principal balance; and (3) financial outcomes of borrowers, such as initial and updated credit scores and whether the borrowers filed for bankruptcy or cured mortgage defaults. After controlling for loan and demographic characteristics and other factors to the extent that such data were available, we developed logistic regression models to estimate the probability of different populations becoming delinquent on their mortgage and curing their mortgage delinquency (by bringing their payments current). The estimates from these models may contain some degree of bias because we could not control for economic or military operations changes, such as changes in housing prices or force deployment that might affect a servicemember’s ability to repay a mortgage. Our analysis is not based on a representative sample of all servicemembers eligible to receive SCRA mortgage protections and therefore is not generalizable to the larger population. Moreover, we identified a number of limitations in the data of the four financial institutions. For example, the various servicer datasets identify SCRA status imperfectly and capture activity over different time periods with different periodicities. We also cannot rule out missing observations or other inaccuracies. Other issues include conflicting data on SCRA eligibility, data reliability issues related to the DOD database used to identify servicemembers (which is operated by the Defense Manpower Data Center, or DMDC), data quality differences across time within a given servicer’s portfolio, and data artifacts that may skew the delinquency statistics for at least one institution. 
Lastly, as servicer systems vary across institutions, none of the servicers from which we requested data provided us with every data field we requested for our loan-level analysis. Due to the differences in the data provided by each institution, we conducted a separate quantitative analysis of the data from each institution that provided loan-level data. To the extent that data were available, we also calculated summary statistics for each institution on the changes in financial well-being of the servicemembers, which allowed for some basis of comparison across institutions in levels of delinquency and cure rates. To conduct analyses that were as reliable as the data allowed, we also corrected apparent data errors, addressed inconsistencies, and corroborated results with past work where possible. Through these actions, and interviews with knowledgeable financial institution officials, we determined that the mortgage data and our data analysis were sufficiently reliable for the limited purposes of this report. However, because some servicer practices related to SCRA have made it difficult to distinguish SCRA-protected servicemembers from other military personnel, the relative delinquency and cure rates we derived from these data represent approximations, are not definitive, and should be interpreted with caution. Furthermore, we analyzed data from DOD’s Status of Forces (SOF) surveys from 2007 to 2012, which are administered to a sample of active duty servicemembers and reservists on a regular basis and cover topics such as readiness and financial well-being. We determined the survey data we used were sufficiently reliable for our purposes. We also analyzed DOD data on the size of the active duty military population and DOD survey data to estimate the percentage of servicemembers who make payments on a mortgage and may be eligible for SCRA protections, and the percentage of military borrowers that our sample of borrowers from selected financial institutions covers.
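The logistic regression models described in this methodology estimate a probability of delinquency (or of curing a delinquency) from loan and borrower characteristics. A minimal sketch of how such a model scores a single loan follows; the function, coefficients, and covariates are invented for illustration and are not GAO's fitted estimates.

```python
import math

def predicted_delinquency_probability(intercept, coefficients, features):
    """Logistic model: P(delinquent) = 1 / (1 + exp(-(b0 + sum(b_i * x_i)))).
    Illustrative only; an actual model would be fit to loan-level data,
    with separate fits per institution because available fields differed."""
    z = intercept + sum(b * x for b, x in zip(coefficients, features))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical covariates: SCRA-protected flag, loan-to-value ratio,
# and a scaled credit score; all coefficient values are made up.
p = predicted_delinquency_probability(-3.0, [0.8, 1.5, -0.02], [1.0, 0.9, 40.0])
```

The sign of each fitted coefficient would indicate whether a characteristic raises or lowers the estimated delinquency risk, holding the other covariates constant.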
Lastly, we also interviewed two lawyers with knowledge of SCRA; officials of five selected financial institutions; DOD officials (including those responsible for individual military services, the Status of Forces Surveys, and a database of active duty status of servicemembers); and representatives of military associations to obtain available information or reports on the impact of SCRA protections on the long-term financial well-being of servicemembers and their families. To examine the effectiveness of DOD’s partnerships, we analyzed documentation on DOD’s partnerships with public and private entities that provide financial education and counseling to servicemembers. For example, we reviewed memorandums of understanding DOD signed with the federal agencies and nonprofit organizations engaged in its Financial Readiness Campaign. We reviewed the nature of such partnerships, including information or efforts related to SCRA mortgage protections. We also conducted interviews with DOD officials, including the program manager of DOD’s Personal Financial Readiness Program and personal financial managers in each of the individual military services; selected DOD partners that provide SCRA-related education to servicemembers; a military support association; and two lawyers with knowledge of SCRA. We asked about how such partnerships provide SCRA mortgage education and counseling and gathered views on and any assessments of the partnerships’ effectiveness. We conducted this performance audit from June 2013 to January 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
In addition to the contact named above, Cody Goebel, Assistant Director; James Ashley; Bethany Benitez; Kathleen Boggs; Abigail Brown; Rudy Chatlos; Grant Mallie; Deena Richart; Barbara Roesmann; and Jena Sinkfield made key contributions to this report.

SCRA seeks to protect eligible active duty military personnel in the event that their military service prevents them from meeting financial obligations. Mortgage-related protections include prohibiting mortgage servicers from foreclosing on servicemembers' homes without court orders and capping fees and interest rates at 6 percent. Traditionally, servicemembers received 90 days of protection beyond their active duty service, but this period was extended to 9 months in 2008 and to 1 year in 2012. The legislation that provided the 1-year protection period also mandated that GAO report on these protections. This report examines (1) available information on changes in the financial well-being of servicemembers who received foreclosure-prevention and mortgage-related interest rate protections under SCRA, including the extent to which they became delinquent and the impact of protection periods; and (2) DOD's partnerships with public- and private-sector entities to provide financial education and counseling about SCRA mortgage protections to servicemembers and views on the effectiveness of these partnerships. To address these objectives, GAO sought and received data from three large mortgage servicers and a large credit union covering a large portion of all mortgage loans outstanding and potentially SCRA-eligible borrowers. GAO also reviewed documentation on DOD's partnerships and relevant education efforts related to SCRA mortgage protections. GAO interviewed DOD officials and partners who provided SCRA mortgage education and counseling.
The number of servicemembers with mortgages eligible for Servicemembers Civil Relief Act (SCRA) mortgage protections is unknown because servicers have not collected this information in a comprehensive manner. Based on the limited and nongeneralizable information that GAO obtained from the three mortgage servicers and the credit union, a small percentage of the total loan portfolios were identified as eligible for SCRA protections. Two large servicers had loan-level data on delinquency rates: for borrowers identified as SCRA-eligible, delinquency rates ranged from 16 to 20 percent, compared with 4 to 8 percent for these servicers' other military borrowers. Delinquencies at the credit union were under 1 percent. Some servicemembers appeared to have benefited from the SCRA interest rate cap of 6 percent, but many eligible borrowers had apparently not taken advantage of this protection. For example, at one institution 82 percent of those who could benefit from the interest rate caps still had mortgage rates above 6 percent. The data also were insufficient to assess the impact of SCRA protections after servicemembers left active duty, although one institution's limited data indicated that military borrowers had a higher risk of delinquency in the first year after leaving active duty. But those with SCRA protections also were more likely to cure delinquencies during this period than the institution's other military borrowers. Given the many limitations to the data, these results should only be considered illustrative. Most of these institutions indicated that they made recent changes to better identify SCRA-eligible borrowers and improve the accuracy of the data. The Department of Defense (DOD) has partnerships with many federal agencies and nonprofit organizations to help provide financial education to servicemembers, but limited information on the effectiveness of these partnerships exists.
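To illustrate why the unused 6 percent cap matters financially, the following sketch applies the standard fixed-rate amortization formula to a hypothetical loan. The balance, original rate, and term are illustrative assumptions only; they are not drawn from the servicer data discussed in this report.

```python
def monthly_payment(principal, annual_rate, months):
    """Standard fixed-rate amortization formula: P * r / (1 - (1 + r)^-n)."""
    r = annual_rate / 12  # monthly interest rate
    return principal * r / (1 - (1 + r) ** -months)

# Hypothetical example: a $250,000, 30-year mortgage originated at 8 percent,
# with the SCRA cap reducing the rate to 6 percent during protection.
original = monthly_payment(250_000, 0.08, 360)  # about $1,834/month
capped = monthly_payment(250_000, 0.06, 360)    # about $1,499/month

print(f"At 8 percent:      ${original:,.2f}/month")
print(f"At 6 percent cap:  ${capped:,.2f}/month")
print(f"Monthly savings:   ${original - capped:,.2f}")
```

Under these assumed terms, invoking the cap would lower the payment by roughly $335 per month, which suggests the scale of the benefit forgone by eligible borrowers still paying rates above 6 percent.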
DOD and its partners have focused on promoting general financial fitness rather than providing information about SCRA protections. But some partners provide SCRA outreach and support to servicemembers. For example, the Bureau of Consumer Financial Protection has an Office of Servicemember Affairs that provides SCRA outreach to servicemembers and mortgage servicers responsible for complying with the act. Although stakeholders GAO interviewed generally offered favorable views of these partnerships, some said obtaining additional information about educational resources and partnership performance could improve programs. However, DOD has not undertaken any formal evaluations of the effectiveness of these partnerships. This finding is consistent with GAO's July 2012 review of SCRA education efforts, which found that DOD had not assessed the effectiveness of its educational methods and therefore could not ensure it reached servicemembers in the most effective manner. GAO recommended in July 2012 that DOD assess the effectiveness of its efforts to educate servicemembers on SCRA to determine better ways for making servicemembers (including reservists) aware of SCRA rights and benefits. In response to that recommendation, as of December 2013, DOD was reviewing the results of its recent surveys on the overall financial well-being of military families and planned to use these results to adjust training and education for SCRA, as appropriate. GAO's current finding that many servicemembers did not appear to be taking advantage of the SCRA interest rate cap appears to reaffirm that DOD's SCRA education efforts could be improved and that an assessment of the effectiveness of these efforts is still warranted.
For decades, the United States has struggled to prevent the proliferation of nuclear, biological, and chemical weapons. Nevertheless, the number of countries that possess nuclear, biological, or chemical capabilities grows each year. As a result, countries possessing these weapons could threaten the interests of the United States in every possible theater of the world. The Gulf War experience exposed (1) weaknesses in the U.S. forces’ preparedness to defend against chemical or biological agent attacks and (2) the risks associated with reliance on post-mobilization activities to overcome deficiencies in chemical and biological readiness. Post-conflict studies confirmed that U.S. forces were not fully prepared to defend against Iraqi use of chemical or biological weapons and could have suffered significant casualties had they been used. Units and individuals often arrived in theater without needed equipment, such as protective clothing and adequate chemical and biological agent detectors. Active and reserve component forces required extensive chemical and biological training before and after arrival in Southwest Asia. Medical readiness problems included inadequate equipment and training. Biological agent vaccine stocks, and policies and procedures for their use, were also inadequate. While post-mobilization and in-theater activities increased readiness, equipment and training problems persisted to varying degrees throughout the conflict. Complacency and the absence of command emphasis on chemical and biological defense prior to deployment were among the root causes of this lack of preparedness. We previously reported on these problems in May 1991. Since the Gulf War, Congress has expressed concern about the proliferation of chemical and biological weapons and the readiness of U.S. forces to operate in a contaminated environment. In November 1993, the National Defense Authorization Act for Fiscal Year 1994 (P. L. 
103-160) directed the Secretary of Defense to take specific actions designed to improve chemical and biological defense and to report annually to Congress on the status of these efforts. Although DOD is taking steps to improve the readiness of U.S. ground forces to conduct operations in a chemical or biological environment, serious weaknesses remain. Many early deploying active and reserve units do not possess the amount of chemical and biological equipment required by regulations, and new equipment development and procurement are often proceeding more slowly than planned. Many units are not trained to existing standards, and military medical capability to prevent and treat casualties on a contaminated battlefield is very limited. During the Gulf War, units and individuals often deployed without all the chemical and biological detection, decontamination, and protective equipment they needed to operate in a contaminated environment. For example, some units did not have sufficient quantities or the needed sizes of protective clothing, and chemical detector paper and decontamination kits in some instances had passed expiration dates by as much as 2 years. These shortages in turn caused logistical problems, such as the rapid depletion of theater equipment reserves, and required extraordinary efforts by logisticians and transporters to rectify the situation during the 6-month interval between deployment and the initiation of major combat. Had chemical or biological weapons been used during this period, some units might have suffered significant, unnecessary casualties. To prevent this problem from recurring in future conflicts, in 1993 the U.S. Forces Command (FORSCOM) revised its requirements regarding the amount of chemical and biological defense equipment early deploying active and reserve units are required to store on hand. 
This action was intended to ensure that these units would have sufficient equipment on hand upon deployment until in-theater logistical support could be established. We found that neither the Army’s approximately five active divisions composing the crisis response force (divisions with mobilization to deployment requirements of less than 30 days) nor any of the early deploying Army reserve units we visited were in full compliance with the new stock level requirements. All had shortages of various types of critical equipment. For example, three of the active divisions had 50 percent or greater shortages of protective clothing (battle dress overgarments), and shortages of other critical items (such as protective boots, gloves, hoods, helmet covers, mask filters, and decontamination kits) ranged from no shortage to an 84-percent shortage depending on the unit and the item concerned. Shortages in on-hand stocks of this equipment were often exacerbated by poor inventorying and reordering techniques, shelf-life limitations, and difficulty in maintaining appropriate protective clothing sizes. For example, none of the active units we visited had determined how many and what sizes of chemical protective overgarments were needed. FORSCOM officials told us the Army’s predetermined standard formula for the numbers of different clothing sizes needed by the average unit was often inaccurate, particularly for support units that are likely to have larger percentages of female soldiers. Furthermore, shortages of chemical protective clothing suits are worsening because most of the active divisions we visited had at least some of these items on hand with 1995 expiration dates. Unit stock levels are also being affected by problems with the availability of appropriate warehouse space at most of the installations we visited. Army officials at FORSCOM and in the active units we visited were aware of these shortages.
They said that the operation and maintenance funds normally used to purchase this equipment had been consistently diverted by unit commanders to meet other higher priority requirements such as base operating costs, quality-of-life considerations, and costs associated with other-than-war deployments such as those to Haiti and Somalia. Our review of FORSCOM financial records showed that while the operation and maintenance account included funds budgeted for chemical and biological training and equipment, very little had actually been spent on equipment during fiscal year 1995 at the FORSCOM units we visited. Army records were inadequate to determine for what purposes the diverted funds had been used except by reviewing individual vouchers. We did not attempt to review these because of the time and resources such a review would require. Army officials acknowledged that increasing operation and maintenance funding levels was unlikely to result in increased unit chemical equipment stocks unless operation and maintenance funding increases are specifically designated for this purpose. Numerous other activities also dependent on operation and maintenance funding are being given a higher priority than chemical defense equipment by all the early deploying active Army divisions we visited. The cost of purchasing this equipment is relatively low. Early deploying active divisions in the continental United States could meet current stock requirements for an additional cost of about $15 million. However, some may need to acquire additional warehouse storage space for this equipment. FORSCOM officials told us that due to a variety of funding and storage problems, they were considering decreasing chemical defense equipment contingency stock requirements to the level needed to support only each early deploying division’s ready brigade and relying on depots to provide the additional equipment needed on a “just-in-time” basis before deployment.
FORSCOM officials told us that other potential solutions were also being considered, such as funding these equipment purchases through procurement rather than operation and maintenance accounts, or transferring responsibility for purchasing and storing this material on Army installations to the Defense Logistics Agency. It is unclear to what extent these and other alternatives might be effective in providing the needed equipment prior to deployment. At the beginning of the Gulf War, U.S. forces were vulnerable because the services lacked such things as (1) effective mobile systems for detecting and reporting chemical or biological agents; (2) a decontamination solution suitable for use in sensitive interior areas of aircraft, ships, and vehicles; and (3) a suitable method for decontaminating large areas such as ports and airfields. Protective clothing was problematic because it was heavy, bulky, and too hot for warm climates. In response to lessons learned in the Gulf War and subsequent congressional guidance, DOD has acted to improve the coordination of chemical and biological doctrine, requirements, research, development, and acquisition among DOD and the military services. During 1994 and 1995, DOD planned and established the Joint Service Integration and Joint Service Materiel Groups, which are overseen by a single office within DOD—the Assistant Secretary of Defense (Atomic Energy/Chemical and Biological Matters). The Joint Service Integration Group is to prioritize chemical and biological research efforts and establish a modernization plan, and the Joint Service Materiel Group is to develop the research, development, acquisition, and logistics support plans. These groups have begun to implement the requirements of Public Law 103-160. However, progress has been slower than expected.
At the time of our review, the Joint Service Integration Group expected to produce its proposed (1) list of chemical and biological research priorities and (2) joint service modernization plan and operational strategy during March 1996. The Joint Service Materiel Group expects to deliver its proposed plan to guide chemical and biological research, development, and acquisition in October 1996. It is unclear whether or when DOD will approve these plans. However, fiscal year 1998 is the earliest that DOD can begin their formal implementation if they are quickly approved. Consolidated research and modernization plans are important for avoiding duplication among the services and otherwise achieving the most effective use of limited resources. DOD officials told us progress by these groups has been adversely affected by personnel shortages and other assigned tasks. DOD’s efforts to develop and improve specific equipment have had mixed results. The Fox mobile reconnaissance system, fielded during the Gulf War, features automated sampling, detection, and warning equipment. However, due to budgetary constraints, DOD approved the acquisition of only 103 of the more than 200 Fox systems originally planned. Early deploying Army mechanized and armored divisions have been assigned 6 Fox vehicles each, the Marine Corps has 10, and virtually all the remainder have been assigned to a chemical company from which they would be assigned as needed in the event of a conflict. Our discussions with Army officials revealed concerns about the adequacy of assigning only 6 Fox vehicles per division. They said a total of 103 Fox vehicles might be insufficient to meet needs if chemical and/or biological weapons are used in two nearly simultaneous regional conflicts, particularly until the Army’s light divisions and the Marine Corps are equipped with a planned smaller and lighter version of a reconnaissance system. 
In January 1996, DOD also began to field the Biological Integrated Detection System, a mobile system for identifying biological agents, and plans to field 38 by September 1996. Other programs designed to address critical battlefield deficiencies have been slow to resolve problems. DOD’s 1995 Annual Report to Congress identified 11 chemical and biological defense research goals it expected to achieve by January 1996. Of these, five were met on time. Of the remaining goals, two will not be achieved by 1997, and it is unclear when the remainder will be achieved. An effort ongoing since 1987 to develop a less corrosive and labor-intensive decontamination solution is not expected to be completed until 2002. Work initiated in 1978 to develop an Automatic Chemical Agent Alarm (designed to provide visual, audio, and command-communicated warning of chemical agents) remains incomplete, and efforts to develop wide-area warning and decontamination capabilities are not expected to be achieved until after the year 2000. Army and Marine Corps regulations require that individuals be able to detect the presence of chemical agents, quickly put on their protective suits and masks, decontaminate their skin and personal equipment, and evaluate casualties and administer first aid. Units must be able to set alarms to detect agents, promptly report hazardous agent attacks to higher headquarters, mark and bypass contaminated areas, and remove hazardous agents from equipment and vehicles. Commanders are required to assess their units’ vulnerability to chemical or biological attacks, determine the level of protection needed by their forces, implement a warning and reporting system, employ chemical units to perform reconnaissance and decontamination operations, and ensure that adequate measures are in place to evacuate and treat casualties.
Training for these tasks is accomplished through a variety of live and simulated exercises conducted at units’ home stations and at combat training centers such as the Army’s National Training Center at Fort Irwin, California, and the Marine Corps Air Ground Combat Center at 29 Palms, California. Since the Gulf War, the services have acted to improve their chemical and biological training. They (1) issued policy statements on the importance of chemical and biological readiness, (2) revised doctrinal guidance and training regulations, and (3) collocated chemical defense training for all four services at the Army’s Chemical School, Fort McClellan, Alabama. Commanders were instructed to ensure that their units were fully trained to standard to defend and sustain operations against battlefield chemical and biological hazards. Further, they were instructed that chemical and biological training must be fully integrated into unit exercises and must test the capability of commanders, staffs, and units to perform their mission under chemical and biological conditions. In spite of these efforts, many problems of the type encountered during the Gulf War remain uncorrected, and U.S. forces continue to experience serious training-related weaknesses in their chemical and biological proficiency. In a series of studies conducted from 1991 to 1995, the Army found serious weaknesses at all levels in chemical and biological skills. For example, a 1993 Army Chemical School study found that a combined arms force of infantry, artillery, and support units would have extreme difficulty in performing its mission and suffer needless casualties if forced to operate in a chemical or biological environment. The Army concluded that these weaknesses were due to the force being only marginally trained to operate in a chemical and biological environment. Many of these problems had been identified a decade ago.
For example, the Army found similar problems in three other studies of mechanized and armored units conducted by the Chemical School in 1986, 1987, and 1989. Our analysis of Army readiness evaluations, trend data, and lessons learned completed from 1991 to 1995 also showed serious problems. At the individual, unit, and commander level, the evaluations showed a wide variety of problems in performing basic tasks critical to surviving and operating in a chemical or biological environment. These problems included (1) inability to properly don protective masks, (2) improper deployment of detection equipment, (3) inability to administer first aid to chemical or biological casualties, (4) inadequate planning on the evacuation of casualties exposed to chemical or biological agents, and (5) failure to integrate chemical and biological issues into operational plans. More detailed information on these problems is contained in appendixes I and II. Our work showed that the Marine Corps also continued to be affected by many of the same problems experienced during the Gulf War. Marine Corps 1993 trendline data from its combat training center at 29 Palms, California, showed that (1) submissions of chemical and biological warning reports were not timely, (2) units and individuals were inexperienced with detection equipment, and (3) units did not properly respond to a chemical attack, issue alarms to subordinate elements, and follow proper unmasking techniques following a chemical attack. Current U.S. military strategy is based on joint air, land, sea, and special operations forces operating together in combat and noncombat operations. The Chairman of the Joint Chiefs of Staff (CJCS) Exercise Program is the primary method DOD uses to train its commanders and forces for joint operations. Our analysis of exercises conducted under the program showed that little chemical or biological training was being done.
In October 1993, the Joint Staff issued the Universal Joint Task List for the regional commanders in chief (CINC) and the services to use to help define their joint training requirements. The list includes 23 chemical and biological tasks to be performed, such as gathering intelligence information on the enemy’s chemical and biological warfare capabilities, assessing the effects of these agents on operations plans, and performing decontamination activities. In fiscal year 1995, 216 exercises were conducted under the CJCS program. These were planned, conducted, and evaluated by each CINC. Our analysis of the exercises conducted by four major CINCs (U.S. Atlantic, Central, European, and Pacific commands) in fiscal year 1995 and planned for fiscal year 1996 showed that little joint chemical or biological training was being conducted. Overall, these CINCs conducted at least 70 percent of the total number of CJCS exercises held in fiscal year 1995 and planned for fiscal year 1996. However, only 10 percent of the CJCS exercises they conducted in 1995 and 15 percent of those to be conducted in fiscal year 1996 included any chemical or biological training. Of the exercises conducted, none included all 23 tasks, and the majority included fewer than half of these tasks. Appendixes III and IV show the amount of joint training being conducted by these CINCs. Two reasons account for the small amount of joint chemical and biological training. First, notwithstanding Joint Staff guidance to CINCs on the need to train for chemical and biological warfare threats, the CINCs generally consider chemical and biological training and preparedness to be the responsibility of the individual military services. Second, most of the CINCs have assigned chemical and biological issues a lower priority than other issues that they feel relate more directly to their mission.
In this regard, CINCs and other major commanders have made a conscious decision to better prepare for other, more likely threats and to assume greater risk regarding chemical and biological defense. For many years, DOD has maintained a medical research and development program for biological defense. However, at the time of the Gulf War, the United States had neither fielded equipment capable of detecting biological agents nor stocked adequate amounts of vaccine to protect the force. When the Gulf War started, DOD also had not established adequate policies and procedures for determining which vaccines needed to be administered, when they were to be given, and to whom. According to DOD officials, this caused much DOD indecision and delay and resulted in U.S. forces being administered varying types of vaccines about 5 months after they began arriving in theater and only a month or so before the major ground offensive began. Sufficient protection was not provided by the time the offensive began either, since virtually all biological agent vaccines require a minimum of 6 to 12 weeks or longer after immunization to become effective. Since the Gulf War, DOD has increased the attention given to biological warfare defense. DOD consolidated the funding and management of several biological warfare defense activities, including vaccines, under the new Joint Program Office for Biological Defense. In November 1993, DOD established the policy, responsibilities, and procedures for stockpiling biological agent vaccines and determined which personnel should be immunized and when the vaccines should be administered. This policy specifically states that personnel assigned to high-threat areas and those predesignated for immediate contingency deployment to these areas (such as personnel in units with deployment dates up to 30 days after mobilization) should be vaccinated in sufficient time to develop immunity prior to deployment. 
DOD has also identified which biological agents constitute critical threats and determined the amount of vaccine that should be stocked for each. At present, the amount of vaccines stocked remains insufficient to protect the force. The Joint Chiefs of Staff and other high-ranking DOD officials have not yet approved implementation of the established immunization policy. No decision has yet been made on which vaccines to administer, nor has an implementation plan been developed. DOD officials told us the implementation plan should be developed by March 1996, but this issue is highly controversial within DOD, and it is unclear whether the implementation plan will be approved and carried out. Until such an implementation plan is developed and approved and immunizations are given, existing vaccines cannot provide the intended protection from biological agents for forces already stationed in high-threat areas and those designated for early deployment if a crisis occurs and biological agents are used. Problems also exist with regard to the vaccines available to DOD for immunization purposes. Only a few biological agent vaccines have been approved by the Food and Drug Administration (FDA). Many remain in Investigational New Drug (IND) status. Although IND vaccines have long been safely administered to personnel working in DOD vaccine research and development programs, the FDA usually requires large-scale field trials in humans to demonstrate new drug safety and effectiveness before approval. DOD has not performed such field trials because of the ethical and legal considerations involved in deliberately exposing humans to toxic or lethal biological agents; nor has it effectively pursued other means of obtaining FDA approval for IND vaccines. IND vaccines can therefore now be administered only under approved protocols and with written informed consent. 
During the Gulf War, DOD requested and received a waiver from the FDA requirement for written informed consent since this was a contingency situation. If DOD intends to use vaccines to provide protection against biological agents to personnel already assigned to high-threat areas or designated for rapid deployment, then it needs to make the required decisions for proceeding with immunizations and either using IND vaccines or obtaining FDA approval for them. DOD officials told us they hoped to acquire a prime contractor during 1996 to subcontract vaccine production with the pharmaceutical industry and take the actions needed to obtain FDA approval for existing IND vaccines. Medical units assigned to support the early deploying Army divisions we visited often lacked certain types of equipment needed to treat casualties in a chemically or biologically contaminated environment. For example, these units are authorized chemical patient treatment sets and patient decontamination kits that contain items such as suction apparatuses and airways, aprons, gloves, scissors, and drugs and chemicals for treating or decontaminating casualties. Overall, the medical units we visited had on hand only about 50 to 60 percent of their authorized patient treatment kits and patient decontamination kits. Some units we visited had not been issued any of these kits. Further, our inspection of some kits showed that they were missing critical components, such as drugs used for treating chemical casualties. Army officials said that the shelf life of these items had expired and that operation and maintenance funds were not available to replace them. Forward medical support for combat units, such as battalion aid stations and mobile army surgical hospitals, needs to be capable of operating in contaminated environments. However, none of the medical units we visited had any type of collective shelter that would enable them to provide such treatment.
Army officials acknowledged that the lack of shelters would virtually prevent any forward area treatment of casualties, and would cause greater injury and death rates. They told us that older versions of collective shelters developed to counter the Soviet threat were unsuitable, unserviceable, and no longer in use. While new shelters—both a field hospital version and a small mobile version mounted on a vehicle—are in development, they are not expected to be available for initial issuance to units until at least fiscal years 1997 and 1998. Furthermore, Army officials told us that the Army plans to limit issuance of the mobile shelters to about 90 percent of the crisis response force, has canceled plans for a tracked version for mechanized and armored divisions, and might not purchase the currently planned version due to its funding priority. Military physicians assigned to medical units supporting early deploying Army divisions need to be trained to treat and manage casualties in a chemical or biological environment. All Army physicians attend the Medical Officer Basic Course and receive about 44 hours of training on nuclear, biological, and chemical (NBC) topics. The Officer Advanced Course provides another 40 hours of instruction for medical officers when they reach the rank of major or lieutenant colonel, but is optional. Also optional, the Management of Chemical and Biological Casualties Course provides 6-1/2 days of classroom and field instruction to military health care providers and is designed to establish the essential skills needed to save lives, minimize injury, and conserve fighting strength in a chemical or biological warfare environment. During Operation Desert Storm, this course was provided on an emergency basis to medical units already deployed to the theater. 
These three courses constitute the bulk of formal military medical training specifically oriented toward chemical and biological warfare casualty treatment, with some additional training provided through other shorter courses. Our examination showed that of the physicians either currently assigned to medical units in selected early deploying Army divisions or designated to report to these units at deployment, only a limited number had completed the medical officer advanced and casualty management courses. The percentage of physicians who had attended the advanced course ranged from 19 to 53 percent, while from 3 to 30 percent had attended the casualty management course. Army medical officials told us that the demands of providing peacetime medical care to military personnel and their dependents often prevented attendance at these courses. Furthermore, the Army had made no effort to monitor whether these physicians had received this training, and attendance at the casualty management course was neither required nor targeted toward physicians assigned to early deploying units or others needing this training. We also found that little or no training was being conducted on casualty decontamination from chemical or biological agents at most of the early deploying divisions and medical units we visited. There was usually confusion among these units regarding who was responsible for performing this task. According to Army doctrine, tactical units are expected to conduct initial casualty decontamination before their evacuation or arrival at forward medical treatment facilities. Army lessons learned from Operation Desert Storm noted that some units lacked understanding of the procedures and techniques used to decontaminate casualties. This situation had not been corrected at the time of our review. Although DOD has taken actions to improve chemical and biological defense since the Gulf War, DOD’s emphasis has not been sufficient to resolve many serious lingering problems.
Our measurement of key indicators—DOD funding, staffing, mission priority, and monitoring—showed that chemical and biological defense tends to receive a lower priority than other threat areas. Historically, DOD has allocated less than 1 percent of its total budget to chemical and biological defense. Annual funding for this area has decreased by over 30 percent in constant dollars, from approximately $750 million in fiscal year 1992 to $504 million in fiscal year 1995. Funding for chemical and biological defense activities could decrease further if the Secretary of Defense agrees to a recent proposal by the Joint Staff. In response to a recent Joint Staff recommendation to reduce counterproliferation funding by over $1 billion over the next 5 years, DOD identified potential reductions of approximately $800 million. DOD officials told us that, if implemented, this reduction would severely impair planned chemical and biological research and development efforts and reverse the progress already made in several areas. For example, procurement of the Automatic Chemical Agent Alarm would be delayed well into the next century, as would the light NBC reconnaissance system. At the time we completed our work, DOD officials told us that DOD was considering reducing the amount of the proposed funding reduction to about $33 million, which would have a far less serious impact on chemical and biological warfare programs. However, we believe that the limited funding devoted to chemical and biological defense, the tendency to reduce this funding to avoid cuts in other operational areas, and the tendency of commanders to divert operation and maintenance funding budgeted for chemical and biological defense are indicative of the lower priority often given this area. Chemical and biological defense activities were frequently understaffed, and assigned personnel were heavily tasked with other, unrelated duties.
At the CINC and military service levels, for example, chemical officers assigned to CINC staffs were often heavily tasked with duties not related to chemical and biological defense. At FORSCOM and U.S. Army III Corps headquarters, chemical staff positions were being reduced, and no chemical and biological staff position exists at the U.S. Army Reserve Command. Finally, according to DOD officials, the Joint Service Integration and Joint Service Materiel Groups (the groups charged with overseeing research and development efforts for chemical and biological equipment) have made less progress than planned due to staffing shortages and other assigned tasks. The priority given to chemical and biological defense matters varied widely. Most CINCs appear to assign chemical and biological defense a lower priority than other threats. CINC staff members told us that responsibility for chemical and biological defense training was primarily a service matter, even though the Joint Staff has tasked the CINCs with ensuring that their forces are trained in certain joint chemical and biological tasks. Several high-ranking DOD officials told us that U.S. forces still face a limited, although increasing, threat of chemical and biological warfare. At Army corps, division, and unit levels, the priority given to this area depended on the commander's opinion of its relative importance. For example, one early deploying division we visited had an aggressive system for chemical and biological training, monitoring, and reporting. At another, the division commander made a conscious decision to emphasize other areas because of limited resources and other more immediate requirements, such as deployments for operations other than war and quality-of-life considerations.
As previously discussed, Army medical officials told us that the demands of providing peacetime medical care to military personnel and their families often interfered with medical training oriented toward combat-related subjects such as chemical and biological casualties. Officials from Army major commands, corps, divisions, and individual units said that chemical and biological defense skills not only tended to be difficult to attain and highly perishable but also were often given a lower priority than other areas for the following reasons: too many other higher priority taskings, low levels of monitoring or interest by higher headquarters, the difficulty of performing tasks in cumbersome and uncomfortable protective gear, the time-consuming nature of chemical training, heavy reliance on post-mobilization training and preparation, and the perceived low likelihood of chemical and biological warfare. The lower emphasis given to chemical and biological matters is also demonstrated by weaknesses in the methods used to monitor its status. DOD’s current system for reporting overall readiness to the Joint Staff is the Status of Resources and Training System (SORTS). This system measures the extent to which individual service units possess the required resources and are trained to undertake their wartime missions. SORTS was established to provide the current status of specific elements considered essential to readiness assessments, such as personnel and equipment on hand, equipment condition, and the training of operating forces. The SORTS elements of measure, “C” ratings that range from C-1 (best) to C-4 (worst), are probably the most frequently cited indicator of readiness in the military. In a 1993 effort to improve the monitoring of chemical and biological defense readiness, DOD required units from all services to assess their equipment and training status for operations in a contaminated environment and report this data as a distinct part of SORTS. 
DOD’s 1994 and 1995 annual reports to Congress on nuclear, biological, and chemical warfare defense reported the continued lack of an adequate feedback mechanism on the status of chemical and biological training, equipment, and readiness. We found that the effectiveness of SORTS for evaluating unit chemical and biological readiness is limited. While the current report requires unit commanders to report shortages of critical chemical or biological defense equipment, it leaves the determination of which equipment is critical up to the commander. The requirements also allow commanders to subjectively upgrade their overall SORTS status, regardless of their chemical and biological status. For example, one early deploying active Army division was rated in the highest SORTS category (C-1) despite rating itself in the lowest category (C-4) for chemical and biological equipment readiness. In addition, SORTS does not require reporting of some critical unit and individual equipment items if they are being stored at corps, rather than unit level, and SORTS reports are sometimes inaccurate due to poor equipment inventorying techniques. Furthermore, while individual units must fill out these reports, divisions are not required to do so. FORSCOM officials told us that most of the early deploying active Army divisions did not complete summaries of this report for at least 4 months in 1995 and that FORSCOM did not monitor these reports for about 6 months in 1995 due to a lack of personnel and other priorities. FORSCOM officials told us they normally performed only limited monitoring of unit chemical and biological readiness and relied mostly on unit commanders to report any problems. The U.S. Army Reserve Command does not have an office or individual assigned to monitor reserve units’ chemical and biological equipment and training status. With the exception of SORTS, the monitoring of chemical and biological readiness varied widely. 
At the CINC level, virtually no monitoring was being done. None of the CINCs we visited required any special reports on chemical or biological matters or had any special monitoring systems in place. At lower levels, monitoring was inconsistent and driven by the commander's emphasis on the area. At both division and corps levels, monthly briefings, reports, and other specific monitoring of chemical and biological readiness were sometimes required and sometimes not, depending on the commander's view of the importance of this area. Other methods the Army uses to monitor chemical and biological proficiency are (1) after-action and lessons-learned reports summarizing the results of operations and unit exercises at the Army's combat training centers and (2) operational readiness evaluations. The effectiveness of these tools is hindered by the varying amounts of chemical and biological training included in unit rotations at the combat training centers and the frequent lack of realism with which chemical and biological conditions are portrayed. Unit commanders influence the amount of chemical and biological training to be included in exercises at the centers and how and when it will be used in the exercises. Army officials said that in some cases these exercises include little chemical and biological training and that in others such training is conducted separately from more realistic combat training. Operational readiness evaluations (ORE), on the other hand, were more standardized in the areas of chemical and biological proficiency that were assessed. FORSCOM used OREs to obtain external evaluations of active, reserve, and National Guard unit readiness and to identify areas needing improvement. These evaluations focus on a unit's ability to perform its wartime missions prior to mobilization and deployment.
OREs consist of a records check of personnel, logistics, training, and mobilization data and an assessment of a unit’s ability to perform critical collective and individual mission tasks, including chemical and biological defense tasks. However, since the second quarter of fiscal year 1995, the Army has discontinued OREs at all active units and certain Army National Guard units. Marine Corps monitoring of chemical and biological matters was more extensive than the Army’s. The Marine Corps conducts standardized Operational Readiness and Commanding General Inspections, Combat Readiness Evaluation Programs, and Marine Corps Combat Readiness Evaluations that assess chemical and biological proficiency. The Corps also requires monthly reports to division commanders that assess home station training in several specified chemical and biological areas. However, the effectiveness of some of its evaluation tools is also questionable for some of the same reasons as those we found for the Army. As discussed earlier, Marine Corps trend data and lessons-learned information from its main combat training center at 29 Palms, California, showed serious weaknesses in units’ chemical and biological proficiency. Despite these deficiencies, in 1994 the Marine Corps decided, as a result of downsizing, to discontinue comprehensive exercises and evaluations of unit chemical and biological defense proficiency at the 29 Palms combat training center and concentrate instead on fire support and maneuver training. Marine chemical and biological training is therefore now largely relegated to the home station training exercises and evaluations mentioned above. Like the Army, the Marine Corps now relies on unit commanders to determine the amount of chemical and biological training needed at their home stations based on their assessments of their units’ capabilities and the evaluations described above. 
The commander's primary means of determining unit chemical and biological readiness is the Operational Readiness Inspection. Our analyses of these inspections conducted in 1994 and 1995 for the 2d Marine Expeditionary Force showed that units were trained, with only a few minor deficiencies. The other evaluations for the same time period included little discussion of chemical and biological proficiency. Marine Corps officials stated that unless problems are found, these programs would not include discussions of these matters. In the few instances where the evaluations discussed chemical and biological matters, they for the most part concluded that the units were trained. However, Marine Corps officials told us that these home station evaluations do not expose units to the same training rigor and battlefield conditions as exercises conducted at 29 Palms and are therefore questionable indicators of actual unit chemical and biological defense proficiency. Thus, the extent to which the Marine Corps has corrected the chemical and biological problems it encountered during and since Operation Desert Storm is uncertain. Although DOD has improved chemical and biological defense capability since the Gulf War, many problems of the type experienced during that war continue to exist. This is in large part due to the inconsistent but generally lower priority that DOD, and especially the Joint Chiefs of Staff and the warfighting CINCs, assign chemical and biological defense relative to other priorities. These problems are likely to continue, given current reductions in military funding and the limited emphasis placed on chemical and biological defense, unless the Secretary of Defense and the CJCS specifically assign a higher priority to this area. Until these problems are resolved, U.S. forces are likely to encounter operational difficulties and could incur needless casualties if attacked with chemical or biological weapons.
We could not determine whether increased emphasis on chemical and biological warfare defense is warranted at the expense of other priorities. This is a matter of DOD's military judgment and congressional funding priorities. In view of the increasing chemical and biological warfare threat and the continuing weaknesses in U.S. chemical and biological defense capabilities noted in this report, we recommend that the Secretary of Defense reevaluate the priority and emphasis given to this area throughout DOD. We also recommend that the Secretary, in his next annual report to Congress on NBC Warfare Defense, address (1) proposed solutions to the deficiencies identified in this report and (2) the impact that shifting additional resources to this area might have on other military priorities. If the Secretary's reevaluation of the priority and emphasis given chemical and biological defense determines that more emphasis is needed, and if efforts by the Joint Service Materiel and Joint Service Integration Groups prove less effective than desired, the Secretary should consider elevating the single office for program oversight to the assistant secretary level in DOD rather than leaving it in its present position as part of the Office of the Assistant Secretary for Atomic Energy. The Secretary should also consider adopting a strengthened single-manager concept for the execution of the chemical and biological program. This would give a single manager more authority, responsibility, and accountability for directing program management and acquisition for all the services.
We further recommend that the Secretary of Defense take the following specific actions designed to improve the effectiveness of existing activities:

Direct FORSCOM to reevaluate current chemical defense equipment stock requirements for early deploying active and reserve units to determine the minimal amounts required to be on hand to meet deployment requirements and to determine any additional storage facility requirements. If chemical defense equipment stock requirements are retained, we recommend that FORSCOM take the actions necessary to see that early deploying units can and do maintain these stocks.

Review some services' practice of funding the purchase of this equipment through Operation and Maintenance, rather than Procurement, funds. This review is necessary because Operation and Maintenance funds intended for chemical and biological defense equipment and training are too easily and frequently diverted to other purposes, and the uses of these funds are not well recorded. A consistent DOD system for funding these activities and recording the amount of funds spent on chemical and biological defense would greatly improve oversight of the resources and emphasis directed to this area. We recommend that DOD also consider at least temporarily earmarking Operation and Maintenance funds to relieve existing shortages of this equipment if current funding practices for purchasing this equipment are retained.

Consider modifying SORTS to require active Army divisions to complete and submit SORTS division summaries for chemical and biological reporting categories, and implementing changes that would require overall unit readiness assessments to be more directly affected by their chemical and biological readiness status. More emphasis should be placed on accurately inventorying and reporting unit stocks of critical chemical and biological defense equipment through SORTS and other monitoring and reporting systems.
SORTS reporting requirements should also be modified to more accurately reflect shortcomings in units' ability to meet existing chemical and biological training standards.

Determine and direct the implementation of an effective and appropriate immunization program for biological warfare defense that is consistent with existing DOD immunization policy.

Direct that DOD medical courses of instruction regarding chemical and biological warfare treatment techniques, such as the Management of Chemical and Biological Casualties Course, be directed toward those personnel occupying positions in medical units most likely to have need of this training and that medical units assigned such personnel keep adequate records to determine whether the appropriate number and types of their personnel have attended such courses.

Direct the Secretary of the Army to ensure that tactical unit training addresses casualty decontamination and that the current confusion regarding responsibility for performing casualty decontamination is corrected.

Direct the Secretary of the Army and the Commandant of the Marine Corps to ensure that all combat training centers routinely emphasize and include chemical and biological training, and that this training is conducted in a realistic manner. Further, we recommend that the Secretary and the Commandant direct that units attending these centers be more effectively evaluated on their ability to meet existing chemical and biological training standards.

Direct the CINCs to routinely include joint chemical and biological training tasks in exercises conducted under the CJCS exercise program and evaluate the ability of joint forces to perform chemical and biological tasks. Further, we recommend that the Secretary direct the CINCs to report annually on the results of this training.

DOD generally concurred with the report findings and acknowledged that a relatively low emphasis has been placed on chemical and biological defense in the past.
DOD also concurred with 9 of the 10 report recommendations. In commenting on this report, DOD stated it has recently increased the emphasis and funding given to chemical and biological defense and has begun a number of initiatives that are expected to address many of the problems we identified. DOD's full comments and our evaluation are shown in appendix VI. A discussion of our scope and methodology is in appendix V. We conducted our review from October 1994 to December 1995 in accordance with generally accepted government auditing standards. We are sending copies of this report to the Chairmen and Ranking Minority Members of the Senate Committee on Armed Services, the House Committee on National Security, and the Senate and House Committees on Appropriations; the Secretaries of Defense and the Army; the Commandant of the Marine Corps; and the Chairman, Joint Chiefs of Staff. Copies will also be made available to others upon request. Please contact me at (202) 512-5140 if you or your staff have any questions concerning this report. Major contributors to this report are listed in appendix VII.

[Table omitted: percentages of 2d Army and 5th Army units (Active, National Guard, and U.S. Army Reserve) inadequately trained in preparing for a chemical attack, responding to a chemical attack, and integrating chemical and biological tasks into training. Table notes: PACOM did not provide information for fiscal year 1995. EUCOM did not provide information on specific chemical and biological tasks done in its joint exercises.]

The Chairman and Ranking Minority Member, Subcommittee on Military Readiness, House Committee on National Security, requested that we provide a current assessment of the ability of early deploying U.S.
ground forces to survive and operate in a chemically or biologically contaminated environment. Our objectives were to determine (1) DOD’s actions to address chemical and biological warfare defense problems identified during the Gulf War and (2) the current preparedness of these forces to operate in a contaminated environment. To determine the Department of Defense’s (DOD) actions to correct the problems identified in the Gulf War, we reviewed DOD’s Nuclear/Biological/Chemical (NBC) Warfare Defense annual reports submitted in 1994 and 1995 to Congress, lessons-learned documents, and other studies prepared by the Joint Chiefs of Staff, the Army, and the Marine Corps. We performed a similar analysis of problems identified in routine training exercises conducted under the Chairman, Joint Chiefs of Staff Exercise Program and at the Army’s combat training centers—the National Training Center, located at Fort Irwin, California; the Joint Readiness Training Center, located at Fort Polk, Louisiana; the Combat Maneuver Training Center, located at Hohenfels, Germany; and the Marine Corps Air Ground Combat Center at 29 Palms, California. We also analyzed operational readiness inspections and evaluations and other Army and Marine Corps documents that assessed the results of home station training exercises. To determine the preparedness of U.S. ground forces to operate in a chemical or biological environment, we focused on three areas: the availability of critical chemical and biological defense equipment, such as protective suits, masks, and alarms; the adequacy of chemical and biological training, including the extent to which tasks are conducted in joint and service training; and the availability of medical countermeasures to prevent and treat chemical and biological casualties, including supplies of critical vaccines and medical procedures to decontaminate and evacuate casualties. 
Regarding equipment availability at the units visited, we compared equipment on hand with that required by Army and Marine Corps regulations. To determine training adequacy, we analyzed Army, Marine Corps, and Joint Staff training guidance specifying chemical and biological tasks to be done, as well as after-action and lessons-learned reports, to identify any weaknesses. We also analyzed the training exercises conducted under the Chairman, Joint Chiefs of Staff Exercise Program to determine the extent to which joint exercises include chemical and biological defense training. To assess the adequacy of medical countermeasures, we interviewed DOD officials and analyzed lessons-learned reports from the Gulf War to determine what problems had occurred. We then assessed medical unit equipment availability and training, the training provided to military physicians for the treatment and management of chemical and biological casualties, and the adequacy of biological agent vaccine stocks and policies and procedures for their use. We also assessed the efforts by DOD, the Joint Staff, and CINCs to monitor chemical and biological readiness. We interviewed key officials, examined guidance and reporting requirements, and analyzed reports to determine the extent to which chemical and biological matters are included. We met with key DOD, Joint Staff, and service officials to discuss chemical and biological problems and the efforts to correct them, as well as readiness issues, including the emphasis placed on chemical and biological matters. At the DOD level, we contacted officials in the offices of the Assistant Secretary of Defense (Atomic Energy) (Chemical and Biological Matters); the Armed Forces Medical Intelligence Center, Fort Detrick, Maryland; and the Joint Program Office for Biological Defense.
At the Joint Staff level, we met with officials in the offices of the Director for Strategic Plans and Policy (J-5), Weapons Technology Control Division, and the Director for Operational Plans and Interoperability (J-7), Joint Exercise and Training Division. At the commander in chief (CINC) level, we contacted officials at the U.S. Atlantic, Central, European, and Pacific Commands. At the Army, we held discussions and reviewed documents at U.S. Army Forces Command, Fort McPherson, Georgia; the U.S. Army Reserve Command, Atlanta, Georgia; the Office of the Army Surgeon General, Falls Church, Virginia; the Army Chemical School, Fort McClellan, Alabama; the Army Medical Command and the Army Medical Department Center and School, Fort Sam Houston, Texas; the Chemical and Biological Defense Command, Aberdeen, Maryland; the U.S. Army Medical Research Institute of Infectious Diseases, Fort Detrick, Maryland; Walter Reed Army Medical Center, Washington, D.C.; and the U.S. Army Medical Research and Materiel Command, Fort Detrick, Maryland. We interviewed officials and reviewed documents at the Army’s III Corps Headquarters, Fort Hood, Texas; the XVIII Airborne Corps Headquarters, Fort Bragg, North Carolina; and the Marine Corps’ Combat Development and Combat Systems Development Commands, Quantico, Virginia. We visited four of the 5-1/3 active Army divisions composing the crisis response force as well as the 2d Armored Division, Fort Hood, Texas, and the 25th Light Infantry Division, Schofield Barracks, Hawaii. We visited the 2d U.S. Army (now 1st U.S. Army) headquarters, Fort Gillem, Georgia; the 5th U.S. Army headquarters, Fort Sam Houston, Texas; the 90th U.S. Army Reserve Command, San Antonio, Texas; the 98th U.S. Army Reserve Support Command, Little Rock, Arkansas; and the 143d Transportation Command, Orlando, Florida. 
We also visited a chemical company, a chemical detachment, a chemical brigade headquarters, a signal company, an engineer group, and a transportation detachment from the U.S. Army Reserves that, at the time of our review, were designated for deployment in less than 30 days from mobilization. We visited the following Marine Corps Units: II Marine Expeditionary Force, Camp Lejeune, North Carolina; II Marine Division, Camp Lejeune, North Carolina; II Marine Force Service Support Group, Camp Lejeune, North Carolina; and II Marine Aircraft Wing, Cherry Point, North Carolina. We conducted our work from October 1994 to December 1995 in accordance with generally accepted government auditing standards. The following are GAO’s comments on DOD’s letter dated March 20, 1996. 1. Our report acknowledges that a single office within DOD currently has responsibility for chemical and biological program oversight and execution. However, as we noted in our report, many aspects of joint military service planning of research, development, acquisition, and logistics support for chemical and biological activities are dependent on the effectiveness of the committee-like Joint Service Integration and Joint Service Materiel Groups. The effectiveness of these groups in resolving interservice chemical and biological issues remains to be seen, and the Joint Service Integration Group was continuing to have start-up staffing problems at the time of our review. Some DOD officials have expressed concern regarding the ability of these groups to obtain sufficient support and emphasis from the individual services to be effective. We believe more of a single manager approach to this planning should be considered if these groups are unable to effectively address current problems and develop timely solutions. We have slightly modified our recommendation to clarify our position on this point. 2. 
We agree that the Status of Resources and Training System (SORTS) is not intended to function as a detailed management tool. However, the current system leaves significant opportunity for broadly inaccurate reporting of unit chemical and biological preparedness status. For example, although 3 of the 5-1/3 Army divisions composing the crisis response force had 50 percent or less of the protective clothing required by regulations for chemical and biological defense, these shortages were discernible through SORTS for only one of these divisions. This type of problem was evident during the Persian Gulf conflict, as after-action reports and other analyses revealed that units reporting 90 to 95 percent of their equipment on hand through SORTS actually had far less serviceable equipment for a variety of reasons, thereby causing logisticians and transporters to make extraordinary post-mobilization and post-deployment efforts to fill requisitions for unit shortages. Furthermore, during our review, at least one early deploying division was able to report C-1 for individual protective equipment status (90 percent or more of equipment on hand) although less than 50 percent of the required protective clothing and other items were actually available (C-4 status). This occurred because Army regulations allow units to forgo reporting on equipment stored in facilities not specifically controlled by the unit. In this case, the division's chemical defense equipment was stored in a warehouse controlled by corps headquarters, and reporting these shortages through SORTS was therefore not required, even though the corps headquarters and the division were physically located on the same installation. The level of stockage was inadequate not only for the division but also for other early deploying units within the corps.
Also, leaving SORTS reporting mandatory for individual units but optional for divisions not only complicates the process but also makes review by higher commands such as U.S. Forces Command (FORSCOM) much more difficult. Finally, DOD's annual reports to Congress acknowledged continuing problems regarding the accountability and management of NBC defense item inventories. While we concur that SORTS is not an appropriate tool for detailed management, we believe the assessment it provides, particularly regarding unit inventories of critical chemical and biological defense equipment, needs to be reasonably accurate in order to provide a meaningful readiness assessment. As long as units are required to be capable of defending themselves and operating in a contaminated environment, we believe that a readiness evaluation system that permits an overall unit readiness rating of C-1 while chemical and biological equipment readiness is rated C-4 could easily provide misleading information about that unit's actual combat readiness. Also, requiring at least a moderate level of chemical and biological readiness in order to achieve a high overall readiness rating would do much to emphasize chemical and biological defense, and thus address some of the disparity that often occurs between the level of emphasis placed on chemical and biological defense by DOD policy and guidance and that actually being applied at unit level (see comment 4). We are therefore retaining this recommendation. 3. There is no question that Army doctrine and manuals are clear about who has responsibility for patient decontamination. However, both medical and tactical units we visited that were involved in implementing these tasks were often unaware of the doctrine and, consequently, usually had neither planned nor trained to perform these functions. 4.
We concur that military service training documents and standards require commanders to ensure that units and individuals are trained to defend and survive in a contaminated environment. However, there appears to be a difference between the policy and guidance established and the extent to which it has been effectively applied. For example, while the last two FORSCOM commanders have issued NBC defense training guidance requiring commanders to ensure that units are fully trained to sustain operations and defend against battlefield NBC hazards, the various DOD readiness and evaluation mechanisms we reviewed continue to indicate that many units are in fact not trained to DOD standards for chemical and biological defense. Our report also shows that Army unit commanders have not met FORSCOM requirements for unit on-hand stocks of critical NBC equipment, and that FORSCOM has not provided either the funds or the supervisory oversight needed to ensure compliance. Benjamin Douglas, Joseph F. Lenart, Jr.
Pursuant to a legislative requirement, GAO reviewed U.S. chemical and biological warfare defense capabilities, focusing on: (1) the chemical and biological warfare defense problems identified during the Gulf War; and (2) the preparedness of early-deploying ground forces to survive and fight in a chemically or biologically contaminated environment. GAO found that: (1) the Department of Defense (DOD) has taken steps to improve the readiness of U.S. forces to operate in chemically or biologically contaminated environments, but equipment, training, and medical shortcomings persist and could cause needless casualties and a degradation of U.S. combat capability; (2) during the Gulf War, many early-deploying units did not have all of the chemical and biological detection, decontamination, and protective equipment they needed; (3) the services continue to place lower emphasis on chemical and biological defense activities than on other high-priority activities; (4) research and development efforts to improve the detection and decontamination of biological and chemical agents have progressed slower than planned because of other priorities and personnel shortages; (5) the Army and Marine Corps have acted to improve their biological and chemical training, but many problems encountered during the Gulf War persist; (6) there was little biological or chemical defense training included in joint training exercises because regional commanders in chief (CINC) believe that this training is the responsibility of the individual services and have assigned other types of training a higher priority; (7) medical units often lack adequate biological agent vaccine stocks and immunization plans, appropriate defense equipment, and sufficient instruction on how equipment is to be used; and (8) the lower emphasis the services give to chemical and biological defense activities is reflected in the funding, staffing, monitoring, and mission priority levels dedicated to these activities.
An effective military medical surveillance system needs to collect reliable information on (1) the health care provided to service members before, during, and after deployment, (2) where and when service members were deployed, (3) environmental and occupational health threats or exposures during deployment (in theater) and appropriate protective measures and countermeasures, and (4) baseline health status and subsequent health changes. This information is needed to monitor the overall health condition of deployed troops, inform them of potential health risks, and maintain and improve the health of service members and veterans. In times of conflict, a military medical surveillance system is particularly critical to ensure the deployment of a fit and healthy force and to prevent disease and injuries from degrading force capabilities. DOD needs reliable medical surveillance data to determine who is fit for deployment; to prepare service members for deployment, including providing vaccinations to protect against possible exposure to environmental and biological threats; and to treat physical and psychological conditions that result from deployment. DOD also uses this information to develop educational measures for service members and medical personnel to ensure that service members receive appropriate care. Reliable medical surveillance information is also critical for VA to carry out its missions. In addition to VA’s better known missions—to provide health care and benefits to veterans and medical research and education—VA has a fourth mission: to provide medical backup to DOD in times of war and civilian health care backup in the event of disasters producing mass casualties.
VA needs reliable medical surveillance data from DOD to treat casualties of military conflicts, provide health care to veterans who have left active duty, assist in conducting research should troops be exposed to environmental or occupational hazards, and identify service-connected disabilities to adjudicate veterans’ disability claims. Investigations into the unexplained illnesses of service members and veterans who had been deployed to the Persian Gulf uncovered the need for DOD to implement an effective medical surveillance system to obtain comprehensive medical data on deployed service members, including Reservists and National Guardsmen. Epidemiological and health outcome studies to determine the causes of these illnesses have been hampered by a lack of (1) complete baseline health data on Gulf War veterans; (2) assessments of their potential exposure to environmental health hazards; and (3) specific health data on care provided before, during, and after deployment. The Presidential Advisory Committee on Gulf War Veterans’ Illnesses’ and IOM’s 1996 investigations into the causes of illnesses experienced by Gulf War veterans confirmed the need for more effective medical surveillance capabilities. The National Science and Technology Council, as tasked by the Presidential Advisory Committee, also assessed the medical surveillance system for deployed service members. In 1998, the council reported that inaccurate recordkeeping made it extremely difficult to get a clear picture of what risk factors might be responsible for Gulf War illnesses. It also reported that without reliable deployment and health assessment information, it was difficult to ensure that veterans’ service-related benefits claims were adjudicated appropriately. The council concluded that the Gulf War exposed many deficiencies in the ability to collect, maintain, and transfer accurate data describing the movement of troops, potential exposures to health risks, and medical incidents in theater.
The council reported that the government’s recordkeeping capabilities were not designed to track troop and asset movements to the degree needed to determine who might have been exposed to any given environmental or wartime health hazard. The council also reported major deficiencies in health risk communications, including not adequately informing service members of the risks associated with countermeasures such as vaccines. Without this information, service members may not recognize potential side effects of these countermeasures or take prompt precautionary actions, including seeking medical care. In response to these reports, DOD strengthened its medical surveillance system under Operation Joint Endeavor when service members were deployed to Bosnia-Herzegovina, Croatia, and Hungary. In addition to implementing departmentwide medical surveillance policies, DOD developed specific medical surveillance programs to improve the monitoring and tracking of environmental and biomedical threats in theater. While these efforts represented important steps, a number of deficiencies remained. On the positive side, the Assistant Secretary of Defense (Health Affairs) issued a health surveillance policy for troops deploying to Bosnia. This guidance stressed the need to (1) identify health threats in theater, (2) routinely and uniformly collect and analyze information relevant to troop health, and (3) disseminate this information in a timely manner. DOD required medical units to develop weekly reports on the incidence rates of major categories of diseases and injuries during all deployments. Data from these disease and non-battle-injury reports showed theaterwide illness and injury trends, so that preventive measures could be identified and recommendations forwarded to the theater medical command regarding abnormal trends or actions that should be taken. DOD also established the U.S.
Army Center for Health Promotion and Preventive Medicine—a major enhancement to DOD’s ability to perform environmental monitoring and tracking. For example, the center operates and maintains a repository of service members’ serum samples—the largest serum repository in the world—for epidemiological studies to examine potential health issues for service members and veterans. The center also operates and maintains a system for integrating, analyzing, and reporting data from multiple sources relevant to the health and readiness of military personnel. This capability was augmented with the establishment of the 520th Theater Army Medical Laboratory—a deployable public health laboratory for providing environmental sampling and analysis in theater. The sampling results can be used to identify specific preventive measures and safeguards to be taken to protect troops from harmful exposures and to develop procedures to treat anyone exposed to health hazards. During Operation Joint Endeavor, this laboratory was used in Tuzla, Bosnia—where most of the U.S. forces were located—to conduct air, water, soil, and other environmental monitoring. Despite the Department’s progress, we and others have reported on DOD’s implementation difficulties during Operation Joint Endeavor and the shortcomings in DOD’s ability to maintain reliable health information on service members. Knowledge of who is deployed and their whereabouts is critical for identifying individuals who may have been exposed to health hazards while deployed. However, in May 1997, we reported that inaccurate information on who was deployed and where and when they were deployed—a problem during the Gulf War—continued to be a concern during Operation Joint Endeavor. For example, we found that the Defense Manpower Data Center (DMDC) database—where military services are required to report deployment information—did not include records for at least 200 Navy service members who were deployed.
Conversely, the DMDC database included Air Force personnel who were never actually deployed. In addition, we reported that DOD had not developed a system for tracking the movement of service members within theater. IOM also reported that during Operation Joint Endeavor, locations of deployed service members were still not systematically documented or archived for future use. We also reported in May 1997 that for the more than 600 Army personnel whose medical records we reviewed, DOD’s centralized database for postdeployment medical assessments did not capture 12 percent of those assessments conducted in theater and 52 percent of those conducted after returning home. These data are needed by epidemiologists and other researchers to assess at an aggregate level the changes that have occurred between service members’ pre- and postdeployment health assessments. Further, many service members’ medical records did not include complete information on the in-theater postdeployment medical assessments that had been conducted. The Army’s European Surgeon General attributed missing in-theater health information to DOD’s policy of having service members hand-carry paper assessment forms from the theater to their home units, where their permanent medical records were maintained. The assessments were frequently lost en route. We have also reported that not all medical encounters in theater were being recorded in individual records. Our 1997 report indicated that this problem was particularly common for immunizations given in theater. Detailed data on service members’ vaccine history are vital for scheduling the regimen of vaccinations and boosters and for tracking individuals who received vaccinations from a specific vaccine lot in the event that health concerns about the lot emerge. We found that almost one-fourth of the service members’ medical records that we reviewed did not document the fact that they had received a vaccine for tick-borne encephalitis. 
In addition, in its 2000 report, IOM cited limited progress in medical recordkeeping for deployed active duty and reserve forces and emphasized the need for records of immunizations to be included in individual medical records. Responding to our and others’ recommendations to improve information on service members’ deployments, in-theater medical encounters, and immunizations, DOD has continued to revise and expand its policies related to medical surveillance, and the system continues to evolve. In addition, in 2000, DOD released its Force Health Protection plan, which presents the Department’s vision for protecting deployed forces and includes the goal of joint medical logistics support for all services by 2010. The vision articulated in this capstone document emphasizes force fitness and health preparedness, casualty prevention, and casualty care and management. A key component of the plan is improved monitoring and surveillance of health threats in military operations and more sophisticated data collection and recordkeeping before, during, and after deployments. However, IOM criticized DOD’s progress in implementing its medical surveillance program as well as its failure to implement several recommendations that IOM had made. In addition, IOM raised concerns about DOD’s ability to achieve the vision outlined in the Force Health Protection plan. We have also reported that some of DOD’s programs designed to improve medical surveillance have not been fully implemented. IOM’s 2000 report presented the results of its assessment of DOD’s progress in implementing recommendations for improving medical surveillance made by IOM and several others. IOM stated that, although DOD generally concurred with the findings of these groups, DOD had made few concrete changes at the field level. In addition, environmental and medical hazards were not yet well integrated in the information provided to commanders. 
The IOM report notes that a major reason for this lack of progress is that no single authority within DOD has been assigned responsibility for the implementation of the recommendations and plans. IOM said that because of the complexity of the tasks and the overlapping areas of responsibility involved, the single authority must rest with the Secretary of Defense. In its report, IOM describes six strategies that in its view demand further emphasis and require greater efforts by DOD:

- Use a systematic process to prospectively evaluate non-battle-related risks associated with the activities and settings of deployments.
- Collect and manage environmental data and personnel location, biological samples, and activity data to facilitate analysis of deployment exposures and to support clinical care and public health activities.
- Develop the risk assessment, risk management, and risk communication skills of military leaders at all levels.
- Accelerate implementation of a health surveillance system that completely spans an individual’s time in service.
- Implement strategies to address medically unexplained symptoms in deployed populations.
- Implement a joint computerized patient record and other automated recordkeeping that meets the information needs of those involved with individual care and military public health.

DOD guidance established requirements for recording and tracking vaccinations and automating medical records for archiving and recalling medical encounters. While our work indicates that DOD has made some progress in improving its immunization information, the Department faces numerous challenges in implementing an automated medical record. DOD also recently established guidelines and additional policy initiatives for improving military medical surveillance.
In October 1999, we reported that DOD’s Vaccine Adverse Event Reporting System—which relies on medical staff or service members to provide needed vaccine data—may not have included some information on adverse reactions because these personnel had not received guidance needed to submit reports to the system. According to DOD officials, medical staff may also report any other reaction they think might be caused by the vaccine, but because this is not stated explicitly in DOD’s guidance on vaccinations, some medical staff may be unsure about which reactions to report. Also, in April 2000, we testified that vaccination data were not consistently recorded in paper records and in a central database, as DOD requires. For example, when comparing records from the database with paper records at four military installations, we found that information on the number of vaccinations given to service members, the dates of the vaccinations, and the vaccine lot numbers was inconsistent at all four installations. At one installation, the database and records did not agree 78 percent to 92 percent of the time. DOD has begun to make progress in implementing our recommendations, including ensuring timely and accurate data in its immunization tracking system. The Gulf War revealed the need to have information technology play a bigger role in medical surveillance to ensure that information is readily accessible to DOD and VA. In August 1997, DOD established requirements that called for the use of innovative technology, such as an automated medical record device that can document inpatient and outpatient encounters in all settings and that can archive the information for local recall and format it for an injury, illness, and exposure surveillance database.
Also, in 1997, the President, responding to deficiencies in DOD’s and VA’s data capabilities for handling service members’ health information, called for the two agencies to start developing a comprehensive, lifelong medical record for each service member. As we reported in April 2001, DOD’s and VA’s numerous databases and electronic systems for capturing mission-critical data, including health information, are not linked and information cannot be readily shared. DOD has several initiatives under way to link many of its information systems—some with VA. For example, in an effort to create a comprehensive, lifelong medical record for service members and veterans and to allow health care professionals to share clinical information, DOD and VA, along with the Indian Health Service (IHS), initiated the Government Computer-Based Patient Record (GCPR) project in 1998. GCPR is seen as yielding a number of potential benefits, including improved research and quality of care, and clinical and administrative efficiencies. However, our April 2001 report described several factors— including planning weaknesses, competing priorities, and inadequate accountability—that made it unlikely that DOD and VA would accomplish GCPR or realize its benefits in the near future. To strengthen the management and oversight of GCPR, we made several recommendations, including designating a lead entity with a clear line of authority for the project and creating comprehensive and coordinated plans for sharing meaningful, accurate, and secure patient health data. For the near term, DOD and VA have decided to reconsider their approach to GCPR and focus on allowing VA to access selected health data on service members captured by DOD. According to DOD and VA officials, full operation is expected to begin the third quarter of this fiscal year, once testing of the near-term system has been completed. 
DOD health information is an especially critical information source given VA’s fourth mission to provide medical backup to the military health system in times of national emergency and war. Under the near-term effort, VA will be able to access laboratory and radiology results, outpatient pharmacy data, and patient demographic information. This approach, however, will not provide VA access to information on the health status of personnel when they enter military service; on medical care provided to Reservists while not on active duty; or on the care military personnel received from providers outside DOD, including TRICARE providers. In addition, because VA will only be able to view this information, physicians will not be able to easily organize or otherwise manipulate the data for quick review or research. DOD has several other initiatives for improving its information technology capabilities, which are in various stages of development. For example, DOD is developing the Theater Medical Information Program (TMIP), which is intended to capture medical information on deployed personnel and link it with medical information captured in the Department’s new medical information system. As of October 2001, officials told us that they planned to begin field testing for TMIP in spring 2002, with deployment expected in 2003. A component system of TMIP—Transportation Command Regulating and Command and Control Evacuation System—is also under development and aims to allow casualty tracking and provide in-transit visibility of casualties during wartime and peacetime. Also under development is the Global Expeditionary Medical System (GEMS), which DOD characterizes as a stepping stone to an integrated biohazard surveillance and detection system. In addition to its ongoing information technology initiatives, DOD recently issued two major policies for advancing its military medical surveillance system.
Specifically, in December 2001, DOD issued clinical practice guidelines, developed collaboratively with VA, to provide a structure for primary care providers to evaluate and manage patients with deployment-related health concerns. According to DOD, the guidelines were issued in response to congressional concerns and IOM’s recommendations. The guidelines are expected to improve the continuity of care and health-risk communication for service members and their families for the wide variety of medical concerns that are related to military deployments. Because the guidelines became effective January 31, 2002, it is too early for us to comment on their implementation. Finally, DOD issued updated procedures on February 1, 2002, for deployment health surveillance and readiness. These procedures supersede those laid out in DOD’s December 1998 memorandum. The 2002 memorandum adds important procedures for occupational and environmental health surveillance and updates pre- and postdeployment health assessment requirements. These new procedures take effect on March 1, 2002. According to officials from DOD’s Health Affairs office, military medical surveillance is a top priority, as evidenced by the Department’s having placed responsibility for implementing medical surveillance policies with one authority—the Deputy Assistant Secretary of Defense for Force Health Protection and Readiness. However, these officials also characterized force health protection as a concept made up of multiple programs across the services. For example, we learned that each service is responsible for implementing DOD’s policy initiatives for achieving force health protection goals. This raises concerns about how the services will uniformly collect and share core information on deployments and how they will integrate data on the health status of service members.
These officials also confirmed that DOD’s military medical surveillance policies will depend on the priority and resources dedicated to their implementation. Clearly, the need for comprehensive health information on service members and veterans is compelling, and much more needs to be done. However, it is also a very difficult task because of uncertainties about what conditions may exist in a deployed setting, such as potential military conflicts, environmental hazards, and the frequency of troop movements. Moreover, the outlook for successful surveillance is complicated by scientific uncertainty regarding the health effects of exposures and changes in technology that affect the feasibility of monitoring and tracking troop movements. While progress is being made, DOD will need to continue to make a concerted effort to resolve the remaining deficiencies in its surveillance system and be vigilant in its oversight. VA’s ability to perform its missions to care for veterans and compensate them for their service-connected conditions will depend in part on the adequacy of DOD’s medical surveillance system. For further information, please contact Cynthia A. Bascetta at (202) 512-7101. Individuals making key contributions to this testimony included Ann Calvaresi Barr, Diana Shevlin, Karen Sloan, and Keith Steck.

The Department of Defense (DOD) and the Department of Veterans Affairs (VA) recently established a medical surveillance system to respond to the health care needs of both military personnel and veterans. A medical surveillance system involves the ongoing collection and analysis of uniform information on deployments, environmental health threats, disease monitoring, medical assessments, and medical encounters and its timely dissemination to military commanders, medical personnel, and others. GAO and others have reported extensively on weaknesses in DOD's medical surveillance capability and performance during the Gulf War and Operation Joint Endeavor.
Investigations into the unexplained illnesses of Gulf War veterans revealed DOD's inability to collect, maintain, and transfer accurate data on the movement of troops, potential exposures to health risks, and medical incidents during deployment. DOD improved its medical surveillance system under Operation Joint Endeavor, which provided useful information to military commanders and medical personnel. However, several problems persist. DOD has several efforts under way to improve the reliability of deployment information and enhance its information technology capabilities. Although its recent policies and reorganization reflect a commitment to establish a comprehensive medical surveillance system, much needs to be done to implement the system. To the extent DOD's medical surveillance capability is realized, VA will be better able to serve veterans and provide backup to DOD in times of war.
Federal employees have had protections against whistleblower reprisal—also known in some cases as adverse consequences or retaliation—for several decades. The Civil Service Reform Act of 1978 and the Whistleblower Protection Act of 1989 both provided federal employees with certain rights against reprisal for disclosing certain wrongdoing and created avenues of investigation of complaints. More recently, the Whistleblower Protection Enhancement Act of 2012 expanded and clarified protections for federal employee whistleblowers, including adding clarity that federal employees are protected even if the disclosures are identified as part of their existing job duties, such as for auditors and safety inspectors.

Disclosure: An allegation to certain bodies and individuals made by an employee who believes he or she has witnessed certain wrongdoing, such as gross mismanagement or gross waste.

Reprisal Complaint: Following a disclosure, a complaint that an employee has experienced reprisal as a result of the disclosure, such as demotion or discharge.

In 1986, whistleblower reprisal protections were extended to employees of defense contractors. The National Defense Authorization Act for Fiscal Year 1987 provided protections for employees of defense contractors, prohibiting contractors from discharging, demoting, or otherwise discriminating against an employee for disclosing certain wrongdoing. Similar protections were expanded to other executive agencies in 1994, when legislation provided certain rights for contractor employees at civilian executive agencies. For example, one right is to have the OIG of the executive agency conduct an investigation into reprisal complaints when the contractor employee believes reprisal has occurred as a result of disclosing certain information to authorized persons or bodies, such as a member of Congress.
The pilot program went into effect in 2013, after the passage of the NDAA for Fiscal Year 2013. It further expanded protections to include employees of subcontractors and grantees, directs the agency head to determine whether a contractor employee had been reprised against, and, among other enhancements, limited the OIG investigation of complaints to 180 days, whereas previously there was no time limitation on the investigation. Further, under the pilot program, contractor, subcontractor, and grantee employees are protected from reprisal if they disclose to certain persons or bodies information they reasonably believe is evidence of gross mismanagement of a federal contract or grant, a gross waste of federal funds, an abuse of authority relating to a federal contract or grant, a substantial and specific danger to public health or safety, or a violation of law, rule, or regulation related to a federal contract or grant. Moreover, in addition to protections under the previous statute for disclosing certain information to a Member of Congress or an authorized official of an executive agency or the Department of Justice, employees are now protected when disclosing information related to certain wrongdoing to a broader range of authorized persons or bodies, such as a management official at the contractor, or to a law enforcement agency. Under the pilot program, both the OIG at each executive agency and certain agency officials are responsible for executing its provisions. The pilot program not only enhances agency responsibility to help ensure contractor employees are aware of their rights, but clearly identifies which office within the agency has responsibility for handling reprisal complaints. Figure 1 depicts the disclosure process and the complaint process.

Changes in Disclosure Process.
Under the pilot program, the number of persons and bodies to whom a contractor employee may disclose protected information has expanded. Under the prior statute, a contractor employee was only covered if he or she disclosed certain wrongdoing to a Member of Congress, an authorized official of an executive agency, or the Department of Justice. Figure 1 above describes the disclosure process under the pilot program.

Agencies’ OIG Responsibilities.

Upon receiving a reprisal complaint, OIGs must evaluate whether the complaint is covered under the pilot program. OIGs might not investigate for a variety of reasons, such as in cases where the complaint is already under investigation by another authority such as another OIG, or otherwise does not allege a violation of the law, such as if the claim was made prior to July 1, 2013. If the OIG determines the case is not covered under the pilot program, it may then notify the complainant that no further action will be taken on the reprisal complaint. If the reprisal complaint is covered, the OIG must investigate the complaint and submit a report of its findings to the agency head, the complainant, the head of the contracting activity, and the contractor. OIGs may make a preliminary determination of whether reprisal occurred based on the investigation; however, the final determination of reprisal must be made by the agency head. As described in figure 1 above, the report provided by the OIG to the agency head must be sent within 180 days from receipt of the reprisal complaint. If the OIG determines it needs more time to investigate, it may seek an extension of this timeline with the complainant’s permission.
Federal Acquisition Regulation 52.203-17, Contractor Employee Whistleblower Rights and Requirement to Inform Employees of Whistleblower Rights:

(a) This contract and employees working on this contract will be subject to the whistleblower rights and remedies in the pilot program on Contractor employee whistleblower protections established at 41 U.S.C. 4712 by section 828 of the National Defense Authorization Act for Fiscal Year 2013 (Pub. L. 112-239) and FAR 3.908.

(b) The Contractor shall inform its employees in writing, in the predominant language of the workforce, of employee whistleblower rights and protections under 41 U.S.C. 4712, as described in section 3.908 of the Federal Acquisition Regulation.

(c) The Contractor shall insert the substance of this clause, including this paragraph (c), in all subcontracts over the simplified acquisition threshold.

Agencies’ Responsibilities.

Once the investigation findings are forwarded from the OIG, the agency head must determine whether there is a sufficient basis to conclude that a contractor employee was reprised against, and must either issue an order that the contractor take some form of remedial action or issue an order denying relief. During the 30-day period after the agency head receives the OIG report, the agency head may ask the OIG for additional investigative work. In addition, the complainant and the contractor must be afforded an opportunity to submit a written response to the OIG report during the same 30-day period. Under the pilot program, contracting officers are also responsible for inserting Federal Acquisition Regulation clause 52.203-17 (FAR clause) into applicable contracts and agency heads are responsible for ensuring that contractors communicate to their employees their rights under the pilot program.
This FAR clause lays out the responsibility of contractors to communicate to their employees their rights under the pilot program, which requires these protections to be communicated to contractor, subcontractor, and grantee employees in writing and in their predominant language. Applicable contracts that require the FAR clause include all contracts over the simplified acquisition threshold awarded on or after September 30, 2013, according to the FAR interim rule. The pilot program also requires agencies to make best efforts to include the clause in contracts awarded before July 1, 2013, that have undergone major contract modifications; the terms “best efforts” and “major modifications” are not defined in the statute. In 2015 and 2016, we reported on whistleblower protection issues, including issues related to the general public and federal employees, as illustrated below: In October 2015, we reported on whistleblower protections for any individual, including the general public, reporting tax fraud to the Internal Revenue Service Whistleblower Office. We found that whistleblowers may not have adequate protections against employer retaliation when filing disclosures. We made 10 recommendations to the Internal Revenue Service, including tracking dates, strengthening and documenting procedures for award payments and whistleblower protections, and improving external communications. The Internal Revenue Service agreed with our recommendations. In July 2016, we reported on the whistleblower process at Homeland Security for a specific regulation on Chemical Facility Anti-Terrorism Standards and found that the Department did not have documented procedures for investigating disclosures made by whistleblowers and that its website provided only limited guidance. 
We recommended that Homeland Security develop a documented process and procedures to address whistleblower retaliation reports, and provide additional guidance on the Homeland Security whistleblower website and telephone tip line. Homeland Security agreed with our recommendations. In September 2016, we testified before a House subcommittee on the status of DOD’s implementation of whistleblower protections and reported that of the 18 recommendations we had previously made, DOD had implemented 15, including that DOD ensure that investigations are conducted by someone outside of the complainant’s chain of command. DOD also had implemented our recommendations to improve and track investigation timeliness and strengthen oversight of the military services’ investigations, and was considering steps to implement the remaining three recommendations regarding standardized investigations and reporting to Congress. In November 2016, we reported on the status of implementing the Whistleblower Protection Enhancement Act, which strengthens protections for federal employees. We reported that the Merit Systems Protection Board has taken steps to collect and report whistleblower appeals data, but we found a number of weaknesses in the Merit Systems Protection Board’s data collection. We recommended that the Merit Systems Protection Board help ensure the accuracy of its reporting on whistleblower appeals received and closed by (1) updating its data entry user guide to include additional guidance and procedures and (2) adding a quality check in its data analysis and reporting process to better identify discrepancies. The Merit Systems Protection Board agreed with these recommendations. In 2016, we also reported on aspects of the Department of Energy’s (Energy) whistleblower program and its contractor-run facilities, including its implementation of the 2013 pilot program. 
In our July 2016 report, we reported that Energy had taken limited to no action to hold accountable contractors that had created a chilled work environment, or an environment that may not respond favorably to whistleblower disclosures. We recommended that Energy revise existing guidance to clarify what constitutes a chilled work environment and define appropriate steps the Department can take to hold contractors accountable. Energy agreed with this recommendation. As the pilot program was being implemented, the number of reprisal complaints received varied across the 14 executive departments, according to the OIGs’ responses to our survey. According to the OIGs, of the estimated 1,560 reprisal complaints received from July 1, 2013, to December 31, 2015, 127 were submitted by employees of contractors, subcontractors, and grantees covered under the pilot program, and the OIGs investigated about one-third of these 127 complaints. All remaining reprisal complaints were disposed of for various reasons, but none of the pilot program investigations completed thus far resulted in findings that substantiated reprisal. In addition, the 14 OIGs reported using multiple mechanisms to implement the pilot program, including incorporating a new contract clause to notify contractors of their responsibilities. The number of reprisal complaints received varied across the 14 executive departments we surveyed. OIGs at the 14 executive departments reported receiving an estimated 1,560 whistleblower reprisal complaints from July 1, 2013, through December 31, 2015. Based on survey responses, individual departments received from approximately 3 to 600 complaints. The 1,560 reprisal complaints included complaints from employees of contractors, subcontractors, and grantees as well as from groups not covered by the pilot program, such as federal employees and the general public. 
Of the estimated 1,560 reprisal complaints received from July 1, 2013, through December 31, 2015, OIGs from the 14 departments reported that 127 were submitted by employees of contractors, subcontractors, and grantees under the pilot program. However, the OIGs reported varying levels of insight into whether federal, contractor, subcontractor, or grantee employees had submitted the reprisal complaints. For example, 2 departments reported actual counts for all categories, while 4 departments provided a mix of actual counts and estimates. Two of the 14 departments could not separate the number of reprisal complaints by category and did not track how many they had received from federal, contractor, subcontractor, or grantee employees. At these 2 departments, OIG officials said that their case management systems, the electronic systems they use to track complaints, could not provide this level of detail on the source of a complaint. As a result, officials at these 2 departments said that they reviewed individual cases to determine whether the reprisal complaints filed were relevant to the pilot program. The 14 departments differed in the number of reprisal complaints received under the pilot program. For example, 2 departments reported receiving as few as 1 complaint apiece, while 1 department received 35 complaints. Three departments accounted for almost 60 percent of the pilot program complaints received from employees of contractors, subcontractors, and grantees between July 1, 2013, and December 31, 2015. Almost all of the 127 reprisal complaints were reported directly to the department’s OIG. Of the remaining reprisal complaints, 4 were referrals from within the respective department, 1 was a referral from Congress, and 1 was filed by an advocacy group on behalf of a complainant. 
Of the 127 reprisal complaints submitted by employees of contractors, subcontractors, and grantees under the pilot program, 44 were investigated by the OIGs, and none of the investigations completed thus far resulted in findings that substantiated reprisal. See figure 2 for more information about the disposition of reprisal complaints covered in the pilot program. According to OIG responses to our survey, investigations had been completed for 27 of the 44 investigated reprisal complaints. As required under the pilot program, OIGs reported forwarding their investigation findings to the agency head in 12 of the 27 completed investigations. The remaining 15 investigations were completed by 1 OIG that reported it did not forward its findings to the agency head. This is not consistent with a provision of the pilot program and is discussed later in this report. Of the 32 reprisal complaints submitted but not investigated, OIGs determined that each case was frivolous, had previously been decided in another federal or state judicial proceeding, should be referred to another investigative body, or warranted an “other disposition.” In cases that received other dispositions, OIGs reported that the cases could not proceed because the complainants did not respond to requests for information or declined to waive confidentiality, steps the OIGs stated were necessary to conduct an investigation. Of the 51 reprisal complaints submitted for which it was determined that the complaints were not covered by the pilot program, the OIGs at the respective departments—10 in total—did not take any further action to investigate. In these cases, the OIGs determined that the initial disclosure was related to conduct that did not, for example, allege gross mismanagement covered under the pilot, and therefore these reprisal complaints were not covered by the pilot program. 
All 14 OIGs reported using a combination of mechanisms to implement the pilot program, including existing efforts to manage whistleblower disclosures and new efforts to handle reprisal complaints filed under the pilot program. Some of these mechanisms were extensions of existing efforts, such as using existing whistleblower hotlines to accept reprisal complaints related to the pilot program. Several OIGs also noted that they developed education programs for contractors, subcontractors, and grantees, such as adding information about the pilot program to their whistleblower websites. In addition to these efforts, a few OIGs reported developing efforts specifically for the pilot program. For example, one OIG reported using a monthly report to provide a snapshot of the status of complaints and of when the 180-day investigative period would end for each complaint—a specific time frame that is part of the pilot program’s enhancements to whistleblower protections. See table 1 for various methods used by OIGs to implement the pilot program. Under the pilot program as implemented, contracting officers are also required to include a FAR clause—which instructs contractors to communicate to their employees, in writing and in their predominant language, their protections under the pilot program—in new contracts (contracts awarded after September 30, 2013) that exceed the simplified acquisition threshold, generally over $150,000. All 14 departments reported in the survey that they had required insertion of FAR clause 52.203-17 into new contracts as a means of ensuring that contractor employees are informed of their rights under the pilot program. In addition to the clause, 2 departments reported taking additional steps to ensure contractors are informing their employees of their rights. One department reported developing new guidance that will require its contracting staff to obtain email confirmation from contractors that they have notified employees of their rights. 
Also, during a roundtable discussion we conducted with senior procurement officials, an official from another department said that the department had conducted forums with contractors to inform them about the importance of the pilot program and to gather feedback about challenges. Despite using various mechanisms to implement the pilot program, most of the 14 OIGs identified ambiguities and some challenges with the pilot program. For example, over half of the OIGs identified each of the following as a challenge that they experienced while implementing the pilot: Ambiguities in the pilot program (10 of 14 departments)—for example, the OIGs reported that there is a lack of guidance regarding the definition of a “frivolous” allegation. Personnel or funding (9 of 14 departments)—for example, the OIGs reported that these are complex cases whose investigations can be extensive and consume significant investigative manpower. Timeliness requirements for investigating reprisal complaints (8 of 14 departments)—for example, the OIGs reported that it is difficult to determine how much time it will take to complete an investigation because they have little formal control over non-government entities. Two whistleblower advocacy groups we spoke with echoed these concerns, noting that contractor employees’ reprisal complaints can take a backseat to other issues because OIGs may have limited resources or other priorities, such as investigating federal employee complaints. Given these limited resources, one of the groups said that it had started to offer training on whistleblower protections during the implementation of the pilot program to help OIGs better understand issues such as what is considered a covered disclosure or which personnel actions taken to the detriment of a contractor employee may constitute reprisal. 
Four selected departments—Commerce, Homeland Security, Interior, and State—used various processes for implementing the pilot program, and some had not yet fully implemented the program. In particular, OIGs of these departments reported that they provided internal training on the protections provided by the pilot program. Further, the OIGs reported that they either had existing guidance or developed guidance during the implementation of the pilot program; however, we found that the guidance was lacking in certain details. Moreover, the pilot program requires that the departments’ OIGs forward a report of their investigation findings to several entities, but we found two OIGs with completed investigations that did not fully implement these reporting requirements. Additional details of contractor and subcontractor employees’ reprisal complaints submitted to the selected departments and the handling of the complaints are included in appendix III. In addition, within the four departments’ contracting offices, some of the new contracts we reviewed did not include one of the FAR clauses required by law, and none of the four departments has policies in place to make best efforts to include a required FAR clause in major contract modifications, as required by the pilot program. Finally, departments have not taken full advantage of opportunities to improve communications between department officials and contractors to help make contractors’ employees aware of their protections from reprisal for disclosing potential wrongdoing. At the four selected departments we reviewed, the OIGs reported that they provided internal training on the protections provided by the pilot program. For example, an official at State reported having training available not only for OIG staff but also for contracting officers. Interior officials reported that they had developed detailed training slides that cover several whistleblower laws, including the pilot program protections. 
Homeland Security officials reported that they had a slide dedicated to the pilot program in their whistleblower training slides, but also said additional training would be helpful. Commerce OIG officials reported that Office of Special Counsel and Department of Justice officials provided training related to whistleblower protections to the OIG staff. Commerce, Homeland Security, Interior, and State OIGs all reported having guidance in place to implement the pilot program, but that guidance varied and lacked certain details regarding the provisions in the pilot program. Specific details follow: Commerce OIG officials provided a flow chart and a legal memorandum as the pilot program guidance; these documents detail the OIG and department responsibilities under the pilot program. Commerce OIG also has guidance related to conducting investigations, but not specifically those that fall under the pilot program. Commerce officials we spoke with said that these documents are sufficient as guidance to effectively implement the pilot program. However, we noted that while the flow chart provides a description of the pilot program, it does not include some program details that would facilitate implementing the program, such as identifying to which offices within Commerce a report should be sent following an investigation. For example, it does not identify which office is the “head of the contracting activity” or the designee to which a report should be sent. Further, the investigations guidance may benefit from incorporating some elements of the flow chart specific to the pilot program. Homeland Security OIG officials provided a directive as the pilot program guidance. The directive outlines OIG responsibilities under the pilot program, including intake and investigation procedures, as well as a process for tracking complaints. Homeland Security officials we spoke with said this directive is thorough. 
However, we noted that the directive does not include the FAR 3.908-5 requirement to send the investigation findings to the head of the contracting activity, and we believe there may be an opportunity to include more guidance. When we asked about the FAR requirement, OIG officials said they believed forwarding findings to the head of the contracting activity is a responsibility of the agency head, who had not previously provided the proper contact to the OIG. Interior OIG officials provided their policy for investigations as the pilot program guidance. However, we noted that Interior’s OIG investigations policy document was not specific to the pilot program processes or protections. Interior OIG officials agreed and reported that if the pilot is made permanent, they plan to change the policy for investigations to include the pilot program details. State OIG officials provided their policy for pilot program investigations as the pilot program guidance. The policy includes instructions on obtaining evidence for pilot program investigations and on the reporting process when an investigation is complete, and it identifies levels of review. The policy instructs State officials to share investigation findings with the agency head; however, we found it does not specify how that information should be communicated. A State OIG official said that the report of findings is communicated to the agency head through a system that allows memoranda to be submitted as either an action memorandum or an information memorandum. State OIG officials reported that initially they had submitted information memoranda because action memoranda traditionally have a one-page limit, which is insufficient to communicate the findings of an investigation. 
However, according to State OIG officials, in 2016, the Office of the Executive Secretariat (which handles executive communication) requested that the OIG put its whistleblower reports in the form of an action memorandum, but this change has not been put into guidance. An action memorandum signals that action by the agency head is required, while an information memorandum does not. Although a determination by the agency head is required by law, we noted that the OIG guidance does not specify that action memoranda, signaling that action is to be taken, should be sent to the agency head. According to federal internal control standards, management should internally communicate the necessary information to achieve the entity’s objectives. This can be achieved through clear guidance or policies. Further, FAR 3.908-5 establishes pilot program requirements, and department guidance should include the requirements laid out in the FAR, such as time frames for determinations by the agency head and who receives copies of the investigation results. Although the four selected OIGs all provided some level of guidance on executing the pilot program, some steps in this process may be missed because the OIGs do not have detailed guidance that addresses all required elements of the pilot program. Without more detailed guidance, these departments may be at risk of not fully implementing all the provisions of the pilot program. The pilot program statute and implementing regulations require that the OIG forward a report of its investigation findings to several entities, including the agency head, the complainant, and the contractor. Additionally, the FAR requires that the agency’s head of the contracting activity also receive a report of investigation findings. Of our four selected departments, Commerce and Interior reported that they did not have any investigations finalized during our review period. 
In contrast, Homeland Security and State had investigations with findings that were not forwarded to all appropriate entities to allow the agency head to make a final determination of whether reprisal occurred. Specific details follow: Homeland Security OIG officials reported that they found the complaints to be unsubstantiated in their two investigative reports and reported forwarding the findings from their two investigations during this period to the contractor and the complainant. However, although OIG officials reported attempting to send the reports to the agency head, department officials reported that the reports did not reach the appropriate contacts, and as a result the agency head did not make the determination in either case, as required by law. Agency officials reported that in one case a report was sent to the Office of General Counsel Labor and Employment division, not the agency head, and that in the other case the report was forwarded to the Secretary’s office, but nothing was done with it. OIG officials said their implementation of the pilot was an evolving process and that they were not notified that the reports had gone to the wrong person. State OIG officials reported that the five investigations completed by December 31, 2015, were forwarded to the agency head and that the results of the OIG investigations were communicated to both the complainant and the contractor. During the course of our review, a State OIG official said that he had previously sent the reports to the relevant contracting activity at each Bureau, as designated by the Department of State Acquisition Regulation, in an effort to meet the requirement to provide the investigation results to the head of the contracting activity. 
However, starting in October 2016, officials said the OIG plans to send reports to State’s Procurement Executive, the head of the contracting activity at the State Department, whom the agency head designated during the course of this review to make determinations of potential reprisal and to provide remedies. For these five cases, a State OIG official reported that the complaints were unsubstantiated and that the OIG forwarded all findings as information memoranda to the agency head. The information memoranda include a cover page indicating the investigation’s findings and stating that the Secretary should review the report for informational purposes, but there is no indication on the cover page of actions required—including that the agency head has 30 days to make a determination—because a determination had been made by the OIG. However, the pilot program requires that even if the OIG determines the reprisal complaint is unsubstantiated, the agency head must make the final determination. Officials from the agency head’s office at State explained that for the five cases in which information memoranda were provided to report the investigations’ findings, they understood that no action was required, and no action was taken; however, the responsibility to make a final determination of whether reprisal occurred under the pilot program remained. As a result, no documentation exists indicating that the agency head agreed with the investigations’ findings. According to the statute, however, the agency head, not the OIG, must make the determination of whether reprisal occurred within 30 days of receiving the report of investigation findings. 
During our review, and in part as a result of our ongoing work, OIG officials said that in June 2016 State instructed the OIG to provide the results of its investigations as action memoranda, rather than information memoranda, for both substantiated and unsubstantiated investigation findings of a reprisal complaint, in order to indicate that action from the agency head is necessary. In addition, the action memorandum now includes the 30-day requirement for the agency head to make a determination as to whether the employee was subjected to reprisal. As a result of these changes, two additional investigation findings from October 2016 were reported to the agency head in the action memorandum format. In addition to investigating reprisal complaints, the pilot program required a new FAR clause to be inserted into contract actions; this action is to be accomplished by the departments’ contracting officials. As discussed earlier, the FAR clause instructs contractors to communicate to their employees, in writing and in the predominant language of their workforce, their rights under the pilot program. These rights include to whom an employee may report an initial disclosure or submit a reprisal complaint, the right to an investigation for covered reprisal complaints, and other rights and remedies. The FAR clause is required to be inserted into new contracts over the simplified acquisition threshold, generally $150,000, for any contracts awarded after September 30, 2013, until the close of the pilot program on July 1, 2017. For commercial item acquisitions, contracting officers must insert an already-required clause, 52.212-4, which now requires compliance with the pilot program statute, 41 U.S.C. § 4712. Commerce, Homeland Security, Interior, and State contracting officials reported that they use the FAR clause to inform contractors of their responsibilities. 
However, we found that at State, Commerce, and Homeland Security, contract writing systems may not automatically include the clause in contracts that require it; in some cases, a contracting officer had to insert the FAR clause into each applicable contract manually rather than through an automated system. At Interior, officials said the clause would be automatically inserted into new awards as appropriate; however, we found the clause was not inserted in all contracts that we reviewed. Internal control standards require that an entity establish monitoring activities and evaluate results. However, all four selected departments reported having no department-wide, regular compliance review that would detect whether the required FAR clause is included in applicable contracts. For example, Commerce officials reported that while they do have a compliance review that checks for the insertion of mandatory clauses, the most recent review, conducted in 2014, covered contract actions from 2011 through 2013, and no review has been done since; therefore, no department-wide review has examined the inclusion of the FAR clause required by the pilot program. A contracting official from Homeland Security said that all contracting officers, as part of the review process before a contract is signed, are required to review contract actions to ensure that all applicable clauses are included; however, no department-wide review is done. Contracting officials from Interior said that, while they conduct contracting compliance reviews, they do not include specific clauses in those reviews unless the agency has a specific reason to do so, such as a determination through risk analysis that the clause may not be included. To date, according to these officials, Interior has not checked compliance with the inclusion of the FAR clause. 
Officials from State reported that they rely solely on supervisory review of contract documents and that there is no higher-level compliance review to determine whether the FAR clause is inserted into new contracts. Despite the acknowledgement from all four departments that the required clause was to be included in new contracts, we found that some contracts in our review lacked the required whistleblower protections FAR clause 52.203-17 or, for commercial item contracts, 52.212-4. The contracts for Homeland Security, Interior, and State were not commercial item contracts, but the contract for Commerce was. At Commerce, a contract awarded in September 2015 for more than $450,000 for computer hardware and software licenses provided by contractors did not include the required whistleblower protections FAR clause or the commercial item contract clause. Further, at Homeland Security, a contract awarded in September 2015 for over $550,000 for the design and implementation of security software did not include the required whistleblower protections FAR clause. In addition, at Interior, a contract awarded in August 2015 for about $200,000 to perform research and development did not include the required whistleblower protections FAR clause. At State, a contract awarded in September 2015 for over $230,000 for project development and design services also did not include the required whistleblower protections FAR clause. Without a process in place to ensure the required contract clause is inserted into new contracts, these clauses may continue to be excluded. If acquisition officials fail to include the required clauses and fail to take other action that would inform contractor employees of their rights under the pilot program, contractor employees may not be aware of their rights. The pilot program requires that executive agencies make a best effort to include the FAR clause in major contract modifications of existing contracts awarded before July 1, 2013. 
Officials from Commerce reported that they do not include the FAR clause in major modifications, but pointed out that the standard FAR convention for incorporating clauses into existing contracts allows the contracting officer to use discretion. Homeland Security officials also noted that contracting officers are encouraged to include the clause in major modifications to applicable contracts and task orders. Interior officials reported that it is up to the bureaus within Interior to decide whether the clause is inserted into major modifications, and there is no department-wide policy. State officials reported that the clause is added on a case-by-case basis, and contracting officials are responsible for determining whether it is necessary to add the clause. Contracting officials at all four departments said they do not have a policy in place that defines major modification, or any policy or guidance that instructs them on how to determine whether a modification would be considered “major” or on what the contracting officer should do to make a best effort to include the FAR clause. Some contracting officials reported that even though the statute contains requirements regarding making best efforts, they rely on the FAR and generally do not seek out additional counsel on the implementation of the law. However, the requirement to make best efforts to include the FAR clause in existing contracts (those awarded before the effective date of the pilot program) during major modifications of those contracts is not implemented in the FAR. In the FAR interim rule, agencies are only “encouraged” to put the clause in major modifications, and there is no mention of “best efforts” to do so. As a result, some departments’ officials who rely on the FAR guidance and rules may not be aware of the statutory requirement to make a best effort to include the FAR clause in major modifications to contracts awarded before July 1, 2013. 
Additionally, some of the contracting officials we spoke with said there may be costs associated with asking a contractor to include the clause during a major modification of an existing contract. However, contractors we spoke with said that adding the FAR clause would be largely administrative and that they would be unlikely to ask for additional compensation to do so. Further, one contractor we spoke with pointed out that the company he represents would be hesitant to argue against including the FAR clause because the contractor understood and agreed with the importance of protecting whistleblowers from potential reprisal. Without a department-wide policy in place to determine whether to include the FAR clause in an existing contract during a major modification, and without a definition of what constitutes a major modification, it may not be possible for these departments to ensure their contracting officers are making a best effort to include the clause in existing contracts awarded prior to July 1, 2013, as required by the pilot program. Some contractors we spoke with were unaware of their obligations under the pilot program. These contractors have received federal funds not only from one or more of the four selected departments in our review but also from other federal agencies. They pointed out that they generally have not been contacted by agencies to follow up on what steps or actions they have taken to communicate in writing to employees about their rights against reprisal. However, another contractor pointed out that agencies have followed up and sought confirmation or attestation on other contract clauses, such as clauses designed to address human trafficking. In addition, one whistleblower advocacy group we spoke with noted that contractors’ employees may not be aware of their rights or where to find more information about the pilot program protections. 
This reinforces the need for agencies to ensure the mechanisms are in place for contractors to communicate these rights to the covered employees. At the four selected departments, department officials reported taking no additional action beyond inserting the FAR clause to inform contractors about their responsibilities to communicate to their employees—in writing and in the employees’ predominant language—their rights under the pilot program. Some officials noted that contractors are responsible for implementing FAR clauses, and if they do not do so, they are in breach of the contract. Federal internal control standards, under the information and communication standard, note that management should externally communicate necessary information to achieve its objectives. Given that not all of the contractors we spoke with were aware of the need to communicate to their employees about their rights in this area, opportunities exist to improve communications between the two parties. For example, one department in our survey of 14 departments reported conducting external communication beyond including the FAR clause in new contracting actions by developing new guidance that will require its contracting staff to obtain email confirmation from contractors that they have notified their employees of their rights. Without additional communication between the four departments and their contractors about the requirements and protections provided by the whistleblower protections pilot program, contractors may not fully understand or appreciate the significance of their responsibility to communicate to their employees. Executive departments have an opportunity to help reduce fraud, waste, abuse, and mismanagement of government funds by leveraging the willingness of contractor, subcontractor, and grantee employees to report such instances. 
Because whistleblowers risk reprisal, including potential job loss, agencies must ensure those contractor employees are aware of their protections against reprisal. To fully implement the Pilot Program for the Enhancement of Contractor Employee Whistleblower Protections, especially now that it has been made permanent, and to ensure that the review process does not stop short of the agency head review, OIGs must report their investigation findings to the agency head. When reports are not forwarded to the agency head for final determination, the requirement under the statute is not met. Further, the determination of the agency head may differ from that reached by the OIG, possibly affecting the complainant’s recourse. At the four selected departments reviewed, confusion among department officials about the pilot program’s processes and requirements remains, and further guidance may help clarify responsibilities under the pilot program. Further, opportunities exist for these four departments to ensure that the necessary FAR clause is included in all required contracts, and that they make a best effort to include the FAR clause in major modifications to existing contracts. Finally, improving communication with contractors, subcontractors, and grantees to ensure employees are aware of their responsibilities and rights under the pilot program is an important step for the selected executive departments’ contracting officials to take. By fully implementing the pilot program, these departments can encourage contractor personnel to disclose evidence of wrongdoing. Without these critical oversight elements of contracts, contractor employees may be unaware of the protections they have against reprisal, which may ultimately impact their willingness to come forward when witnessing fraud, waste, abuse, and mismanagement. 
We recommend that the Inspectors General of Commerce, Homeland Security, Interior, and State develop or clarify existing guidance on the implementation of the pilot program. For example, the guidance should identify specific pilot program processes such as levels of review during an investigation, and where the findings of investigations are to be reported. We also recommend that the Secretaries of Commerce, Homeland Security, Interior, and State develop policies and processes to help ensure that: (1) FAR clause 52.203-17 is inserted in new contracts and major modifications as appropriate; (2) contracting officials can determine whether a modification is major, whether the FAR clause applies, and whether they are making best efforts to include the clause in existing contracts during major modifications; and (3) contracting officials communicate with contractors and subcontractors to help ensure employees are informed about the requirements and protections provided by the whistleblower protection pilot program. We provided a draft of this product to the Departments of Commerce, Homeland Security, Interior, and State for comment. All four departments concurred with the recommendations. The agencies’ comments are summarized below, and written comments from Commerce, Homeland Security, and State are reproduced in appendices IV, V, and VI, respectively. Interior agreed with the recommendations in an email. We also received technical comments from Commerce, Homeland Security, and State, which we incorporated as appropriate. In Commerce’s written comments, the Department said the differences between the statute and the FAR regulations need to be addressed, and agreed to encourage contractors to communicate with their subcontractors about the requirements and protections of the pilot program. The Commerce OIG agreed to incorporate some of the guidance in its policy manual into its flowchart guidance, and to revise its investigative policy manual as necessary. 
In Homeland Security’s written comments, the Department agreed to review processes to ensure the FAR clause is inserted into new contracts, develop policies and procedures to ensure contracting officers have clear guidance on when to incorporate the FAR clause, and communicate broadly with those who do business with the Department to remind them of their contractual obligation under the pilot program. The Homeland Security OIG has updated its directive in accordance with our recommendation. In an email, Interior noted that the Department plans to develop supplemental guidance in fiscal year 2017 to assist contracting officers in appropriately applying the FAR clause and remind them of their responsibility to communicate the requirements of the clause to their contractors and subcontractors where possible. In State’s written comments, the Department agreed to ensure that the FAR clause is inserted in new contracts and major modifications, assist contracting officers to determine whether a modification is major and whether they are making best efforts to include it, and assist contracting officials with communicating to contractors and subcontractors to help ensure contractor employees are informed about the requirements under the pilot program. The State OIG has updated policies to include the 30-day deadline for agency head determination in whistleblower reports, accommodate the agency head’s specifications for sending the report, and specify that the Procurement Executive is the Secretary of State’s designee for whistleblower investigations. We are sending copies of this report to the appropriate congressional committees, the Secretaries of Commerce, Interior, Homeland Security, and State, and to other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or [email protected]. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VII. The National Defense Authorization Act for Fiscal Year 2013 contained a provision for us to evaluate and report on the implementation of the Pilot Program for the Enhancement of Contractor Employee Whistleblower Protections (pilot program). In December 2016, Congress enacted legislation making the pilot program permanent. Our report: (1) describes the results of the whistleblower pilot program between July 1, 2013, and December 31, 2015, across 14 executive departments; and (2) assesses the extent to which four selected departments implemented the pilot program. To describe the results of the whistleblower pilot program, we surveyed the Office of Inspector General (OIG) at the 14 executive departments covered by the legislation on the reprisal complaints received between July 1, 2013, and December 31, 2015. In this report, we use the terms “agency” and “agency head” when referring to provisions of the whistleblower protections pilot program legislation in general because the legislation uses these terms. We use the term “departments” when we refer to the 14 executive departments defined by statute and covered by the whistleblower protections pilot program that were the focus of this review. Specifically, we surveyed the OIGs at the Departments of Agriculture, Commerce, Education, Energy, Health and Human Services, Homeland Security, Housing and Urban Development, Interior, Justice, Labor, State, Transportation, Treasury, and Veterans Affairs. We sent the survey questionnaire—by e-mail in an attached Microsoft Word form that respondents could return electronically after completing it—to 14 executive departments on June 15, 2016, and received responses from the OIGs at all 14 departments. 
We coordinated survey responses through each department’s OIG, which consulted with cognizant department officials to respond to questions on an as-needed basis. Among other things, the survey collected information about the number of disclosures of waste, fraud, abuse, and mismanagement as well as reprisal complaints and mechanisms used by executive departments to implement provisions of the pilot program. For each department, we asked officials to provide information about activities and data related to whistleblower complaints, including data on complaints that were not subject to the pilot program. For pilot program-related information, we requested data on contractor, subcontractor, and grantee employees, such as the number of complaints received from each group and how many of the complaints were investigated by the OIG. When necessary, we performed limited follow-up with all 14 departments to clarify answers and request relevant documentation; this follow-up took place from July 26, 2016, to December 8, 2016. We did not independently verify information obtained through the survey, including the case numbers the departments provided; however, to determine whether the information was reliable for our purposes, we asked the departments to describe the source(s) of information used and steps taken to determine these numbers. We believe these data are reliable for our purposes. The survey used for this study is reprinted in appendix I. Since this was not a sample survey, it has no sampling errors. However, the practical difficulties of conducting any survey may introduce errors, commonly referred to as nonsampling errors. For example, difficulties in interpreting a particular question, sources of information available to respondents, or entering data into a database or analyzing them can introduce unwanted variability into the survey results. 
We took steps in developing the survey, collecting the data, and analyzing them to minimize such nonsampling error. We conducted three telephone pretests of the survey instrument with officials at three departments to ensure that questions were clear, comprehensive, and unbiased, and to minimize the burden the questionnaire placed on respondents. An independent reviewer within GAO also reviewed a draft of the questionnaire prior to administration of the survey. We made changes to the content and format of the questions based on feedback from the pretests and independent review. In addition to pretesting the survey, we coordinated with the Council of the Inspectors General on Integrity and Efficiency (CIGIE) to hold a question and answer session after releasing the survey. To assess the extent to which departments implemented the pilot program, we selected four departments based primarily on the dollar value of their fiscal year 2015 contract funds awarded, the most recent year available at the time we began our review. To obtain a range of experience level with contracting at departments, we included two departments with higher contract funds awarded (Homeland Security, State) and two departments with lower contract funds awarded (Commerce, Interior). To identify these departments, we ranked department contract funds awarded from highest to lowest, and selected two departments from the top half of the 14 departments, and two from the bottom half. Our secondary criteria included the proportion of contract funds awarded to overall obligations in fiscal year 2015 and whether the departments’ OIG website included mention of the pilot program. At each department, we focused on the department’s handling of reprisal complaints filed by contractor and subcontractor employees. 
We interviewed or obtained written answers from department OIG officials, the office of the agency head, and contracting officials about their processes and practices for the agency duties outlined in the mandate. Where applicable, we reviewed documentation such as relevant policies, guidance, and internal reports. Findings based on information collected from the four departments cannot be generalized to all departments. To identify whether a Federal Acquisition Regulation (FAR) clause was included in contracts as required, we reviewed a non-generalizable sample from each of the four case study departments. To identify an example of a contract without the clause, we reviewed documentation for a random selection of at least 50 contracts at each of the four departments. We used the Federal Procurement Data System–Next Generation (FPDS-NG) to generate a sample of contract actions over $150,000 that were awarded by the four departments included in our review in the fourth quarter of fiscal year 2015. The sample also included orders awarded in the fourth quarter of fiscal year 2015, regardless of the award date of the associated contract. To avoid selecting contracts where the underlying base contract was awarded by another department, we excluded interagency contracts. We asked for contract actions awarded in the fourth quarter of fiscal year 2015 to ensure we were sampling contracts that are required to have the clause and would be reasonably accessible by the departments (e.g., they would likely not be archived). We also excluded task or delivery orders awarded using blanket purchase agreements because we could not consistently determine which department awarded the underlying base contract based on FPDS-NG data. 
For Homeland Security, we excluded contracts awarded by the Coast Guard because the Coast Guard is not covered under the pilot program and its contracts would not be required to contain this clause. We excluded personal services contracts because they are not specifically included in the pilot program statute. We conducted data reliability checks on the FPDS-NG dataset by comparing it to contract documentation obtained from contract files and determined it was sufficiently reliable for our purposes. Finally, in order to learn about challenges experienced during the implementation of the pilot program, we also conducted interviews with contractors and whistleblower advocacy groups. We contacted five large and eight small business contractors based on their contract obligations from fiscal year 2013 through fiscal year 2015, as reported in FPDS-NG. For large contractors, we contacted firms that were listed on FPDS-NG’s “Top 100 Contractors” list for at least two of the four selected departments and in at least two of the fiscal years since 2013, when the pilot program went into effect. For small business contractors, we contacted firms that received among the largest amount of contract obligations at each of the four selected departments in at least two separate years since 2013. We ultimately interviewed or obtained written answers from seven contractors. While information collected from the contractors is not generalizable to all contractors, they provide important perspectives on challenges experienced by both large and small contractors. Lastly, we spoke with two advocacy groups for whistleblowers. We conducted this performance audit from February 2016 to March 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. 
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. This appendix provides additional information on reprisal complaints for the four selected departments—Commerce, Homeland Security, Interior, and State—under the Pilot Program for Enhancement of Contractor Employee Whistleblower Protections (pilot program). Table 2 provides a summary of the data collected regarding reprisal complaints received at each department and the Office of Inspector General officials’ disposition of each complaint. In addition to the contact names above, Penny Berrier (Assistant Director), Mary Diop, Meghan Perez, and Jocelyn Yin were principal contributors to this report. In addition, the following people made key contributions to this report: James Ashley, Lorraine Ettaro, Stephanie Gustafson, Kurt Gurka, Julia Kennon, John Krump, Kate Lenane, Sylvia Schatz, and Roxanna Sun.

Whistleblowers play an important role in safeguarding the federal government against fraud, waste, abuse, and mismanagement. The National Defense Authorization Act for Fiscal Year 2013 introduced a pilot program to expand whistleblower rights against reprisal for executive agencies' contractors, subcontractors, and grantee employees. Also, in 2013, the FAR was amended to require agencies to insert a contract clause to ensure contractors communicate rights to their employees for certain contracts. The act also contained a provision for GAO to report on the status of the pilot program. This report: (1) describes the results of the whistleblower pilot program across 14 selected executive departments from July 1, 2013, to December 31, 2015, and (2) assesses the extent to which four departments implemented the pilot program. 
GAO analyzed survey data from 14 executive departments, which are a subset of all entities covered by the legislation; selected four departments based on high and low contract funds awarded to conduct a more detailed review of the pilot program implementation; interviewed agency officials and contractors; and reviewed a non-generalizable sample of contracts included in the pilot program. The Whistleblower Protections Pilot Program (pilot program) provides enhanced legal protections to contractor employees who believe that they have experienced reprisal as a result of disclosing certain wrongdoings. Among other enhancements, the act expanded the persons and entities to which a whistleblower could disclose wrongdoing and identified which office within an agency has responsibility for handling complaints. For example, under the pilot program, when the Office of Inspector General (OIG) receives a complaint, it must determine whether a complaint is covered by the pilot program and if covered, conduct an investigation and submit the findings to the agency head, complainant, and contractor. The 14 selected departments that GAO reviewed reported receiving an estimated 1,560 whistleblower reprisal complaints from July 1, 2013, through December 31, 2015. Of these complaints, 127 were submitted by contractor, subcontractor, and grantee employees under the pilot program. The 14 OIGs investigated 44 of the 127 complaints but did not find that reprisal had occurred in any of them. The complaints not investigated by the OIGs were excluded for a variety of reasons, such as the complaint was deemed to be frivolous or was being decided by another judicial authority. GAO's in-depth review of four selected departments' implementation of the pilot program found various opportunities for improvement. 
Specific details follow: The pilot program requires findings of investigated reprisal complaints to be forwarded to several entities, including to the agency head for a determination of whether reprisal occurred and, as of December 2015, to the head of the contracting activity. However, at two of the four departments reviewed, the OIGs either did not forward their investigation findings to the appropriate entities or did not forward findings in the necessary format because, according to OIG officials, they were unclear about how to execute the requirement. As a result, at these two departments, the agency heads did not make the determination of whether reprisal occurred as required by the pilot program. Contracting officers must insert the required Federal Acquisition Regulation (FAR) whistleblower clause into contracts exceeding the simplified acquisition threshold, which is generally $150,000, as a method to communicate with contractors about pilot program requirements. However, while the four selected departments reported that they inserted the clause into the required contracts, GAO found new contracts awarded during the pilot program's timeframe that did not include the required clause. Without effective internal control policies, agencies may continue to omit the required clause. Some contractors GAO spoke with were unaware of their obligations under the pilot program. Officials from all four departments reported taking no additional action to communicate to contractors their responsibilities to inform employees of their rights under the pilot program. This is inconsistent with federal internal control standards for communication. Without actions to help contractors fully understand their responsibilities under the pilot program, the departments do not have assurance that contractor employees are also aware of the protections afforded by the pilot program legislation. 
GAO is making specific recommendations to the four selected departments to improve whistleblower protections policies and guidance and communication with contractors. The departments agreed with the recommendations and have taken or identified actions to address the recommendations. |
During the first year of the advocacy review panel requirements’ implementation, OSHA convened a panel for one draft rule and published two other proposed rules for which panels were not held. SBA’s Chief Counsel for Advocacy agreed with OSHA’s certification that neither of these two proposed rules required an advocacy review panel. As of November 1, 1997, EPA had convened advocacy review panels for four draft rules. EPA also published 17 other proposed rules that were reviewed by OIRA for which panels were not held because EPA certified that the proposed rules would not have a significant economic impact on a substantial number of small entities. SBA’s Chief Counsel said that EPA should have convened panels for 2 of these 17 proposed rules—the rules setting national ambient air quality standards for ozone and for particulate matter. Some of the small entity representatives that we interviewed also said that EPA should have convened advocacy review panels for these two proposed rules. EPA officials said that review panels were not required for the ozone and particulate matter rules because they would not, by themselves, have a significant economic impact on a substantial number of small entities. The officials said that any effects that the rules would have on small entities would only occur when the states determine how the standards will be specifically implemented. However, SBA’s Chief Counsel for Advocacy disagreed with EPA’s assessment. He said that the promulgation of these two rules cannot be separated from their implementation, and that effects on small entities will flow “inexorably” from the standards EPA established. 
We could not determine whether EPA should have convened advocacy review panels for the ozone and particulate matter rules because there are no clear governmentwide criteria for determining whether a rule has a “significant economic impact on a substantial number of small entities.” Specifically, it is unclear whether health standards that an agency establishes by regulation should be considered separable from implementation requirements established by state governments or other entities. The Regulatory Flexibility Act (RFA), which SBREFA amended, does not define the term “significant economic impact on a substantial number of small entities.” Although the RFA requires the SBA Chief Counsel for Advocacy to monitor agencies’ compliance with the act, it does not expressly authorize SBA or any other entity to interpret key provisions. In a previous report we noted that agencies had different interpretations regarding how the RFA’s provisions should be interpreted. In another report, we said that if Congress wishes to strengthen the implementation of the RFA it should consider amending the act to provide clear authority and responsibility to interpret key provisions and issue guidance. In our report that is being issued today, we again conclude that governmentwide criteria are needed regarding what constitutes a “significant economic impact on a substantial number of small entities.” Therefore, we said that if Congress wishes to clarify and strengthen the implementation of the RFA and SBREFA, it should consider providing SBA or another entity with clear authority to interpret the RFA’s key provisions. 
We also said that Congress could consider establishing, or requiring SBA or some other entity to develop, governmentwide criteria defining the phrase “significant economic impact on a substantial number of small entities.” Specifically, those criteria should state whether the establishment of regulatory standards by a federal agency should be separated from implementation requirements imposed by other entities. Governmentwide criteria can help ensure consistency in how the RFA and SBREFA are implemented across federal agencies. However, those criteria must be flexible enough to allow for some agency-by-agency variations in the kinds of impacts that should be considered “significant” and what constitutes a “substantial” number of small entities. As of November 1, 1997, EPA and OSHA had convened five advocacy review panels. OSHA convened the first panel on September 10, 1996, to review its draft standard for occupational exposure to tuberculosis (TB). EPA convened panels to review the following four draft rules: (1) control of emissions of air pollution from nonroad diesel engines (Mar. 25, 1997); (2) effluent limitations guidelines and pretreatment standards for the industrial laundries point source category (June 6, 1997); (3) stormwater phase II—national pollutant discharge elimination system (June 19, 1997); and (4) effluent limitations guidelines and standards for the transportation equipment-cleaning industry (July 16, 1997). The panels, EPA and OSHA, and SBA’s Chief Counsel for Advocacy generally followed SBREFA’s procedural requirements on how those panels should be convened and conducted. For example, as required by the statute: EPA and OSHA notified the SBA Chief Counsel before each of the panels and provided him with information on the potential impacts of the draft rules and the types of small entities that might be affected. 
The Chief Counsel responded to EPA and OSHA no later than 15 days after receipt of these materials and helped identify individuals representative of the affected small entities. Each of the five panels reviewed materials that the regulatory agencies had prepared and collected advice and recommendations from the small entity representatives. However, there were a few minor inconsistencies with SBREFA’s specific statutory requirements in the five panels we reviewed. For example, three of the panels took a few days longer than the 60 days allowed by the statute to conclude their deliberations and issue a report. Also, EPA did not formally designate a chair for its panels until June 11, 1996—about 6 weeks later than the statute required. In addition, for its first panel, EPA itself gathered the small entity representatives’ comments and drafted the panel report before the panel was convened. Members of Congress and congressional staff viewed this as an attempt to prejudice the panel members’ consideration, and the practice was changed. For subsequent panels, EPA developed a summary of the comments it had received from small entities before the panels were convened, which it provided to the panel members. The panel members themselves then gathered advice and recommendations from the small entity representatives and drafted the final reports. As of November 1, 1997, two of the draft rules for which EPA and OSHA held advocacy review panels had been published as notices of proposed rulemaking in the Federal Register—OSHA’s proposed rule on the occupational exposure to TB and EPA’s proposed rule to control nonroad diesel engine emissions. The panels’ recommendations for these draft rules focused on providing small entities with flexibility in how to comply with the rules and on the need to consider potentially overlapping local, state, and federal regulations and enforcement. OSHA and EPA primarily responded to the panels’ recommendations in the supplementary information sections of the proposed rules, although OSHA also made some changes to the text of its rule. 
For example, one of the TB panel’s major recommendations was that OSHA reexamine the application of the draft rule to homeless shelters. In the supplementary information section of the proposed rule, OSHA said that it was conducting a special study of this issue and would hold hearings on issues related to TB exposure in homeless shelters. The TB panel also recommended that OSHA examine the potential cost savings associated with allowing TB training that a worker received in one place of employment to be used to satisfy training requirements in another place of employment. In response, OSHA changed the text of the draft rule to allow the portability of non-site-specific training. However, some small entity representatives told us they believed that agency officials had already decided how the rules would be written before convening the panels, and that the officials were not interested in making any significant changes to the rules. Although most of the 32 small entity representatives with whom we spoke said that they thought the review panel process was worthwhile, about three-fourths of them suggested changes to improve that process. Their comments primarily focused on the following four issues: (1) the time frames in which the panels were conducted, (2) the composition of the groups of small entity representatives commenting to the panels, (3) the methods the panels used to gather comments, and (4) the materials about the draft rules that the regulatory agencies provided. Seven of the small entity representatives said they would have liked more advance notice of panel meetings and telephone conference calls with the panels. Some of these representatives said that short notices had prevented them from participating in certain panel efforts. Fourteen representatives said they were not given enough time to study the materials provided before being asked to comment on the draft rules. Five representatives suggested holding the panels earlier in the rulemaking process to increase the likelihood that the panels could affect the draft rules. 
Fourteen small entity representatives thought that the composition of those providing input to the panels could be improved. Specifically, they said that the panels should have obtained input from more representatives of (1) individual small entities, not just representatives from associations; (2) certain types of affected small entities that were not included (e.g., from certain geographic areas); (3) small entities that would bear the burden of implementing the draft rules (e.g., small municipalities); and (4) small entities that were reviewing the draft rule for the first time and that had not been previously involved in developing the draft rules. Nine of the small entity representatives said that the conference calls that OSHA and EPA typically used to obtain their views limited the amount of discussion that could take place. Most of these representatives expressed a preference for face-to-face meetings because they believed the discussions would be fuller and provide greater value to the panels. Although the materials the agencies provided were intended to allow an informed discussion of the rules' potential impacts on small entities, eight representatives said they believed the materials could have been improved. Six thought the materials were too vague or did not provide enough information. However, two representatives said that the materials were too voluminous and complex to expeditiously review. The agency officials we interviewed also offered suggestions for improving the panels. Because you will be hearing from those same officials later in this hearing, I will not go into detail about those suggestions. However, their comments centered on some of the same issues raised by the small entity representatives, including the timing of the panels, the materials provided to the representatives, and the manner by which comments are obtained. 
Many of the agency officials and small entity representatives that we interviewed said they believed the panel process has provided an opportunity to identify significant impacts on small entities and has given the agencies a better appreciation of the small entities’ concerns. However, implementation of the panel process has not been without controversy or concern. Our greatest concern about the panel process is the lack of clarity regarding whether EPA should have convened advocacy review panels for its national ambient air quality standards for ozone and for particulate matter. That concern is directly traceable to the lack of agreed-upon governmentwide criteria as to when a rule has a “significant economic impact on a substantial number of small entities” under the RFA. If governmentwide criteria had been established regarding when initial regulatory flexibility analyses should be prepared (and, therefore, when SBREFA advocacy review panels should be convened), the dispute regarding whether EPA should have convened additional panels would likely not have arisen. In particular, governmentwide criteria should address whether the establishment of regulatory standards by a federal agency should be separated from the subsequent implementation requirements imposed by states or other entities. Some of the concerns that small entity representatives expressed about the panel process may be difficult to resolve. When panels are held earlier in the process, it is less likely that the materials will be fully developed to provide detailed data and analyses to the small entity representatives. However, delaying the panels until such data are available could limit the opportunity for small entities to influence key decisions. How agencies implement the advocacy review panel process will have a pronounced effect on its continued viability. 
If small entity representatives are given the opportunity to discuss the issues they believe are important and see that their input is taken seriously, it is likely that they will continue to view the panel process as a useful opportunity to provide their comments on draft rules relatively early in the rulemaking process. Mr. Chairman and Madam Chairwoman, this completes my prepared statement. I would be pleased to answer any questions. 
GAO discussed the Small Business Regulatory Enforcement Fairness Act's (SBREFA) advocacy review panel provisions, focusing on: (1) whether the Environmental Protection Agency (EPA) or the Occupational Safety and Health Administration (OSHA) had applied the advocacy review panel requirements to all applicable rules that they proposed in the first year of the panel requirements; (2) whether the EPA and OSHA panels, the regulatory agencies themselves, and the Small Business Administration's (SBA) Chief Counsel for Advocacy followed the statute's procedural requirements; (3) any changes that EPA and OSHA made to the draft rules as a result of the panels' recommendations; and (4) any suggestions that agency officials and small entity representatives had regarding how the advocacy review panel process could be improved. GAO noted that: (1) as of November 1, 1997, EPA and OSHA had convened five review panels; (2) EPA and SBA's Chief Counsel for Advocacy disagree regarding the applicability of the panel requirements to two other rules that EPA proposed in December 1996--the national ambient air quality standards for ozone and for particulate matter; (3) specifically, EPA and the Chief Counsel disagree regarding whether the effects of states' implementation of these health standards can be separated from the standards themselves in determining whether EPA's rules may have a significant economic impact on a substantial number of small entities; (4) GAO suggested that Congress resolve this issue by taking steps to clarify the meaning of the term "significant impact"; (5) the agencies and the panels generally met SBREFA's procedural requirements, but there were several differences in how the panels operated; (6) the panels' recommendations regarding the two proposed rules that had been published as of November 1, 1997, focused on various issues, such as providing small entities with greater compliance flexibility and considering the effects of potentially 
overlapping regulations; (7) the agencies generally responded to those recommendations in the supplementary information sections of the proposed rules; and (8) the small entity representatives with whom GAO spoke and, to a lesser extent, the agency officials GAO interviewed, offered several suggestions to improve the advocacy review panel process.
AFMC, headquartered at Wright-Patterson Air Force Base, Ohio, was created in 1992. It conducts research, development, and test and evaluation, and it provides the acquisition-management services and logistics support necessary to ensure the readiness of Air Force weapon systems. AFMC has traditionally fulfilled its mission of equipping the Air Force through: the Air Force Research Laboratory; product centers that develop and acquire the weapon systems; test centers that test the systems; and air logistics centers that service, upgrade, and repair the systems over their lifetimes. In addition, AFMC's various specialized centers are designed to perform other functions, including foreign military sales and delivery of nuclear capabilities. In light of the budget pressures that the Department of Defense (DOD) and, in turn, AFMC faced in recent years, the Office of the Secretary of Defense's Resource Management Decision 703A2 directed that civilian staffing levels for all services be returned to fiscal year 2010 levels. In response, AFMC announced a reorganization plan in November 2011, designed to achieve position cuts and produce efficiencies throughout the command. The reorganization eliminated 1,051 civilian positions and combined the functions of its 12 centers into 5 centers, with each center assuming responsibility for one of AFMC's 5 mission areas: (1) science and technology, (2) life-cycle management, (3) test and evaluation, (4) sustainment, and (5) nuclear weapon support. The geographic locations where the functions of the former 12 centers were performed generally did not change as a result of the reorganization into the 5 current centers. Figure 1 shows the structure of AFMC before and after the reorganization. ESC was one of AFMC's 12 former centers, headquartered at Hanscom Air Force Base, Massachusetts. ESC served as the Air Force's center for the development and acquisition of electronic command-and-control systems. 
Under the reorganization, ESC's functions were consolidated with other centers to become AFLCMC, a center at Wright-Patterson Air Force Base established in July 2012 with responsibility for total life-cycle management of all aircraft, engines, munitions, and electronic systems. Life-cycle management involves the refinement of product requirements to address existing needs, technology development, system development, production and fielding, and ongoing sustainment of the product. Hanscom Air Force Base has two directorates that are responsible for the life-cycle management of electronic systems: (1) Battle Management and (2) Command, Control, Communications, Intelligence and Networks. The directorates are led by PEOs who are ultimately responsible for acquisition of the systems in their portfolio and their timely delivery to the customer. To achieve their mission of acquisition and product support, PEOs are supported by system program managers, each of whom has responsibility for the development and design of support systems for a particular electronic system. PEOs are also supported by functional offices, which provide technical services such as acquisition, engineering, financial management, and contracting. The reorganization affected the reporting chains of command and the workforce composition for some offices, but did not change the acquisition mission at Hanscom Air Force Base. The reorganization affected the reporting chains of command and the composition of the workforce for some offices at Hanscom Air Force Base. Specifically, the reorganization affected the reporting chains of command within PEO directorates by inactivating ESC, removing its 3-star Commander, and integrating the former ESC into AFLCMC, a newly established organization led by a 3-star commander at Wright-Patterson Air Force Base. 
Although Hanscom’s two PEOs continue to report to the Air Force’s service acquisition executive at the Pentagon in performing their mission related to acquisition of weapon systems and product support, under the reorganization they also support the AFLCMC Commander in organizing, training, and equipping the PEO directorates (see fig. 2). The reorganization also affected the reporting chains of command for system program managers, who support PEOs. Prior to the reorganization, system program managers reported to the PEOs for initial system development, system procurement, manufacturing, and testing of weapon systems. Once the weapon systems matured, the functions of the system program manager transferred to an air logistics center where the system program manager reported to the designated acquisition officials for product sustainment responsibilities. The reorganization eliminated the position of designated acquisition officials and, as a result, system program managers report to PEOs at all stages of the product life cycle, including product sustainment. This change affected the functions of PEOs, who under the reorganization have oversight responsibility not just for the acquisition of the weapon systems, as they did under the old structure, but also for the sustainment and product support of these systems. Further, the reorganization affected the reporting chains of command for personnel in Hanscom’s functional offices. Specifically, functional office personnel at Hanscom Air Force Base—who provide technical services such as acquisition, engineering, financial management, and contracting—previously were managed by locally-based ESC leadership. Under the reorganization, they report directly to senior functional managers at Wright-Patterson Air Force Base. As a result of this change, senior functional managers oversee the flow of funding and task assignments that were formerly managed at individual locations, according to Hanscom’s functional office personnel. 
For example, Hanscom officials said that prior to the reorganization, officials at Hanscom Air Force Base could determine what positions required a top-secret security clearance, whereas since the reorganization senior functional managers at Wright-Patterson Air Force Base make these determinations. In addition to its effects on the reporting chains of command, the reorganization also affected the composition of Hanscom's workforce by eliminating about 10 percent of its civilian authorizations. Specifically, the reorganization eliminated 131 of Hanscom's 1,258 civilian authorizations, which consisted exclusively of government positions and did not include contractor positions, according to a Hanscom contracting official. All of these positions were identified by AFMC as overhead. AFMC officials said they targeted overhead positions for elimination, rather than first eliminating vacant positions or making uniform cuts across all centers, in an effort to implement the cuts in a strategic manner. After deciding to focus the cuts on positions identified as overhead, AFMC officials stated that they consulted with all of their product centers to come to an agreement on positions that qualified as overhead. VERA/VSIP are programs that allow agencies to incentivize surplus or displaced employees to separate by early retirement, voluntary retirement, or resignation. The Homeland Security Act of 2002, Pub. L. No. 107-296, §1313(b), authorized these programs under regulations issued by the Office of Personnel Management. The Office of Personnel Management has issued guidance to the agencies stating that these programs may be used when the buyout averts an involuntary separation of the person taking the buyout or another individual who can fill the position that was vacated by the person taking the buyout. The eliminated positions either already were vacant or became vacant as the result of other employees agreeing to leave through VERA/VSIP, and 1 person was removed while in a probationary period (see fig. 3). 
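As a quick arithmetic check on the figures above (a sketch only; the counts of 131 eliminated authorizations and 1,258 total authorizations are those reported in this section):

```python
# Verify that eliminating 131 of Hanscom's 1,258 civilian authorizations
# matches the "about 10 percent" figure cited in the text.
eliminated = 131
total_authorizations = 1258

share_cut = eliminated / total_authorizations
print(f"{share_cut:.1%}")  # prints "10.4%", i.e., about 10 percent
```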
The reorganization did not change the mission of Hanscom’s directorates that are responsible for the acquisition of electronic systems. Our analysis of documentation from Hanscom and Wright-Patterson Air Force Bases showed that the PEOs responsible for carrying out Hanscom’s acquisition mission have remained at Hanscom Air Force Base and no positions were eliminated within Hanscom’s directorates that are directly involved with the implementation of its acquisition mission. Moreover, both of the PEOs at Hanscom Air Force Base who directly manage the acquisition of weapon systems, as well as system program managers who work for them, told us the reorganization did not change the processes for carrying out their mission, or change acquisition and fielding processes and timeframes. While the Air Force recently expanded the portfolios of the two PEOs at Hanscom Air Force Base, Air Force officials attributed this change to an unrelated initiative by the Air Force’s service acquisition executive. In addition, none of the six customers we interviewed identified any changes in how Hanscom Air Force Base components carry out their acquisition functions, including how they interact with and deliver products to the customer. The reorganization resulted in opportunities and some concerns at Hanscom Air Force Base, and AFLCMC has taken steps to facilitate its implementation. Officials at Hanscom Air Force Base and Wright-Patterson Air Force Base, as well as Hanscom customers and contractors, stated that the reorganization resulted in opportunities to help strengthen the delivery of products to customers. These opportunities include increased focus on life-cycle management of weapon systems by PEOs, an increase in collaboration of personnel within the restructured AFLCMC, and greater standardization of processes. Increased focus on life-cycle management. 
According to officials at Wright-Patterson Air Force Base and Hanscom Air Force Base, one of the benefits of the reorganization is the focus on life-cycle management achieved by giving PEOs responsibilities over all phases of the weapon system’s life cycle. By assuming oversight over all phases of the life cycle, PEOs can more efficiently manage the systems in their portfolio, according to Hanscom’s PEOs and system program managers whom we interviewed. For example, one PEO told us that overseeing the system through its entire life cycle has allowed him to be more aware of sustainment-related costs during a system’s development, thus bringing the potential for more long-term value to the customer. Further, three of the six customers we interviewed stated that an increased focus on life-cycle management could result in greater efficiencies and value to the customer in the long term. Increased collaboration within the command. Wright-Patterson Air Force Base and Hanscom Air Force Base officials cited increased opportunities for collaboration as a result of bringing several centers and all of AFMC’s PEOs under the command of AFLCMC. For example, the Commander of AFLCMC and both of Hanscom’s PEOs stated that the reorganization provided PEOs and their staff with increased opportunities to exchange key information related to products. According to one of the PEOs, the sharing of information is especially important when different PEOs are responsible for products that complement each other, such as products that comprise a single weapon system. 
Further, senior functional managers at Wright-Patterson Air Force Base said the establishment of AFLCMC enables functional office personnel from different AFLCMC locations to share technical expertise related to weapon systems under their purview, and an engineering official at Hanscom Air Force Base said that she and her counterparts at other AFLCMC locations have become more aware of each other's needs in carrying out duties such as recruiting and hiring personnel. Greater standardization of processes. AFMC and AFLCMC headquarters officials at Wright-Patterson Air Force Base stated that the reorganization allowed them to standardize processes and avoid duplication associated with each location-based product center maintaining its own set of processes. For example, personnel officials at Wright-Patterson Air Force Base cited the benefits of having a standard process of approving waivers from certain training requirements across AFLCMC. Standardization of processes is one of AFLCMC's six strategic objectives, and the organization has taken steps to promote standardization, including establishing a Process and Standards Board, which led the effort to identify key processes best suited to standardization, such as processes for developing cost estimates by financial management personnel, awarding contracts by contracting personnel, and conducting analysis of information technology systems by engineering personnel. However, a former ESC command staff member expressed concerns about the appropriateness of standardizing certain processes given the specialized needs of each of the former product centers subsumed under AFLCMC. For example, he said the engineering expertise required to support the development of aeronautical systems at Wright-Patterson Air Force Base is different than the engineering expertise and processes required to support the development of electronic systems at Hanscom Air Force Base. 
Current and former Hanscom officials and six contractors we interviewed also raised some concerns associated with the reorganization. These concerns related to increased workload for functional office personnel at Hanscom Air Force Base due to position eliminations there; process delays resulting from centralization of various administrative processes and actions at Wright-Patterson Air Force Base; a lack of full understanding of Hanscom's programs among officials at Wright-Patterson Air Force Base; and the possible future diminished importance of Hanscom Air Force Base as the Air Force's center for electronic systems. AFMC and AFLCMC officials said they do not share these concerns and do not agree that these issues reflect significant problems. Specifically, current and former personnel and contractors we interviewed stated the following concerns. Increased workload. Functional office personnel at Hanscom Air Force Base said they experienced an increase in their workload due to the reorganization. They said they have had to assume responsibility for the tasks previously performed by personnel whose positions were eliminated. For example, an official providing functional support to one of Hanscom's directorates said her colleague had to review immunization records for personnel within the directorate, a task previously performed by other functional office personnel within ESC. This official said her concern was that such tasks could take time away from her office's primary responsibility of supporting the directorate's acquisition mission. Moreover, functional office personnel said due to ESC inactivation and the subsequent elimination of positions providing ESC-wide functional support, they no longer have the capability to maintain some of the projects previously performed at the ESC level. 
For example, Hanscom’s functional office officials stated they discontinued projects, such as a mentoring program for financial management personnel and a knowledge-sharing online resource for engineering personnel. In response, the AFLCMC Commander said Hanscom Air Force Base retained key functional expertise on site because it has remained an operating location for functional office personnel under the new structure. Process delays. In interviews, functional office personnel at Hanscom Air Force Base, system program managers, and two contractors stated that some processes have become more time consuming with senior functional managers at Wright-Patterson Air Force Base approving actions previously approved by ESC leadership at Hanscom Air Force Base. For example, a financial management official at Hanscom Air Force Base said due to the reorganization her office experienced delays in the flow of funds from headquarters at Wright-Patterson Air Force Base, which created concerns about meeting fielding timelines. Similarly, contracting and personnel officials at Hanscom Air Force Base said some processes, such as obtaining waivers from certain standard requirements or filling positions, take longer since they have to wait for approval by AFLCMC headquarters at Wright-Patterson Air Force Base. In the past, officials said these actions could be expeditiously approved by the ESC leadership at Hanscom Air Force Base. A former ESC command staff member stated that these process delays could lead to program decision delays, which could affect the PEOs’ acquisition mission. With regard to centralization of approval authority, AFMC and AFLCMC officials said any delays in approval authority have not adversely affected the customers. Moreover, they said standardizing processes will help reduce duplication and is expected to generate greater efficiencies for the customer in the long term. Lack of full understanding of Hanscom’s programs. 
In interviews, functional office personnel at Hanscom Air Force Base, members of the former ESC leadership team, and two of the seven contractors expressed concerns that AFLCMC personnel at Wright-Patterson Air Force Base, who provide support to all AFLCMC locations, may not have a full understanding of Hanscom's programs. For example, a former ESC command staff member and an engineering official at Hanscom Air Force Base stated that the type of engineering support required for electronic systems is different from the type of support required for other systems that fall under AFLCMC. The engineering official said information technology requirements for airplanes differ from those for electronic systems, and personnel at Wright-Patterson Air Force Base may not have a full understanding of the technical requirements needed to support Hanscom's programs. Similarly, a financial management official at Hanscom Air Force Base said the process of estimating the cost of software applicable to Hanscom's electronic systems is different than the cost-estimating procedures for other types of products such as aircraft engines. While ESC's former Commander credited AFLCMC's leadership with trying to increase the capacity of Wright-Patterson Air Force Base personnel to support Hanscom's electronic systems, a former ESC command staff member stated it may be more difficult to locate the needed engineering and information technology expertise at Wright-Patterson Air Force Base, which may not have as strong a relationship with academia in the Dayton, Ohio, area as Hanscom Air Force Base does in the Boston, Massachusetts, region. 
In addressing the limited understanding of Hanscom’s electronic systems programs by Wright-Patterson Air Force Base personnel, AFMC and AFLCMC officials stated senior functional managers at Wright-Patterson Air Force Base do not require specific expertise in electronic systems because the processes, such as personnel and financial management, apply across systems and programs. Possibility of diminished importance of Hanscom Air Force Base in the future. Hanscom officials and the majority of the contractors we interviewed expressed concerns about the extent of Hanscom’s continued importance to the Air Force. They said the inactivation of ESC as a stand-alone center and the removal of a 3-star commander from the base raised questions among Hanscom personnel and contractors whether the base might be susceptible to closure in the future. Additionally, contractors cited concerns about the loss of an on-site leader who can serve as an advocate for Hanscom’s unique role in the acquisition of electronic systems and as a link between Hanscom and the contracting community that supports these programs. Regarding Hanscom’s future, the AFLCMC Commander told us that AFLCMC fully recognizes the importance of Hanscom’s mission for national defense and plans to retain its core mission implementation functions. AFLCMC has taken steps to facilitate the implementation of the reorganization across all affected locations, including Hanscom Air Force Base. To help manage the reorganization process, AFLCMC established a governance structure that includes the following entities: the 100-Day Taskforce, which addresses administrative issues that may arise in the course of the reorganization; the AFLCMC Council, which meets monthly to track performance against the established metrics; and the Standards and Process Board, which convenes as needed to identify ways to standardize processes across AFLCMC. 
Further, AFLCMC has taken steps to communicate reorganization goals, plans, and progress to stakeholders across the command through mechanisms such as periodic newsletters, teleconferences, and web-based discussion forums. For example, AFLCMC's senior officials said that they hold weekly teleconferences with PEOs at each of AFLCMC's locations, including Hanscom Air Force Base, to better understand the concerns they may be having. AFLCMC also publishes a monthly newsletter that offers a forum for keeping stakeholders informed of issues affecting the new organization, such as the development of new organizational objectives and performance metrics. Other communication mechanisms that AFLCMC officials mentioned include regular visits by the AFLCMC Commander to Hanscom Air Force Base, conferences of personnel across AFLCMC, and encouraging AFLCMC personnel to submit ideas for improvements in the processes of the new organization. In addition, all 10 senior functional managers at Wright-Patterson Air Force Base whom we interviewed stated that they use various mechanisms to regularly communicate with the functional office personnel in different geographic locations, such as video teleconferences, computer cameras, and secure video chats. Hanscom's functional office personnel whom we interviewed had different perceptions regarding the sufficiency of AFLCMC's efforts. Some functional office personnel at Hanscom Air Force Base stated that AFLCMC leadership has been effective in reaching out to them and hearing their concerns. For example, officials from contracting and acquisition offices credited the AFLCMC Commander for making regular visits to the base to discuss the reorganization with the stakeholders and obtain their input. By contrast, other functional office personnel stated existing efforts to address their concerns were insufficient. 
For example, two functional office personnel told us they have raised concerns with AFLCMC headquarters about the reorganization and its effects at Hanscom Air Force Base—such as hiring rules set by Wright-Patterson Air Force Base that do not reflect the realities of Hanscom’s more competitive labor market in the Boston region—and, in their opinion, the leadership did not address them. AFLCMC senior officials said that the various communication mechanisms that they have put in place allow them to obtain and address concerns from stakeholders across each location affected by the reorganization. The effects of the reorganization on Hanscom’s core mission of delivering electronic systems to customers are not yet fully known, and AFLCMC has developed metrics to measure how it is meeting customers’ needs. The effects of the reorganization on Hanscom’s core mission of delivering electronic systems to customers are not yet known, as it is too early to assess changes resulting from the reorganization; also multiple factors unrelated to the reorganization may affect mission implementation. Given that the reorganization went into effect on October 1, 2012, AFLCMC’s Vice Commander, system program managers, and various functional office personnel at Hanscom Air Force Base stated it is too early to know the reorganization’s effects on Hanscom’s ability to meet customer needs. One customer told us it could take several years for his office to discern the effects, if any, from the reorganization, such as changes in Hanscom’s ability to deliver on schedule. Five contractor representatives also stated they have not experienced changes in their relationships with Hanscom Air Force Base as the result of the reorganization, and four of them noted it is too early to know the effect of the reorganization on the contractor community. 
AFMC and AFLCMC officials also stated that it is difficult to attribute to the reorganization any changes in how Hanscom Air Force Base is meeting its customers' needs because of multiple external factors that can affect mission, such as budget changes and decisions made at the Air Force's headquarters and at DOD levels. In addition, when these factors occur nearly simultaneously, it may be difficult to attribute the effects to any particular factor. They said the reorganization at Hanscom Air Force Base coincided with a number of other initiatives affecting the base, all of which could potentially affect Hanscom's ability to meet the needs of its customers. For example, the Air Force restructured the portfolios of PEOs and placed two rather than three PEOs at Hanscom Air Force Base effective July 2012, a decision that two customers told us could affect PEOs' responsiveness to the customer. The change in PEOs' portfolios occurred at the same time that ESC was inactivated as part of the reorganization. Another change involved the reduction in the level of contractor support at Hanscom Air Force Base, which was driven by multiple initiatives unrelated to AFMC's reorganization, such as the Office of the Secretary of Defense Comptroller's Resource Management Decision 802. For example, two of the seven contractors we interviewed reported cuts in their number of contracts with Hanscom Air Force Base, but AFMC and AFLCMC officials stated that such cuts were not related to the reorganization and were driven by other factors, such as the budgetary pressures faced by the Air Force and DOD. AFLCMC established objectives and associated metrics to assess how it is organizing, training, and equipping program offices to fulfill their core mission of delivering electronic systems to the customer. These metrics are designed to measure how AFLCMC is meeting customer needs, rather than the effects of the reorganization itself. 
However, officials said that by assessing acquisition processes and outcomes, the metrics will provide information on how well the reorganization is working. AFLCMC relied on the expertise of its acquisition and product support leaders in developing the metrics. Specifically, AFLCMC assigned each of its six objectives to a team of senior officials, giving each team the responsibility for developing the metrics for an assigned objective and for tracking the metrics to assess attainment of the objective. Senior AFLCMC leaders said that the teams will report on their progress during monthly meetings of the AFLCMC Council, discussing initiatives in support of their assigned objective and the need for any adjustments to the metrics. As of February 2013, the metrics had been approved by AFMC. Table 1 shows the objectives and what the related metrics are intended to measure. A detailed list of AFLCMC’s metrics is provided in appendix II. According to AFLCMC officials, these metrics generally are based on the data that have long been collected at the program or directorate levels; they will be aggregated for all programs within AFLCMC to show how well the new organization is meeting its objectives. AFLCMC senior officials said such aggregated measures will allow them to examine trends across the organization, as well as identify specific areas within the organization where improvement may be needed in organizing, training, or equipping AFLCMC components to better meet customer needs. For example, although program offices have always looked at schedule achievement, the new schedule achievement metric will aggregate this information across all program offices, identify which area of the organization may be lagging behind, and serve as an indicator of whether AFLCMC is fulfilling its responsibilities of assisting program offices with setting realistic acquisition schedules. 
Hanscom’s stakeholders generally agreed that metrics focused on acquisition outcomes—rather than on the reorganization—are adequate measures of how well Hanscom Air Force Base is fulfilling its mission of meeting the needs of its customers. For example, Hanscom’s system program managers, as well as five of its customers, said the key metric of the reorganization’s success is the continuous ability of Hanscom Air Force Base to deliver capabilities to the customer on time, on cost, and within existing regulations and specifications—all of which the new metrics are designed to capture. AFLCMC senior officials said AFLCMC began data collection for the new metrics in February 2013, with measures to be continuously tracked by individual offices and aggregated monthly at the AFLCMC level. AFLCMC intends to rely on existing data systems to minimize the data collection burden, and it has undertaken a number of initiatives, such as enhancing existing information technology tools, to allow data to be aggregated at the AFLCMC level. We requested comments on a draft of this report from DOD. The department provided technical comments, which we incorporated as appropriate. We are sending copies of this report to appropriate congressional committees; the Secretary of Defense; the Chairman, Joint Chiefs of Staff; the Secretary of the Air Force; the Commander, Air Force Materiel Command; and the Commander, Air Force Life Cycle Management Center. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-6912 or by e-mail at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. 
To conduct our review of the reorganization of the Air Force’s Electronic Systems Center (ESC) at Hanscom Air Force Base, we visited or contacted the organizations shown in table 2. In examining the extent to which the Air Force Life Cycle Management Center (AFLCMC) developed metrics to measure how well it is meeting the needs of the customer, we obtained the objectives and the associated metrics developed by AFLCMC. Table 3 presents a summary of AFLCMC’s six objectives and the associated metrics to measure performance against each of these objectives. In addition to the contact named above, GAO staff who made key contributions to this report include Mark A. Pross, Assistant Director; Natalya Barden; Jennifer Cheung; Rajiv D’Cruz; Greg Marchand; Travis Masters; Richard Powelson; Amie Steele; Sabrina Streagle; and Elizabeth Wood.

Electronic command and control systems, which rely on technologies such as radar, satellite, and electronic surveillance, play a critical role in modern-day defense strategy. ESC at Hanscom Air Force Base supported the Air Force's ability to develop and acquire these capabilities. It was inactivated in July 2012 as part of an effort to respond to an initiative by the Office of the Secretary of Defense to reduce civilian positions to fiscal year 2010 levels. The reorganization consolidated ESC into AFLCMC at Wright-Patterson Air Force Base, which manages weapon systems from inception to retirement. Congress directed GAO to assess the effect of the reorganization on Hanscom's mission. This report examines (1) how the reorganization affected reporting chains of command, workforce composition, and the acquisition mission at Hanscom Air Force Base, (2) opportunities and concerns resulting from the reorganization at Hanscom, and (3) what is known about the effects of the reorganization and what metrics have been developed to assess how the new organization is meeting customers' needs. 
GAO evaluated relevant documentation; reviewed data on eliminated positions; and interviewed Air Force officials, selected contractors based on size and proximity to Hanscom Air Force Base, and Hanscom's primary customers. Results from these interviews cannot be generalized but offer stakeholders' perspectives on the reorganization. The reorganization of the Air Force Materiel Command (AFMC) affected reporting chains of command and workforce composition for some offices at Hanscom Air Force Base, but did not change how former components of the Electronic Systems Center (ESC) at Hanscom carry out their acquisition mission. Personnel in functional offices who provide technical services previously reported to the locally-based ESC leadership; they now report directly to senior functional managers at Wright-Patterson Air Force Base, who oversee functional offices across all locations of the new Air Force Life Cycle Management Center (AFLCMC) established by the reorganization. In addition, the reorganization eliminated 131 functional office positions (about 10 percent of Hanscom's civilian positions), which AFMC determined were not directly involved with development, delivery, or sustainment of weapon systems. GAO's analysis of Hanscom's data showed that the eliminated positions included 13 which were unfilled; of personnel in the remaining 118 positions, 15 accepted voluntary-separation agreements, 102 were reassigned at Hanscom Air Force Base, and 1 was removed. The reorganization did not change the mission of directorates that deliver electronic capabilities to customers. Various opportunities and concerns at Hanscom Air Force Base resulted from the reorganization. According to officials at Hanscom and Wright-Patterson Air Force Bases, customers, and contractors, the opportunities include increased focus on life-cycle management of electronic systems, increased collaboration within the command, and greater standardization of processes. 
Hanscom Air Force Base officials and contractors identified some concerns related to increased workload for functional office personnel due to position eliminations, process delays, the lack of full understanding of Hanscom's programs by AFLCMC officials, and whether Hanscom Air Force Base will continue as the center of electronic systems for the Air Force. However, AFMC and AFLCMC senior officials generally did not see these concerns as significant problems. For example, they stated that AFLCMC's senior functional managers do not require in-depth technical knowledge of Hanscom's programs because the functions, such as financial management, apply across programs. AFLCMC's steps to facilitate the reorganization include establishing a governance structure and communicating with stakeholders. The effects of the reorganization on Hanscom's core mission of delivering electronic systems to customers are not yet fully known, but AFLCMC has developed metrics to measure how well it is meeting customer needs. Officials stated the changes went into effect only recently and multiple factors unrelated to the reorganization, such as budget changes, may affect the mission. However, AFLCMC developed organizational objectives and associated metrics in areas such as delivering cost-effective acquisition solutions and providing affordable and effective product support. The metrics, while not designed to measure the effects of the reorganization, are intended to measure how AFLCMC is meeting customers' needs. The data for the metrics will be collected by individual offices and aggregated monthly at the AFLCMC level, according to its senior officials. GAO is not making recommendations in this report. DOD provided technical comments, which GAO incorporated as appropriate.
Since FLSA was enacted, Congress has amended it several times, including recently increasing the federal minimum wage from $5.15 an hour, the rate in effect since September 1997, to $7.25 an hour in three steps over a 2-year period ending in July 2009. In 2007, about 2 million workers were earning at or below the federal minimum wage. FLSA also limits the normal work week to 40 hours and requires that most employers pay 1½ times normal wages, or overtime pay, to eligible employees who work longer hours. Furthermore, FLSA and its regulations limit the types of jobs youth can hold, the number of hours and times of day they can work, and the types of equipment they can use. WHD’s headquarters office, 5 regional offices, and 74 district and field offices with approximately 730 investigative staff are responsible for enforcing employer compliance with labor laws. In 2007, WHD’s budget was approximately $165 million. WHD conducts several types of enforcement actions, ranging from comprehensive investigations covering all laws under the agency’s jurisdiction to conciliations—a quick remediation process generally limited to a single alleged FLSA violation, such as a missed paycheck for a single worker, in which a WHD investigator contacts the employer by phone to try to resolve a complaint received from a worker. WHD also initiates enforcement actions in an effort to target employers likely to violate FLSA. For many years, WHD officials have considered low wage workers to be most vulnerable to FLSA violations. In 2007, about 54 million workers were among this population. Furthermore, WHD officials, researchers, and employee advocates have expressed concerns that foreign born workers, although generally protected by FLSA to the same extent as other workers, may be less likely than others to complain because they may be unaware of federal laws or fear deportation if they are undocumented. 
About 19 percent of low wage workers, as defined by researchers in studies commissioned by WHD, were foreign born in 2007. When WHD finds violations during enforcement actions, it computes and attempts to collect back wages owed to workers and, where permitted by law, imposes penalties and other remedies. Other remedies pertaining to FLSA include the hot goods provision, which allows WHD to prevent the shipment of goods produced in violation of FLSA, and liquidated damages, which permit workers to receive additional damages as a result of minimum wage or overtime violations. If employers refuse to pay the back wages and/or penalties assessed, WHD officials, with the assistance of attorneys from Labor’s Office of the Solicitor, may pursue the cases in the courts. WHD’s partnerships are formal written agreements with external groups—including states, foreign consulates, and employee and employer associations—designed to improve compliance. Its outreach activities include informational materials and seminars for employers and workers designed to improve public awareness of the provisions of FLSA. WHD holds seminars, provides training to employer associations, and distributes materials on FLSA provisions to employers and workers. In addition, as part of its outreach activities, WHD provides technical assistance to employers through its local offices, national hotline, and Web site. WHD, like other federal agencies, is required by the Government Performance and Results Act of 1993 (GPRA) to establish a framework to help align its activities with the agency’s mission and goals. It is also required to develop long-term goals as well as establish performance measures to use in assessing the success of its efforts. Furthermore, to promote agency accountability, it is required to issue annual performance reports on its progress in meeting these goals. 
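The pay rules summarized earlier—a 40-hour normal work week, overtime at 1½ times the normal rate, and the stepped minimum wage increase—reduce to simple arithmetic. The sketch below is illustrative only, not drawn from the report: the function name is invented, and the two intermediate step rates ($5.85 in July 2007 and $6.55 in July 2008) are the statutory amounts under the 2007 amendments, which the report cites only as "three steps."

```python
# Illustrative sketch of the FLSA pay rules described in the report.
# The function is hypothetical; the wage steps are the statutory amounts
# under the 2007 minimum wage amendments (effective month -> hourly rate).
MINIMUM_WAGE_STEPS = {
    "2007-07": 5.85,
    "2008-07": 6.55,
    "2009-07": 7.25,  # final step, July 2009
}

def weekly_gross_pay(hours_worked: float, hourly_rate: float) -> float:
    """Weekly gross pay with FLSA overtime: hours beyond the 40-hour
    normal work week are paid at 1.5 times the normal rate."""
    regular_hours = min(hours_worked, 40.0)
    overtime_hours = max(hours_worked - 40.0, 0.0)
    return regular_hours * hourly_rate + 1.5 * overtime_hours * hourly_rate

# Example: 48 hours at the final $7.25 minimum wage:
# 40 * 7.25 + 8 * (1.5 * 7.25) = 290.00 + 87.00 = 377.00
print(weekly_gross_pay(48, 7.25))  # 377.0
```

The back wages WHD computes during enforcement actions rest on this same arithmetic, applied to each affected worker's hours and the wages actually paid.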
From 1997 to 2007, the number of WHD’s enforcement actions decreased by more than a third, from approximately 47,000 actions in 1997 to just under 30,000 in 2007. According to WHD, although enforcement actions have comprised the majority of its compliance activities, the total number of actions decreased over this period because of three factors: the increased use of more time-consuming comprehensive investigations, a decrease in the number of investigators, and improved screening of complaints to eliminate those that may not result in violations. Most of these enforcement actions conducted from 1997 to 2007 were initiated by complaints from workers. The remaining enforcement actions, which were initiated by WHD, decreased 45 percent over the period, from approximately 13,000 in 1997 to approximately 7,000 in 2007. WHD’s partnerships and outreach activities constituted about 19 percent of its total staff time. From 1997 to 2007, the total number of FLSA enforcement actions WHD conducted decreased, and lengthy, comprehensive investigations made up an increasingly larger share of this total. Of WHD’s total resources, the majority was spent ensuring compliance with FLSA, which covers more workers than the other laws under WHD’s jurisdiction. Based on available data from 2000 to 2007, the majority of staff time spent on FLSA compliance activities—81 percent—was spent on enforcement. However, the total number of enforcement actions, including investigations and conciliations, declined from approximately 47,000 in 1997 to just under 30,000 in 2007, as shown in figure 1. WHD attributed the decrease in the number of enforcement actions to three factors. First, the proportion of comprehensive investigations, which require more staff time than other types of enforcement actions, increased over this period—from 39 percent of all enforcement actions in 2000 to 51 percent in 2007. 
Agency officials said that WHD emphasized comprehensive investigations in an effort to increase future compliance because they provide an opportunity for WHD to educate employers about the laws under its jurisdiction. Second, officials cited the decrease in the agency’s investigative staff—and the loss of experienced investigators in particular—as reasons for this trend. As shown in figure 2, the number of investigators decreased over this period by more than 20 percent, from 942 in 1997 to 732 in 2007. Finally, a senior WHD official told us that the agency now screens out complaints that are not likely to result in FLSA violations more effectively than it did previously. The majority (72 percent) of WHD’s enforcement actions were initiated in response to complaints from workers. From 2000 to 2007, more than half of these enforcement actions—approximately 52 percent—were conciliations, which WHD conducted over the phone. Conciliations were also the quickest type of enforcement action—taking 2½ hours, on average, compared to nearly 35 hours, on average, for other types of enforcement actions. However, conciliations are generally limited to a complaint about a single violation involving only one worker. Although this enforcement action allows initial complaints to be quickly closed, a WHD-commissioned study found conciliations to be associated with an increased probability of detecting violations in subsequent investigations of a specific employer. Further information on complaints handled via conciliations can be found in a companion GAO testimony being released today for this hearing. Nearly all of the remaining enforcement actions initiated by complaints from workers were comprehensive investigations (38 percent) or limited investigations (7 percent). See figure 3 for the types of enforcement actions WHD conducted in response to complaints from 2000 through 2007. 
From 1997 to 2007, the number of WHD-initiated enforcement actions declined by 45 percent, as shown in figure 4. As a proportion of all enforcement actions, those initiated by WHD decreased slightly over the period, from 28 percent of all actions in 1997 to 24 percent in 2007. From 2000 to 2007, in planning and conducting WHD-initiated enforcement actions, the agency primarily targeted four industry groups: agriculture, accommodation and food services, manufacturing, and health care and social services. These four industries generally coincide with those for which WHD had strategic initiatives for increasing compliance for several years: agriculture, restaurants, garment manufacturing, and health care. The agency conducted the largest proportion of WHD-initiated enforcement actions—22 percent—in the accommodation and food services industry. However, at the same time, WHD increased its focus on the agriculture industry from 7 percent of WHD-initiated enforcement actions in 2000 to 20 percent in 2007. The majority of enforcement actions in the agriculture industry—82 percent—were initiated by WHD, while actions in all other industries were usually initiated as a result of complaints. The number of enforcement actions and the proportion of WHD-initiated enforcement actions varied among WHD’s five regions. For example, WHD’s Southeastern region conducted the largest number of enforcement actions—approximately 128,000 from 1997 to 2007. In contrast, the Western region conducted the fewest—approximately 44,000. In addition, because the Western region had a smaller workload of enforcement actions initiated by complaints, nearly half of its enforcement actions conducted from 1997 to 2007 were initiated by WHD, compared to only 14 percent for the Southeastern region. Agency officials said that when states have no minimum wage or overtime standards, or weak enforcement of such laws, WHD regions in which those states are located have heavier complaint workloads. 
Across WHD’s five regions, regions with a greater proportion of states with a minimum wage below the federal level also had a greater proportion of enforcement actions that were initiated by complaints. In the majority of its enforcement actions—approximately 75 percent from 2000 to 2007—WHD found employers in violation of FLSA, and most of these violations were of the overtime provisions of FLSA. In 2007, for example, nearly 85 percent of the FLSA violations WHD found were related to overtime, while 14 percent were minimum wage violations, and 2 percent were violations of FLSA’s child labor provisions. When violations were found, employers agreed to pay some amount of the back wages owed to their workers approximately 90 percent of the time. In addition, the total amount of back wages employers agreed to pay increased by 41 percent, from approximately $164 million in 2000 to about $230 million in 2007—the highest amount for this period. Furthermore, the average amount of back wages per enforcement action nearly doubled, increasing from approximately $5,400 per enforcement action in 2000 to $10,500 in 2007. In most of the cases in which employers agreed to pay (about 94 percent), they agreed to pay the full amount owed to workers. However, in 6 percent of the cases, employers agreed to pay less than the amount they owed—an average of 24 cents for each dollar owed. In addition, WHD could not provide us with data on the amount of back wages assessed that were collected because WHD does not track this information in its WHISARD database. In addition to assessing back wages from employers found to be in violation of FLSA, WHD may also assess penalties for repeated or willful violations, or for child labor violations, but the agency made limited use of these penalties from 2000 to 2007. WHD assessed penalties for 6 percent of the enforcement actions conducted during this period in which it found FLSA violations. 
This percentage increased to a peak of almost 9 percent in 2001, before falling steadily to under 5 percent in 2006. Partnerships and outreach represent a small proportion of WHD’s compliance activities, constituting about 19 percent of all WHD staff time from 2000 to 2007. From 1999 to 2007, the agency established 78 formal partnerships, 67 of which were still in place as of March 2008. Its earlier partnerships were largely with state governments, while more recent partnerships were primarily with employer groups. Other partnerships included worker associations, foreign consulates, and other agencies within the federal government. Overall, there was limited growth in the number of partnerships that WHD established, with a peak of 15 in 2004. According to its partnership agreements, WHD sought to utilize partnerships in several ways to improve FLSA compliance. The most common partnership activity was education, which was specified in 94 percent of partnership agreements. Education encompasses a number of activities, including WHD attendance at seminars and training sessions regarding wage and hour laws and the distribution of pamphlets and other educational materials to workers and employers. The second most common partnership activity was complaint referrals. More than half of the partnership agreement documents contained language that encouraged or provided guidelines for partners to refer relevant complaints to WHD and, in the case of other governmental partners such as state labor agencies, for WHD to refer cases to them. 
Other partnership activities included monitoring agreements, which provided guidelines for employers to use in monitoring themselves or their contractors for potential FLSA violations and reporting violations to WHD; sharing of enforcement information, mainly used in partnerships with other federal or state enforcement agencies; and bilingual assistance, which included the distribution of educational materials in foreign languages and assistance with translation of wage and hour regulations. From 2000 to 2007, WHD conducted approximately 13,600 FLSA-related outreach activities such as seminars, exhibits, media appearances, and mailings. During this period, the percentage of staff time spent on outreach events decreased, from approximately 22 percent in 2000 to 13 percent in 2007. From 2003 to 2007, the largest proportion of outreach events targeted employers, although more diverse audiences have been included in recent years. Over this period, employers were the intended audience for 46 percent of the outreach events WHD conducted. In contrast, workers were the intended audience for 14 percent of events. However, over this period, WHD began to target more diverse groups of non employer groups, including schools, governmental agencies, and community-based organizations. In planning and conducting its compliance activities, WHD does not effectively use available information and tools. First, WHD does not use information, such as data on the number of complaints each office receives or the backlog of complaints for each office, or other information, such as input from external groups. This information could help the agency manage its workload and allocate its staff resources accordingly. Second, in targeting employers for investigation, WHD focused on employers in the same industries from 1997 to 2007, despite findings from its commissioned studies intended to help it focus on low wage industries in which FLSA violations are likely to occur. 
Finally, the agency may not sufficiently leverage existing tools such as hotlines and partnerships to improve compliance with FLSA. In planning its FLSA compliance activities, WHD does not use the following information to focus its work: Information on complaints received from workers. WHD does not use key information regarding the complaints it receives from workers that could help the agency manage its workload. First, WHD does not have a consistent process for documenting the receipt of, or actions taken in response to, complaints. According to guidance on GPRA planning, understanding customers’ needs, such as the demand for WHD’s services in response to complaints, is important to help ensure that an agency aligns its activities, processes, and resources to support its mission and help it achieve its goals. Although WHD’s Field Operations Handbook provides guidelines for recording complaints, and there is a complaint intake screen in the agency’s WHISARD database, the handbook also states that, even if a complaint indicates probable violations, it may be rejected by district office managers based on factors such as the office’s workload or available travel funds. Therefore, WHD staff usually enter a complaint into the database only when it is likely to result in a finding of violations. In addition, although one office we visited maintained separate logs of all complaints received, WHD does not require all complaints, including the actions taken, to be recorded. As a result, WHD does not have a complete picture of all of the complaints it receives and the agency cannot be held accountable for the actions it takes in response to complaints. Backlogs of complaints. Although the number of complaints each office receives greatly affects its workload and ability to initiate investigations, WHD does not have a consistent process for tracking information on complaint backlogs across its offices. 
For data to be useful to GPRA planning and an agency’s decision making, they must be complete, accurate, and consistent. WHD officials told us that the agency’s offices vary in how they track their backlogs of complaints. However, headquarters officials said that they do not track the regional or district offices’ backlogs, nor do they know how they are measured. Therefore, WHD cannot consider these backlogs in its planning efforts, including its allocation of staff resources to its regional and district offices. Input from external groups such as employer and worker advocacy organizations with an interest in WHD’s activities. In the past, WHD held meetings with external stakeholders—organizations with an interest in the agency’s activities—at a national level, but more recently, the agency has relied on second-hand information from its district offices to identify the concerns of these groups. GAO has reported that it is important to involve external stakeholders in the planning process, such as developing goals and performance measures. Agencies that have involved these external groups report that this cooperation has allowed them to more effectively use their resources. According to agency headquarters officials, prior to 2000, WHD held meetings at a national level with external organizations such as industry groups, advocates, unions, and state officials. Around 2000, WHD began relying instead on its district office staff to gather input on external stakeholders’ concerns and provide this information at WHD’s annual planning meetings. However, these planning meetings are not held until after the agency’s national and regional priorities are set, thereby limiting external stakeholder input in the early phases of the process. In addition, WHD headquarters officials said its district offices report input from external stakeholders as part of annual performance reports submitted to the regional offices. 
However, we found little evidence of stakeholder recommendations in WHD’s planning and reporting documents. State labor regulations and levels of enforcement. In planning the allocation of staff to its regional offices, WHD does not consider information on state labor laws or the extent to which these laws are enforced for the states covered by the district offices in each region. According to GPRA guidance, understanding the external environment in which its offices operate should be a key part of WHD’s strategic planning process. Because WHD offices in states with weaker labor laws or enforcement may receive more complaints, these factors may directly affect the workload of WHD’s district offices. For example, according to WHD officials, because the state of Georgia does not conduct investigations of overtime or minimum wage violations, the Atlanta WHD district office has a heavy workload of complaints regarding these issues. Officials told us that WHD headquarters does not consider state laws or enforcement in making allocations of investigators to its regions, and that each region has been allocated approximately five investigators each year for the past few years. From 1997 to 2007, in targeting employers for investigation, WHD focused on employers in the same industries despite obtaining information from its commissioned studies on low wage industries in which FLSA violations are likely to occur. During its annual planning process, the agency develops national and local initiatives that focus on selected industries in which it will conduct investigations. Individual employers within these industries are often selected for these WHD-initiated investigations in one of two ways. WHD either obtains a statistical sample of employers or selects them using the judgment of its staff—for example, by looking through a telephone directory of local businesses. 
Over this period, WHD considered low wage workers to be most vulnerable to FLSA violations, but it did not clearly define who these workers were or identify the industries in which they were concentrated until 2004. Instead, according to WHD officials, the agency relied primarily on its historical enforcement data—the majority of which consisted of actions initiated by complaints—and observations from regional and district officials to focus its compliance activities. WHD centered its work on nine industries, and based many of its performance indicators on garment manufacturing, nursing homes, and agriculture. However, district officials told us that it was difficult to contribute to all of these national goals because few of WHD’s offices are located in areas that have a substantial number of employers in the garment manufacturing industry to investigate. To ensure that all of its offices could contribute to its national goals, and that industries in which workers are less likely to complain were included in its plans, WHD changed its focus to include more low wage industries. In 2002, the agency commissioned a series of studies to define the population of low wage workers, and to determine in which industries these workers were most likely to experience minimum wage and overtime violations. Researchers used data from the Bureau of Labor Statistics to estimate how common and severe minimum wage and overtime violations were throughout all industries. They found that 33 industries had a high potential for violations of the minimum wage and overtime provisions of FLSA, including 9 that ranked highest nationally for violation potential. However, since the completion of the studies in 2004, WHD has not used this information to substantially refocus its efforts or target its investigations. The proportion of WHD-initiated investigations targeting these top 9 industries has risen by approximately 2 percent since 2004. 
Therefore, the investigations initiated by WHD may not have addressed the needs of low wage workers most vulnerable to FLSA violations. Local WHD officials also told us that despite the results of these studies, the focus of their investigations has not substantially changed. For example, the agriculture industry, which is not on the national list of 33 priority industries, was the focus of 16 percent of WHD-initiated investigations from 2005 to 2007. In addition, WHD headquarters officials told us that the agency cannot regularly measure its progress in improving compliance in the 33 industries because it does not have the resources needed to conduct the investigations it uses to evaluate whether compliance has improved. Finally, most district-level WHD officials told us they were not aware of the specifics of these commissioned studies. For example, at one WHD district office, the managers told us brief presentations on some of the studies were provided at management meetings, but copies of the full studies were not provided, and investigators we spoke with at this office said they were not aware of the studies and therefore could not incorporate the results into planning their work. WHD does not sufficiently leverage its existing tools to increase compliance. These include the following: Use of penalties for willful and repeat violations. WHD does not know the extent to which it has leveraged its statutory penalty authority. Although WHD can assess penalties when employers willfully or repeatedly violate FLSA, it does not track how often it finds such violations or how often penalties are not assessed for them. In addition, a study commissioned by WHD showed that, when employers are assessed penalties, they are more likely to comply in the future and other employers in the same region—regardless of industry—are also more likely to comply. 
Although the agency has occasionally addressed the use of penalties in its performance plans—for example, by including a measure for increasing the use of penalties and other remedies in its 2007 plan—WHD managers did not emphasize the importance of these tools by including them in the agency’s performance reports, which are used by external groups to hold the agency accountable. Furthermore, there was no quantifiable goal associated with the measure in the 2007 plan, and officials told us that it was intended only as a reminder to staff that penalties were one tool they could use to encourage compliance. Collection of back wages and penalties. WHD began collecting more data on its enforcement actions in 2000 with the introduction of its WHISARD database. However, the agency does not use information on whether back wages and penalties assessed are collected to determine whether it is fulfilling its mission of ensuring that workers receive the wages they are owed or to verify that employers are being penalized for violating FLSA. WHD headquarters officials in charge of strategic planning told us they do not know whether back wages or penalties are collected from employers, although this information is tracked in its financial accounting systems. They also could not provide information on how long it takes the agency to collect back wages or penalties. Hotlines and office telephone lines. WHD is not fully utilizing its hotlines or its regular office telephone lines to reach potential complainants. WHD has set up some hotlines through partnerships, but these hotlines are not always effective. For example, one partnership set up a hotline targeted toward Latino workers and hosted by the Mexican Consulate. One member of the partnership said that she tested the hotline repeatedly over a 6-month period but the phone was never answered. 
When we made test calls to this hotline asking about wage-related issues, staff either did not refer us to WHD or other government agencies or did not return our calls. Phone systems also vary among WHD’s offices, and only some have the capacity to take messages outside of office hours, when workers with complaints may be more likely to call. For example, at one district office, we were told that they did not have an answering machine on which callers could leave messages after hours because they had no one to return these calls during the day. In addition, state officials and advocates said that some local WHD offices are not always available by phone to help callers with detailed questions. At one district office we visited, investigators said that calls went straight to a voice mail system, where callers were instructed to leave a message and wait for a return call from WHD staff. Partnerships. Although partnerships can help WHD leverage resources and reach potential complainants, some of WHD’s partners, including state labor agency officials, told us that WHD does not always provide adequate support to its partnerships. First, some state officials said that WHD does not notify them of the status of complaints or of actions taken. For example, one state official told us about a case in which an employer violated state and federal labor laws, but WHD settled with the employer without consulting state officials. The state officials said they were unhappy with the settlement, mainly because it resulted in the employer paying less in back wages. Second, WHD has not allowed its investigators to take part in some joint investigations with state labor agencies or sent investigators to events intended to help educate the worker community. Third, several of WHD’s partners told us that the agency has not provided adequate financial support for outreach events, leaving the funding to nonprofit organizations. 
For example, WHD officials in Houston told us that, although one of its partnership’s billboards advertising a hotline for Latino workers needed to be replaced, the office was unable to provide any funding to do so because WHD headquarters had not approved the funds. In California, WHD officials told us they do not support expanding the agency’s Employment, Education, and Outreach (EMPLEO) partnership—which received an award from Harvard University’s Kennedy School of Government for successful innovation—to other areas of the state or holding certain outreach events because these efforts would generate more referrals than the agency could handle. The extent to which WHD’s activities have improved FLSA compliance is unknown, because WHD frequently changes both how it measures and how it reports on its performance. When agencies provide trend data in their performance reports, decision makers can compare current and past progress in meeting long-term goals. While WHD’s long-term goals and strategies have generally remained the same since 1997, WHD often changes how it measures its progress, keeping about 90 percent of its measures for 2 years or less. According to WHD officials, the agency decided to discontinue some of its measures either because they had been met or because WHD realized they were not appropriate. In addition, while WHD specified a number of performance measures each year in its planning documents, it included fewer than one-third of them in its annual performance reports. Moreover, although WHD established a total of 131 performance measures throughout the period from 1997 to 2007, it reported on only 6 of them for more than 1 year. This lack of consistent information on WHD’s progress in meeting its goals makes it difficult to assess how well WHD’s efforts are improving compliance with FLSA. 
Since the first time Labor was required to report on its performance in 1999, WHD has included similar performance goals and strategies related to its FLSA compliance activities in its annual performance reports. For 1999 to 2006, WHD had the general outcome goal of increasing compliance with worker protection laws and, by 2002, also had a more program-specific goal of ensuring that American workplaces legally, fairly, and safely employed and compensated their workers. For 2007, the agency reported on the program-specific goal of ensuring workers received the wages due them. Also, from 1999 to 2007, the agency reported on how it used its three types of compliance activities—enforcement, outreach, and partnerships—to reach its goals. While its goals and strategies did not change, WHD often changed how it measured its progress. From 1997 to 2007, WHD included 131 FLSA-related performance measures in its plans but kept about 90 percent of these for 2 years or less. A majority of these measures—67 percent—were reported for only 1 year. Furthermore, for most of the period from 1997 to 2007, WHD had strategic initiatives for improving compliance in its targeted industries—agriculture, garment, and health care—as well as a strategic initiative designed to measure and reduce recidivism by re-investigating employers it had previously investigated and found in violation of FLSA. However, the agency also frequently changed how it measured progress in both of these areas. For example, although WHD had 10 performance measures for improving compliance in agriculture from 1997 to 2007, it kept only 1 of them for more than a year. These frequent changes to its performance measures have made it difficult for agency officials and outside observers to understand WHD’s progress and for agency officials to make decisions for future strategic planning. 
In a recently issued study WHD commissioned to obtain recommendations for future performance measures for reducing recidivism, researchers found that they could not assess the agency’s progress to date because of the frequent changes in its measures. According to WHD officials, the agency discontinued some of its performance measures because they had been met or were not appropriate. Specifically, WHD officials stated that during their annual planning process, they make ongoing refinements to their performance measures. Throughout the years, the agency has decided to discontinue measures for several reasons, including (1) the agency data it used to assess its progress in meeting the measure were not reliable; (2) agency staff did not understand how the measures related to their work; (3) staff did not believe the agency could influence the measure through its work; (4) the issue the measure was attempting to address was no longer relevant; and (5) the agency had met the targets for the measure repeatedly. For example, although growers typically rotate their crops annually, WHD’s performance measures for the agriculture industry focused on compliance among growers of specific crops, such as lettuce and tomatoes. After 4 years of using various performance measures based on crops, WHD realized that because growers often change crops, this approach was not measuring compliance for the same group of growers over time and discontinued using these measures. In addition to frequently changing its performance measures, WHD does not report on many of the measures. While WHD specified a number of performance measures each year in its planning documents, it included less than one-third of them in its annual performance reports. Of the 131 FLSA-related performance measures, WHD reported on 40 of them (29 percent) in its annual performance reports. WHD officials attributed this lack of reporting to departmental space limitations in annual reports. 
Moreover, although WHD reported on 40 of its performance measures from 1999 to 2007, it reported on only 6 of them for more than 1 year. The agency met 30 of its goals (75 percent) for the measures on which it reported, and meeting the goals was among the reasons WHD officials cited for discontinuing the use of some measures. However, nearly half of the measures WHD met were designed to establish baselines for understanding the current state of compliance or an agency process; they were not meant to measure agency progress. Overall, the lack of consistent reporting further complicates the ability of those within and outside the agency to assess how well WHD’s efforts have improved compliance with FLSA. While WHD is responsible for protecting some of the basic rights of U.S. workers by enforcing FLSA, it does not know how effectively it is doing so. As with all government agencies, WHD must determine how to strategically manage its limited resources to help ensure the most efficient and effective outcomes. Although WHD has been challenged by reductions in its investigative staff, it has not used all available information to promote compliance, such as the studies in which it has invested that could inform how it targets employers for WHD-initiated investigations. In addition, it has not fully leveraged available tools, such as hotlines, office phone lines, and partnerships, that could extend its reach or tracked penalties and collection of back wages to know their impact on compliance. Furthermore, by not consistently measuring and reporting its progress in meeting the unchanging goal of ensuring FLSA compliance, the agency is unable to account for its progress more than a decade after GPRA implementation. 
To more effectively plan and conduct its compliance activities, we recommend that the Secretary of Labor direct the Administrator of WHD to enter all complaints and actions taken in response to complaints in its WHISARD database, and use this information as part of its resource allocation process; establish a process to help ensure that input from external stakeholders, such as employer associations and worker advocacy groups, is obtained and incorporated as appropriate into its planning process; incorporate information from its commissioned studies in its strategic planning process to improve targeting of employers for investigation; and identify ways to leverage its existing tools by improving services provided through hotlines, office phone lines, and partnerships, and improving its tracking of whether penalties are assessed when repeat or willful violations are found and whether back wages and penalties assessed are collected. To provide better accountability in meeting its goal of improving employer compliance, we recommend that the Secretary of Labor direct the Administrator of WHD to establish, consistently maintain, and report on its performance measures for FLSA. We held a meeting with WHD officials on June 20, 2008, in which we discussed our findings and recommendations in detail. At that meeting, they provided comments on our recommendation regarding obtaining input from external stakeholders. We adjusted the recommendation to indicate that they consider stakeholder input only as appropriate. They also indicated that their priorities do not currently include entering information on all complaints received from workers. However, their database would allow them to enter this information. In addition, we provided a copy of our draft statement to WHD, but the agency declined to comment on it prior to the hearing. Mr. Chairman, this completes my prepared statement. I would be pleased to respond to any questions you or other members of the Committee may have. 
For further information, please contact Anne-Marie Lasowski at (202) 512-7215. Individuals making key contributions to this testimony include Revae Moran, Danielle Giese, Amy Sweet, Miles Ingram, Susan Aschoff, Sheila McCoy, John G. Smale, Jr., Jerome Sandau, and Olivia Lopez. To identify the trends in WHD’s FLSA investigations and other compliance activities from fiscal year 1997 to 2007, we obtained and analyzed data from WHD’s Wage and Hour Investigator Support and Reporting Database (WHISARD). The data included information on WHD’s enforcement actions, back wages, penalties, partnerships, and outreach activities. All data we reported were assessed for reliability and determined to be sufficiently reliable for the purposes of this statement. In addition, we gathered quantitative and qualitative information from agency officials on factors that may have influenced these trends, including staff resources. To assess the effectiveness of WHD’s planning and implementation of compliance activities and whether these activities led to improvements in FLSA compliance, we analyzed WHD’s annual performance plans and reports in light of GAO’s work and guidance on strategic planning and performance management for regulatory agencies. In addition, we examined performance assessments conducted by outside experts at WHD’s request. Finally, for all of these research objectives, we interviewed WHD officials at the national and regional level and external organizations representing employers and employees affected by WHD’s compliance activities and visited WHD and state offices in California, Georgia, New Hampshire, Texas, and Wisconsin. We selected these states using several criteria that would provide a mix of characteristics, including the concentration of hourly workers earning at or below the federal minimum wage in each state; the number of formal agreements between WHD and state or local organizations; and geographic diversity. 
We also made test calls to WHD’s local and national hotlines. In addition, we reviewed all relevant laws and regulations. We conducted this performance audit from August 2007 through July 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

Over 130 million workers are protected from substandard wages and working conditions by the Fair Labor Standards Act (FLSA). This act contains specific provisions to ensure that workers are paid the federal minimum wage and for overtime, and that youth are protected from working too many hours and from hazardous conditions. The Department of Labor's Wage and Hour Division (WHD) is responsible for enforcing employer compliance with FLSA. To secure compliance, WHD uses enforcement actions, partnerships with external groups, and outreach activities. In response to a congressional request, we examined (1) the trends in FLSA compliance activities from fiscal years 1997 to 2007, (2) the effectiveness of WHD's efforts to plan and conduct these activities, and (3) the extent to which these activities have improved FLSA compliance. From fiscal years 1997 to 2007, the number of WHD's enforcement actions decreased by more than a third, from approximately 47,000 in 1997 to just under 30,000 in 2007. 
According to WHD, the total number of actions decreased over this period because of three factors: the increased use of more time-consuming comprehensive investigations, a decrease in the number of investigators, and screening of complaints to eliminate those that may not result in violations. Most of these actions (72 percent) were initiated from 1997 to 2007 in response to complaints from workers. The remaining enforcement actions, which were initiated by WHD, were concentrated in four industry groups: agriculture, accommodation and food services, manufacturing, and health care and social services. WHD's other two types of compliance activities--partnerships and outreach--constituted about 19 percent of WHD's staff time based on available data from 2000 to 2007. WHD did not effectively take advantage of available information and tools in planning and conducting its compliance activities. In planning these activities, WHD did not use available information, including key data on complaints and input from external groups such as employer and worker advocacy organizations, to inform its planning process. Also, in targeting businesses for investigation, WHD focused on the same industries from 1997 to 2007 despite information from its commissioned studies on low wage industries in which FLSA violations are likely to occur. As a result, WHD may not be addressing the needs of workers most vulnerable to FLSA violations. Finally, the agency does not sufficiently leverage its existing tools, such as tracking the use and collection of penalties and back wages, or using its hotlines and partnerships, to encourage employers to comply with FLSA and reach potential complainants. The extent to which WHD's activities have improved FLSA compliance is unknown because WHD frequently changes both how it measures and how it reports on its performance. 
When agencies provide trend data in their performance reports, decision makers can compare current and past progress in meeting long-term goals. While WHD's long-term goals and strategies generally remained the same from 1997 to 2007, WHD often changed how it measured its progress, keeping about 90 percent of its measures for 2 years or less. Moreover, WHD established a total of 131 performance measures throughout this period, but reported on only 6 of these measures for more than 1 year. This lack of consistent information on WHD's progress in meeting its goals makes it difficult to assess how well WHD's efforts are improving compliance with FLSA.
Effective February 28, 1994, the Brady Handgun Violence Prevention Act (Brady) requires firearms licensees, such as licensed firearms dealers, to, among other things, request a presale background check on handgun purchasers. Brady calls for implementation in two phases. Under phase I, or the interim provisions, the checks are to be conducted by the chief law enforcement officer (CLEO) in the purchaser’s residence community to determine, on the basis of available records, if the individual is legally prohibited from buying the firearm under the provisions of federal, state, or local law. The sale may not be completed for 5 business days unless the dealer receives an approval from the CLEO before that time. If the CLEO does not contact the dealer within the 5-day period, the dealer may make the sale unless the dealer has reason to believe the transaction would be unlawful. Under the phase II permanent provisions effective November 30, 1998, the 5-day waiting period requirement terminates and presale inquiries for all firearms sales will be made only to a national background check system that will be operated by the Federal Bureau of Investigation (FBI). Since early 1987, Congress has considered various versions of legislation restricting access to handguns. These legislative efforts were labeled “Brady” bills—referring to James Brady, the Reagan administration press secretary who was disabled by a gunshot wound sustained during an attempted assassination of the President. Many of the early legislative efforts called for a waiting period for handgun purchases. The waiting period was designed, in most instances, to allow for the “opportunity” to conduct background checks, not the imposition of a mandatory background check requirement. Often, this opportunity meant that a copy of the application form was to be sent to the appropriate local law enforcement agency. In addition, the waiting period was described as providing a cooling-off period to deter impulse purchases. 
Brady opponents objected to the waiting period and offered amendments or substitute legislation typically calling for systems that would allow point-of-sale background checks to screen out criminals and not delay or otherwise interfere with the rights of law-abiding citizens to buy and own handguns. The current two-phased approach, first introduced in 1991 and described by its original sponsors as a compromise, (1) includes a waiting period that allows CLEOs time to conduct the background check required of them and (2) provides for the eventual point-of-sale background check system. To do this, Brady amends the Gun Control Act of 1968, which contains the principal federal restrictions on commerce in firearms and ammunition. Since passage of the 1968 act, the Bureau of Alcohol, Tobacco and Firearms (ATF) has licensed and regulated manufacturers, importers, dealers, and pawnbrokers in firearms. Under the 1968 act, as amended, those licensees (hereinafter referred to as gun dealers) are prohibited from selling firearms or ammunition to anyone they know or have reasonable cause to believe (1) has been convicted of (or is under indictment for), in any court, a crime punishable by more than 1 year in prison; (2) is a fugitive; (3) is an unlawful user of a controlled substance; (4) has been adjudicated as a mental defective or has been committed to a mental institution; (5) is an illegal alien; (6) is a dishonorably discharged veteran of the Armed Forces; or (7) is a person who has renounced U.S. citizenship. Under the 1968 act, persons purchasing a firearm from a licensed dealer are required to certify their eligibility, but no background checks or other verification of the information supplied is required. In contrast, while including the 1968 prohibitions and also requiring buyers to certify their eligibility, Brady is the first federal legislation providing for presale background checks to verify such eligibility. 
The 1968 act was amended in 1994 to also prohibit sales to any person subject to a court order that “restrains such person from harassing, stalking, or threatening an intimate partner of such person or child of such intimate partner or person, or engaging in other conduct that would place an intimate partner in reasonable fear of bodily injury to the partner or child.” Under Brady’s interim provisions, a prospective handgun purchaser must complete a form—generally referred to as the Brady form (see app. I)—giving his or her name, date of birth, and residence address and certifying that he or she is not a member of various categories prohibited from buying a firearm. Then, within 1 business day, the gun dealer must provide notice of the form’s contents to the CLEO of the area in which the buyer’s residence is located. The CLEO must then “make a reasonable effort” to ascertain within 5 business days whether the sale would violate federal, state, or local law, including research in whatever state and local record-keeping systems are available and the FBI-operated National Crime Information Center files (see fig. 1.1). The CLEO may allow the sale to proceed at any time during the waiting period by advising the gun dealer that the applicant has not been determined to be a prohibited person. Alternatively, if not notified to the contrary, the gun dealer may assume that the purchaser is not disqualified and complete the sale upon expiration of the 5-day period. However, if the search reveals that the applicant is ineligible to receive a handgun, the CLEO is to notify the dealer (without providing the reason) that the sale is denied. The CLEO may also instruct the dealer to refer the buyer to the CLEO if the buyer has questions or otherwise challenges the denial. Such questions or challenges are sometimes referred to as “administrative appeals,” even though practices are somewhat less formal than this term implies. 
For instance, by providing the law enforcement officer additional documentation, a buyer may be able to reverse a denial that initially resulted from inaccurate or incomplete information in the databases searched. Brady also provided a remedy for erroneous denial of a firearm. Generally, any person denied a firearm due to the provision of erroneous information, or who was not prohibited from receipt of a firearm, may bring an action to direct that the erroneous information be corrected or that the transfer be approved. In any such action, the court may allow the prevailing party a reasonable attorney’s fee as part of the costs. Finally, under Brady’s interim provisions, certain specified transactions in states that screen handgun purchasers—e.g., through a permit system or some other procedure for conducting criminal background checks—are exempt from Brady’s waiting period. States that operate an alternative system that meets certain standards have been designated as Brady-alternative states by ATF. As of February 28, 1995, 24 states had systems in place that ATF determined were acceptable alternatives to Brady. Residents, dealers, and law enforcement officials in the other 26 states—the so-called “Brady states”—are subject to Brady’s waiting period requirements (see app. II). “With 285,000 licensees and only 240 ATF inspectors to check their premises and the records that they keep to ensure compliance, it would take approximately 10 years for us to inspect all the gun dealers.” More recently, the number of licensed dealers has begun to decline—to about 220,000 by the end of March 1995—partly as a result of the increase in the license fee required by the Federal Firearms License Reform Act of 1993. Another contributing factor is the 1994 Crime Act, which required that gun dealers certify compliance with state and local law as a condition for a license. 
On the other hand, even if ATF had more resources to inspect gun dealers, there are legislated limits on the frequency of compliance inspections. For instance, under 18 U.S.C. 923, absent reasonable cause or a warrant, ATF can inspect or examine a licensed dealer “not more than once during any 12-month period” to ensure compliance with record-keeping requirements. Finally, while gun dealers are required to maintain a copy of completed Brady forms for at least 5 years, the dealers are not required to report information from the forms to federal authorities. Brady allows CLEOs to retain forms for individuals denied a purchase but requires that all other forms be destroyed within 20 days. CLEOs also are not required to maintain or report data. In fact, various statutory provisions restrict the use of firearms-related information and prohibit the establishment of systems to register firearms, firearms owners, or firearms transactions. Thus, no data were readily available that would allow for monitoring trends in handgun purchases and denials or otherwise judging the impact of Brady. In July 1995, the Department of Justice issued a report on guns and crime in the United States. Among other information, the report noted that:

• Over 40 million handguns have been produced in the United States since 1973. Most guns are not used to commit crimes. Further, most crime is not committed with guns. However, most gun crime is committed with handguns.

• During 1993, there were 4.4 million murders, rapes, robberies, and aggravated assaults in the United States, and more than one-fourth of these violent crimes involved the use of a gun.

• From 1985 through 1994, the FBI received an annual average of over 274,000 reports of stolen guns. By definition, all stolen guns are available to criminals.

• At the request of police agencies, ATF’s National Tracing Center will trace firearms back to their original point of sale. More than three-quarters of the 83,000 guns used in crime that ATF traced for law enforcement agencies in 1994 were handguns.

Policymakers recognize that even a perfect felon identification system may not keep felons from obtaining firearms and that Brady may not directly result in measurable reductions of gun-related crimes. For example, Brady does not apply to transactions between nonlicensed individuals. Tens of millions of handguns are already in private hands. Thus, the apparently sizable numbers of handgun transactions that take place between private individuals, such as at gun shows and even “on the street,” are not subject to Brady’s requirements. In fact, the purpose of Brady is to prevent convicted felons and other ineligible persons from purchasing firearms from licensed dealers. Opponents of Brady point to a 1991 survey of state prison inmates, which showed that 73 percent of those who had ever possessed a handgun did not purchase it from a gun dealer. Generally, opponents contend that it is a mistake to claim Brady prevents criminals from obtaining handguns since anyone denied a purchase from a licensed dealer can easily obtain a gun from another source and will almost certainly do so. Also, denied applicants may have friends or spouses without a criminal record make the purchases from dealers for them. On the other hand, Brady proponents use the same study to counter that 27 percent of those inmates surveyed obtained their firearms from licensed gun dealers and argue that no criminals should be able to buy guns from licensed dealers. Proponents acknowledge that criminal records checks alone will not prevent felons from obtaining firearms but could reduce dealer sales to disqualified persons; complement other crime control measures, such as stiffer mandatory sentences for firearms offenses; and clamp down on illegal gun trafficking. 
Our self-initiated review of the first full year of Brady implementation was designed to determine the following:
• How frequently were the 5-day waiting period and background checks resulting in criminals and other ineligible individuals being denied the opportunity to purchase handguns from federally licensed dealers? (See ch. 2.)
• To what extent had handgun purchase denials resulted in federal follow-up enforcement actions (e.g., arrests and prosecutions) against convicted felons and other ineligible purchasers who falsely completed the Brady form? (See ch. 2.)
• What were the effects of the various legal challenges to Brady? For instance, we were particularly interested in whether background checks of handgun purchasers were being conducted in those jurisdictions represented by CLEOs who had filed lawsuits challenging the constitutionality of Brady. If no background checks were being conducted in certain jurisdictions, we wanted to determine why and what alternative arrangements were permissible or practical. (See ch. 3.)
To obtain a broad understanding of these phase I implementation issues, we contacted a number of relevant governmental and private organizations. For example, we interviewed ATF headquarters and district officials responsible for promulgating Brady regulations and providing training and guidance to CLEOs and federally licensed gun dealers. We obtained additional national perspectives by contacting the following industry and special interest organizations: Americans for Effective Law Enforcement; the Citizens Committee for the Right to Keep and Bear Arms; the Coalition to Stop Gun Violence; Gun Owners of America; Handgun Control, Inc.; the International Association of Chiefs of Police; the Law Enforcement Alliance of America; the National Rifle Association; and the National Sheriffs’ Association.
To obtain information on how frequently the 5-day waiting period and background checks were resulting in denials, we contacted local law enforcement agencies in several Brady states. Our results are not projectable to the universe of denials nationwide. We did not use a nationally projectable sample because (1) it would have involved contacting hundreds of law enforcement agencies nationwide, (2) Brady was less than 1 year old when we began our data gathering, and (3) Brady did not impose any record-keeping requirements on CLEOs. We judgmentally selected 20 state and local law enforcement agencies in 12 (46 percent) of the 26 Brady states. Selection factors—which are discussed in more detail below—included data availability, jurisdictional variety, denial rate variety, and geographic dispersion. In seven of the Brady states we contacted, a state agency conducted background checks for all jurisdictions within the state. In the other five Brady states (13 jurisdictions), local agencies were responsible for the background checks. Also, as noted in table 1.1, 14 of the 20 agencies we contacted were surveyed earlier by ATF for Treasury’s interim report on Brady’s impact. Brady does not require any reports from CLEOs or gun dealers. In fact, Congress has passed various statutory provisions that restrict the use of firearms-related information and prohibit the establishment of systems to register firearms, firearms owners, or firearms transactions. Thus, no data were readily available for monitoring national trends in handgun purchases and denials. Consequently, we relied on the voluntary cooperation and judgment of selected state and local law enforcement officials to provide data on the number and results of Brady background checks performed in their respective jurisdictions. We did not attempt to determine whether the denials were appropriate. For its initial Brady report, ATF had already developed cooperative working relationships with 16 CLEOs in 8 states. 
Thus, after first checking with ATF officials, we selected 14 of those 16 jurisdictions to build upon the already established relationships. We did not select Gwinnett County, Georgia, and Providence, Rhode Island, because those jurisdictions do not maintain cumulative data. Two of the 16 CLEOs selected by ATF have statewide (Kentucky and Ohio) responsibilities for performing background checks of prospective handgun buyers. In addition to selecting these two states, to provide broader coverage we also selected the other Brady states that have a centralized agency with statewide responsibility for performing background checks—Arizona, Arkansas, Nevada, South Carolina, and West Virginia. Finally, because press accounts listed the Fort Worth, Texas, Police Department as having one of the highest handgun denial rates in the nation, we included that jurisdiction in our review, which resulted in a total of 20 jurisdictions. Then, from each of the 20 applicable law enforcement agencies, we obtained available data on the number of Brady handgun purchase forms processed and the number denied during the first year of Brady implementation, February 28, 1994, through February 28, 1995. We used this information to calculate jurisdiction-specific denial rates, as well as an overall denial rate for the 20 jurisdictions. Although we did not verify the accuracy of the data obtained, during our on-site visits to three jurisdictions—Arkansas; South Carolina; and Fort Worth, Texas—and in numerous follow-up telephone calls with the other 17 jurisdictions, we discussed the procedures for gathering and compiling the data and have no reason to believe the data are unreliable. However, the denial rates we calculated are not projectable beyond the jurisdictions covered. In contacting the law enforcement officials in these jurisdictions, we also inquired about the availability and completeness of databases to conduct background checks. 
Our inquiries included questions covering criminal history databases, as well as possible data sources covering drug users, illegal aliens, and other categories of ineligible purchasers. Regarding criminal history databases, for example, we were interested in what course of action was taken if the background search found incomplete records—particularly records showing a felony arrest but not showing a disposition. Also, beyond quantifying denials, we were interested in categorizing and analyzing the various reasons law enforcement officials in the 20 jurisdictions used to deny handgun purchases. However, we found that only 15 of the jurisdictions maintained records (some more detailed than others) showing reasons for denials. Thus, our categorization and analysis of denial reasons is limited to these 15 jurisdictions—6 states, 3 counties, 2 parishes, and 4 cities. Moreover, only four of these jurisdictions—two states, a county, and a city—had sufficiently detailed information to allow us to quantify the number of felony-related denials involving violent crime convictions or indictments. Regarding follow-up enforcement actions on convicted felons and others who falsely complete Brady handgun purchase forms, we interviewed DOJ officials and reviewed documents prepared by DOJ officials responsible for establishing law enforcement policy guidance. From DOJ officials, as well as from ATF headquarters officials, we obtained available information on the number of cases referred to U.S. Attorneys by ATF field offices, the number declined for prosecution by U.S. Attorneys, and the number actually prosecuted by U.S. Attorneys. We then analyzed summary information provided by ATF on the prosecuted cases. The summary information covered the nature of the charges, the individuals’ past criminal histories, and any resulting convictions and sentences.
For example, we were interested in whether the defendants were charged only with lying on the Brady form, or whether form falsification was an ancillary charge added in with other charges. Similarly, we were interested in whether the defendants had criminal histories showing convictions for violent felonies. Finally, we were interested in the types of sentences received by convicted defendants. In studying the implications of the various legal challenges to Brady, we first reviewed the applicable federal district court decisions. Then, to determine the Department of Justice’s position on the legal challenges, we interviewed the Acting Assistant Attorney General, as well as his Special Counsel. Also, we interviewed staff from ATF’s headquarters and Office of Chief Counsel as well as ATF officials in field offices encompassing jurisdictions in which CLEOs have challenged Brady. In so doing, we obtained information and views on (1) whether background checks have been or are being performed in those jurisdictions in which CLEOs have challenged Brady; (2) what ATF’s statutory and/or operational responsibility is with respect to CLEOs and their performance of Brady background checks; (3) ATF’s role with respect to the designation of alternate CLEOs to perform the Brady background checks; and (4) what actions, if any, ATF has taken regarding Brady background checks on prospective handgun buyers in the jurisdictions involved in the lawsuits. We conducted our review in Arkansas; Georgia; South Carolina; Texas; and Washington, D.C., from July 1994 through August 1995 in accordance with generally accepted government auditing standards. The Justice Department, as well as Treasury and ATF jointly, provided written comments on a draft of this report. These comments are included in appendixes V and IV, respectively. We incorporated technical and clarifying comments in the report where appropriate and discussed the more substantive comments at the ends of chapters 2 and 3.
To assess Brady’s results, we calculated handgun purchase denial rates and tried to determine if follow-up enforcement actions were being taken. We and ATF surveyed jurisdictions to determine denial rates. ATF calculated an average denial rate of 4.7 percent in 16 jurisdictions for the first 3 months of Brady implementation and 3.5 percent in 30 jurisdictions for the first year of Brady. We calculated an average denial rate of 4.3 percent in 20 jurisdictions for the first year of Brady. In following up on reasons for denials, we determined that (1) most of the jurisdictions in our survey relied only on criminal history records and (2) comprehensive data on background check results were not available. We were not able to quantify follow-up enforcement actions due to the way cases were coded in DOJ’s databases, but we were able to determine that as of July 1995, at least seven Brady-related cases were successfully prosecuted. Comprehensive data on the number of handgun purchase applications and denials under Brady were not available. Brady contains no reporting requirements, so neither gun dealers nor law enforcement officers are required to accumulate and report statistics on the number of handgun purchase applications processed or denied. In fact, with respect to the protection of individual privacy rights, Brady contains certain prohibitions on the use of Brady-related background information as well as prohibiting the establishment of a registry of firearms, firearms owners, or firearms transactions. Under Brady, after approving a handgun sale, the CLEO who conducted the background check must destroy all purchaser-related information, including the copy of the handgun purchase application form, ATF Form 5300.35 (see app. I for a copy of the form). Moreover, Brady does not require either CLEOs or gun dealers to record and report Brady-related statistics. 
As a result, the accumulation of data on the volume of and the reasons for handgun purchase denials is left to the discretion of the applicable CLEOs. Consequently, attempts to study the results or impact of Brady are largely dependent upon the voluntary cooperation of the CLEOs responsible for conducting the background checks. To develop a systematic approach for monitoring Brady’s impact on the acquisition and use of firearms, in September 1994 the Justice Department’s Bureau of Justice Statistics (BJS) entered into an agreement with the Regional Justice Information Service (REJIS). Under the terms of the agreement, REJIS is designing an information system, called the Firearms Inquiries Statistical System, to routinely collect data from volunteer samples of the estimated 22,000 local law enforcement officers within the Brady states and from state criminal history repositories, the FBI, and ATF. The primary objectives of this information system are to (1) identify, describe, and categorize the procedures used to implement Brady; (2) measure results of Brady in terms of the number of applications accepted and denied, the reasons for the denials, and the actions taken as a result of the denials; and (3) create a database to permit analyses of the use of firearms in the commission of crimes. BJS officials anticipate that initial output under the system will be available in early 1996. In the interim, ATF has conducted two limited-scope surveys in selected Brady states. The results of these surveys, as well as the results of our similarly limited-scope survey, are discussed in the following sections. ATF’s initial survey of Brady’s results covered approximately the first 3 months of implementation. In conducting the survey, ATF contacted state and local law enforcement officers representing 16 jurisdictions—2 states, 7 counties, 2 parishes, and 5 cities. 
For handgun purchase applications processed by the respective law enforcement officers within these 16 jurisdictions, ATF found that the overall denial rate was 4.7 percent. The report on ATF’s second survey of Brady’s results was issued on the first anniversary of the act’s effective date. In conducting this survey, which provided data covering the period March 1994 through January 1995, ATF contacted law enforcement officers representing 30 jurisdictions—7 states, 9 counties, 1 parish, 12 cities, and Puerto Rico. As table 2.1 shows, for handgun purchase applications processed by the respective law enforcement officers within these 30 jurisdictions, ATF found that the overall denial rate was 3.5 percent. “In its survey, ATF identified 15,506 handgun denials pursuant to the Brady Law. Of this number, 2,048 rejections were administratively appealed. Of these, 1,620 were resolved administratively, but ATF does not have information concerning the dispositions. ATF reports that two of the 15,506 denials were successfully appealed in court.” “ATF does not have any information concerning the basis for the denials or the reasons for any reversals of initial denials. As you are no doubt aware, under the Brady Law, the responsibility for determining whether an applicant seeking to purchase a pistol is eligible to do so rests with local Chief Law Enforcement Officers (CLEOs). ATF informs us that many CLEOs maintain no statistical data concerning the specific basis for a Brady denial and lack the resources for doing so. Accordingly, ATF has never requested the submission of such information and, in fact, lacks the authority to require its collection.” In conducting our survey to obtain data covering the first full year of Brady Act implementation, we contacted state and local law enforcement officers representing 20 jurisdictions—the 7 Brady states that have centralized background check procedures, 6 counties, 2 parishes, and 5 cities. 
As table 2.2 shows, for handgun purchase applications processed by the respective law enforcement officers within these 20 jurisdictions, we found that the overall denial rate was 4.3 percent. Our survey of 20 jurisdictions and ATF’s survey of 30 jurisdictions for its One-Year Progress Report (see table 2.1) include 11 jurisdictions covered in both surveys—the 7 Brady states that have centralized background check procedures; 2 counties (Dekalb County, Georgia, and Harris County, Texas); and 2 cities (Houston, Texas, and Seattle, Washington). Our data differ from ATF data, in part, because we surveyed a longer period—see, for example, the differences reported for Arizona and Kentucky. However, for Harris County and the city of Houston, we also include numerous denials for applications erroneously sent to these law enforcement agencies; these denials were not reported by ATF. Finally, the numbers of denials we report for Arkansas; Dekalb County, Georgia; and Ohio are lower than ATF’s numbers, in part, because our denial data were adjusted for successful appeals. Only 15 of the 20 jurisdictions we surveyed maintained records (some more detailed than others) showing reasons for denials. During the period covered by our survey (February 28, 1994, through February 28, 1995), the respective law enforcement officers within these 15 jurisdictions conducted background checks involving a total of 384,301 handgun purchase applications and denied 18,570, an overall denial rate of 4.8 percent. Figure 2.1 shows the denial rates across the 15 jurisdictions we contacted. Table 2.3 shows the number of denials by category for each of these 15 jurisdictions. Our review of total denials (18,570) for the 15 jurisdictions showed that 9,043, or 48.7 percent, were based on criminal history records (see table 2.3). 
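The denial-rate arithmetic used throughout this chapter is straightforward; the following sketch simply reproduces the figures reported above for the 15 record-keeping jurisdictions (the numbers are taken from the report itself, and the snippet is illustrative, not part of the original analysis):

```python
# Reproduce the report's denial-rate arithmetic for the 15 jurisdictions
# that maintained records showing reasons for denials.
applications = 384_301     # handgun purchase applications processed
denials = 18_570           # total denials
criminal_history = 9_043   # denials based on criminal history records

overall_denial_rate = denials / applications * 100
criminal_history_share = criminal_history / denials * 100

print(f"Overall denial rate: {overall_denial_rate:.1f}%")            # 4.8%
print(f"Criminal-history share of denials: {criminal_history_share:.1f}%")  # 48.7%
```

The same calculation, applied per jurisdiction, yields the jurisdiction-specific rates shown in figure 2.1.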
Of the 9,043 criminal history denials, 8,299 (91.8 percent) were for either a felony indictment; a felony arrest (with no final disposition shown, e.g., dismissal, acquittal, or conviction); a felony conviction; or an outstanding felony warrant (see table III.1). Next, we attempted to determine how many of the felony-related denials involved violent crimes—aggravated assault, murder, rape, and robbery—as defined by the FBI. However, only 4 of the 15 jurisdictions had sufficiently detailed information for this analysis. Table 2.4 provides for these jurisdictions the number of violent crimes and violent crimes as a percentage of felony-related denials, total denials, and total applications. Among the 15 jurisdictions, we found differences regarding actions taken in response to records showing a felony arrest but not showing a disposition. Law enforcement officers in 4 jurisdictions denied a total of 365 handgun purchase applications based on records showing a felony arrest but not showing a disposition (see summary table III.1). The four jurisdictions are Arkansas (table III.3); Clayton County, Georgia (table III.4); Nevada (table III.9); and Abilene, Texas (table III.12). Generally, in such situations, the law enforcement officials told us it was incumbent upon the applicants to contact the appropriate law enforcement agency and provide evidence of a purchase-qualifying resolution of the arrest. Some of the other jurisdictions do not follow the practice of making denials on the basis of felony arrest records alone. For example, an official with the South Carolina State Law Enforcement Division told us that if a purchase-disqualifying disposition cannot be determined within 5 business days, the handgun sale is allowed to proceed. 
The official added that as of the end of March 1995, the Division had only one case in which (1) the disposition of a felony charge against a prospective handgun buyer could not be determined within 5 business days, (2) the applicant was allowed to purchase a handgun, and (3) case disposition information subsequently showed that the purchase should have been denied. Law enforcement officials from the Division reportedly retrieved the handgun from the purchaser. Misdemeanor warrants accounted for 452 (2.4 percent) of the 18,570 denials (see table III.1). These 452 denials represent 5.0 percent of the 9,043 criminal history denials. Of the 15 jurisdictions providing data on reasons for handgun purchase denials, 7 denied handgun purchases on the basis of outstanding misdemeanor warrants—4 states (Arizona, Arkansas, Kentucky, and South Carolina); 1 parish (Bossier Parish, Louisiana); and 2 cities (Fort Worth and Pasadena, Texas). Three of these 7 jurisdictions accounted for 380 (84.1 percent) of the misdemeanor warrant denials—Arizona had 272 (table III.2), the city of Fort Worth had 58 (table III.13), and the city of Pasadena had 50 (table III.16). In each of these jurisdictions, law enforcement officers told us that while neither state nor local laws prohibit misdemeanants from purchasing handguns, these persons are considered fugitives from justice, a prohibited category under Brady.
In commenting on a draft of this report, Treasury and ATF said “because a person may be a fugitive from justice with respect to a misdemeanor warrant, it could not be concluded that the person was erroneously denied a handgun without checking the facts of his or her case.” Our review of total denials (18,570) for the 15 jurisdictions showed that 753, or 4.1 percent, were based on the other ineligible categories under Brady—fugitives from justice, unlawful drug users or addicts, individuals adjudicated mentally defective or committed, persons dishonorably discharged from the armed services, illegal aliens, and individuals who have renounced their U.S. citizenship (see tables 2.3 and III.1). These ineligible purchasers, sometimes referred to as the “other-than-felons” categories, are particularly difficult for CLEOs to identify. In 1990, for instance, a study sponsored by DOJ reported that few databases contain information on these categories of individuals. The lack of databases containing information on the other Brady ineligible categories restricts the ability of law enforcement officers to identify prospective handgun buyers who fall into one of these categories. For instance, law enforcement officers in 11 of the 15 jurisdictions told us that they rely solely on the national and/or state criminal history databases to obtain information on the other Brady ineligible categories. According to several officers, information concerning the other Brady ineligible categories is only coincidentally included in the criminal history databases. For example, Arkansas officials made a “mental defective” denial because criminal history records showed that an individual charged with battery and criminal property damage had been adjudicated “not guilty by reason of insanity.” Thus, while Brady specifies a number of other ineligible categories, most law enforcement officers have no way to check purchasers’ backgrounds with respect to these disqualifiers. 
In a few instances, information on these categories may be found in criminal history records. The following sections present more specifics on these disqualifying categories in the 15 jurisdictions we analyzed. Nonfelon fugitives from justice accounted for 160 (21.2 percent) of the 753 total denials in the other Brady ineligible categories (see table III.1). The 160 denials were made in 5 jurisdictions; however, the city of Houston (Texas) with 57 denials accounted for 35.6 percent of these denials (see table III.15). According to a Houston Police Department official, when background checks identify a fugitive, the information is passed on to the Department’s Fugitive Division to first verify that the warrant is still active and, if so, to serve the warrant. In the other four jurisdictions, officers told us that it is their respective agency’s policy to first confirm that the warrant is still active and, if it is, either serve it or inform the originating agency, which is then responsible for any enforcement action. Applicants classified as unlawful drug users or addicts accounted for 357 (47.4 percent) of the 753 denials in the other Brady ineligible categories (see table III.1). All 357 denials were in the Texas jurisdictions of Abilene (table III.12) and Houston (table III.15). According to law enforcement officers in these jurisdictions, the denials for unlawful drug use were based on criminal history records showing that the prospective buyers had arrests for minor drug offenses. Prospective handgun buyers classified as having been adjudicated mentally defective or committed accounted for 38 (5.0 percent) of the 753 denials in the other Brady ineligible categories (see table III.1). These 38 denials were made in 8 jurisdictions. The states of Arkansas and Nevada and the City of Houston, Texas, cumulatively denied 10 handgun purchases solely on the basis of mental problems noted in the prospective buyers’ criminal history records. 
Two counties (Clayton County, Georgia, and Harris County, Texas) denied a total of 10 handgun purchases on the basis of local court records. The state of Ohio and the city of Fort Worth, Texas, denied a total of 13 handgun purchases on the basis of state or county mental health records. For example, six handgun purchases were denied in Ohio on the basis of state mental hospital records checks (see table III.10, note b); and seven purchases were denied by the Fort Worth, Texas, Police Department on the basis of county mental health center records (see table III.13, note d). In the remaining jurisdiction, South Carolina, five denials were the result of relatives of the prospective handgun buyers contacting the state police and submitting physicians’ statements confirming that the prospective buyers previously had been committed to a mental institution (see table III.11, note e). Prospective handgun buyers classified as having been dishonorably discharged from the U.S. Armed Forces accounted for 49 (6.5 percent) of the 753 denials in the other Brady ineligible categories (see table III.1). These 49 denials were made in 6 jurisdictions—3 states (Nevada, Ohio, and South Carolina); 1 county (Harris County, Texas); and 2 cities (Houston and Pasadena, Texas). In each of these jurisdictions, law enforcement officers told us that these denials were based on criminal history records showing arrests for being absent without leave from the military. Illegal aliens accounted for 149 (19.8 percent) of the 753 denials in the other Brady ineligible categories (see table III.1). The 149 denials were made in 6 jurisdictions—2 states (Nevada and South Carolina); 1 Texas county (Harris County); and 3 Texas cities (Abilene, Fort Worth, and Houston)—on the basis of searches of criminal history records. The Houston Police Department accounted for 112 (75.2 percent) of the 149 denials (see table III.15). 
Beyond denying the handgun sales to the illegal aliens, the Houston Police Department took no other follow-up enforcement or referral action. The other three Texas jurisdictions followed this same procedure. In only the two state jurisdictions did law enforcement officers tell us that they notify the Immigration and Naturalization Service when illegal aliens are identified. In the 15 jurisdictions we analyzed, we found no denials based on renounced U.S. citizenship (see table III.1). The 1994 Crime Act added a new disqualifying category covering persons subject to a court order that “restrains such person from harassing, stalking, or threatening an intimate partner of such person or child of such intimate partner or person, or engaging in other conduct that would place an intimate partner in reasonable fear of bodily injury to the partner or child.” As discussed in chapter 1, Brady’s interim provisions require prospective handgun purchasers to certify that they are not a member of various categories prohibited from possessing or receiving a firearm. The categories contained in Brady reflect but do not reference those categories found at section 922(g) as they existed before the 1994 Crime Act. Thus, according to ATF’s Associate Chief Counsel (Firearms and Explosives), even though the 1994 Crime Act amended section 922(g), Brady itself was not amended to add the court order prohibition. The official told us that ATF had provided the Department of the Treasury with a list of legislative proposals, including a proposed technical amendment to Brady. Further, the Associate Chief Counsel told us that as a practical matter, ATF has been educating law enforcement officers about the “restraining order” disqualifying category and that applicants can be denied on this basis, even though ATF Form 5300.35 (see app. I) is awaiting modification pending passage of the technical amendment. In the 15 jurisdictions we analyzed, 145 (0.8 percent) of the 18,570 handgun purchase denials were based on the 1994 Crime Act (see table 2.3 and table III.1).
The 145 denials were made in 3 jurisdictions—1 denial in Clayton County, Georgia (table III.4); 142 denials in Kentucky (table III.6); and 2 denials in Nevada (table III.9). According to an official with the Kentucky State Police—representing the jurisdiction with 97.9 percent of the denials in this category—the 142 denials in Kentucky were based on domestic violence orders, which are similar to restraining orders but expire (under Kentucky law) after 1 year. Traffic offenses accounted for 1,413 (7.6 percent) of the 18,570 denials (see tables 2.3 and III.1). Of the 15 jurisdictions we analyzed, 2 (both in Texas) accounted for all of the 1,413 traffic-related denials. The Houston Police Department accounted for 908 (64.3 percent) of the denials (see table III.15); and the Fort Worth Police Department accounted for the remaining 505 denials, or 35.7 percent (see table III.13). In addition to the 1,413 denials in Houston and Fort Worth, 4 other jurisdictions denied handgun purchases to prospective handgun buyers who had outstanding misdemeanor warrants, of which an indeterminable number were for traffic offenses, according to officials from these jurisdictions. Local law enforcement officials told us that these denials were made because individuals with outstanding warrants (including warrants involving traffic offenses) were considered to be fugitives from justice. Denials based on administrative or other reasons accounted for 7,216 (38.9 percent) of the 18,570 handgun purchase denials in the 15 jurisdictions we analyzed (see table 2.3). These 7,216 denials were based on a variety of reasons, as table III.1 shows, but the large majority involved application forms sent to the wrong law enforcement agency. These denials were not arbitrary, however: Brady authorizes only the CLEO for the purchaser’s place of residence to approve the sale, and incomplete forms are also to be denied.
Of the 7,216 administrative or other denials, 7,012 (97.2 percent) were the result of gun dealers sending handgun purchase applications to the wrong law enforcement agency. Three Texas jurisdictions accounted for all the denials in this category—the city of Fort Worth had 434 denials (table III.13); Harris County had 2,608 denials (table III.14); and the city of Houston had 3,970 denials (table III.15). Our review of denial records at the Fort Worth Police Department indicated that many of the misdirected applications may have resulted from jurisdictional confusion. For example, we found that the vast majority of the Fort Worth Police Department’s 434 denials in this category involved individuals with addresses near but not within the incorporated limits of the city. Although the number of missent Brady forms might suggest something more than confusion or carelessness, our analyses did not show any clear patterns, except that the levels of missent forms remained relatively constant throughout the year. In June 1995, we shared our Fort Worth analyses with ATF headquarters and applicable field office officials who told us that ATF’s response to jurisdictional confusion is to disseminate clarifying information to licensed gun dealers. DOJ’s policy guidance on Brady enforcement states: “When a person falsely completes a Brady form and a timely check determines that the person is ineligible to purchase a handgun, in the discretion of the prosecutor and police, an effort may be made to arrest and prosecute the person. This may involve inviting the person to pick up the handgun and arresting the person as s/he picks it up or even staking out the dealership at which the gun is scheduled to be picked up in the case of a dangerous fugitive.
In the case where the handgun is actually transferred to a prohibited person because the criminal history data check is untimely, seeking a search and/or arrest warrant and prosecuting the individual should be considered.” “Federal prosecutors ought to pay particular attention to intelligence information known to state and local law enforcement agencies in this regard. When individuals suspected of other violent and/or drug trafficking conduct are attempting to purchase handguns and are ineligible to do so, the investigation and prosecution of such individuals ought to be regarded as a priority.” “ . . . there was no means by which . . . falsification [of handgun purchase forms] would routinely be brought to the attention of at least the U.S. Attorneys. The Brady bill, now, puts into place, with the help of the ATF working with their local police officers and law enforcement, a means by which I think we will start getting more referrals with respect to false statements on gun applications.” “[S]tatistics maintained by the Department’s Executive Office of United States’ Attorneys reflect that, since enactment of the Brady Law, a total of 162 prosecutions have been initiated in which the making of a false statement in connection with the acquisition or attempted acquisition of a firearm (18 U.S.C. Section 922(a)(6)) was the principal charge. It is not possible to determine readily the number of these prosecutions that were initiated as the result of the falsification of statements on Brady forms, as opposed to the falsification of statements on other federal firearms acquisition forms. . . . In addition, this number does not reflect cases in which charges may have been brought under Section 922(a)(6) as part of a larger prosecution involving other, possibly more serious charges, since some of the computer systems in operation in U.S. Attorneys’ offices are able to track only the lead charge.
More detailed information would require a review of the case file in each of the Section 922(a)(6) prosecutions reported by the United States Attorneys, a task that would be unduly burdensome to undertake.

“Such statistics are not a meaningful measure of the effectiveness of the Brady Law. . . . [T]he statute was not primarily intended as a prosecutive mechanism but rather as a means of keeping handguns out of the hands of convicted felons, fugitives, and other prohibited persons. From an enforcement perspective, the Brady Law fully serves its purpose when it succeeds in thwarting the acquisition of a firearm by such individuals. By that standard, the success of the Brady Law is reflected by the fact that, since its enactment, approximately 41,000 applications for the purchase of handguns have been denied.”

In response to our inquiries about referral and prosecution issues, the Acting Assistant Attorney General—who is a senior DOJ official responsible for monitoring Brady implementation—reinforced the view that the act was intended primarily to deter or prevent unauthorized individuals from obtaining handguns from federally licensed firearms dealers. DOJ has noted that because prosecutions for false statements on handgun purchase applications are inefficient and ineffective in advancing this purpose, the number of prosecutions is not a good measure of Brady’s effectiveness or usefulness. In addition, with regard to the prospect of prosecuting Brady-generated cases, the Special Counsel to the Acting Assistant Attorney General stated that no new resources were provided to U.S. Attorney Offices, which already must make resource allocation decisions to address competing demands, including the emphasis in recent years on prosecuting drug kingpins and pursuing other complex, significant cases. Similar views were expressed by ATF officials in response to our inquiries.
For instance, the Special Agent in Charge of the Firearms Enforcement Branch (ATF headquarters) told us that Brady is achieving its primary purpose of preventing felons from being able to purchase handguns from gun dealers. However, most U.S. Attorneys do not view the act as a prosecutorial tool to be used frequently, irrespective of the volume of referrals and potential cases involving falsified Brady forms.

In April 1995, ATF headquarters staff queried the agency’s field offices to obtain an estimate of the total number of Brady-related cases referred by ATF to U.S. Attorneys Offices. The resulting cumulative estimate was that as of February 1995, a total of 250 such cases had been referred. Of the 250 referrals, 217 had been declined for prosecution, according to a DOJ Special Counsel. The DOJ official added that as of April 1995, the other referrals were still being evaluated with respect to whether fuller investigations were merited. Later, we inquired again about the prosecutive status of the open referrals. The Special Agent in Charge of ATF’s Firearms Enforcement Branch told us that as of July 1995, at least seven persons nationally had been successfully prosecuted for making false statements on the Brady handgun purchase form. This official provided us the supporting details for these prosecutions, which are presented in table 2.5.

As table 2.5 shows, four federal judicial districts account for the seven Brady-related prosecutions. None of the prosecutions involved prospective gun purchasers with previous convictions for violent offenses. However, three of the cases did involve individuals who lied on the Brady handgun purchase form about drug-related felony convictions. Table 2.5 also shows that the subsequent Brady prosecutions of these individuals resulted in prison or custody sentences of 12 to 24 months.
The other four cases—related gun-trafficking cases prosecuted within the Southern District of West Virginia—involved individuals who had falsified state identification cards and the Brady handgun purchase form to portray themselves as residents of West Virginia when, in fact, they were residents of New York. In all four cases, the defendants pled guilty. Three of the four defendants were sentenced to 2 years’ probation, and the fourth was sentenced to 6 months’ home confinement and 3 years’ probation.

No comprehensive, national data existed on handgun purchase applications and denials for the first year of Brady; however, limited data from ATF’s and our surveys suggested that the denial rates were around 4 percent in selected jurisdictions analyzed. In the 15 jurisdictions we analyzed, about half of the denials were to individuals with felony or misdemeanor criminal histories. Denials based on the other Brady ineligible categories accounted for only 4.1 percent of the total denials in the 15 jurisdictions. Almost 40 percent of the total denials from our survey were because the gun dealers sent the Brady forms to the wrong CLEOs. However, all of these denials occurred in only 3 of the 15 jurisdictions, and it is unknown whether any of these purchases would have been denied if the forms had been sent to the proper CLEOs.

Although we were not able to quantify the number of Brady-related prosecutions, available information suggested that the number is relatively small nationally. DOJ views Brady as more of a deterrent than a prosecutive mechanism, and ATF stated that most cases referred by ATF field offices to U.S. Attorneys have been declined.
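The misdirected-application figures reported earlier in this chapter can be cross-checked with simple arithmetic. The sketch below is purely illustrative; every number in it is taken directly from this report (the jurisdiction counts from tables III.13 through III.15 and the 7,216 total of administrative or other denials). It verifies that the three Texas jurisdictions account for the full 7,012 denials in this category and that this category represents 97.2 percent of administrative or other denials.

```python
# Cross-check of the misdirected-application denial counts cited in this
# chapter. All figures come from the report itself; the script only verifies
# that the jurisdiction-level counts sum to the category total and computes
# that category's share of the 7,216 administrative or other denials.

misdirected = {
    "City of Fort Worth": 434,   # table III.13
    "Harris County": 2608,       # table III.14
    "City of Houston": 3970,     # table III.15
}

total_admin_other = 7216  # all administrative or other denials

category_total = sum(misdirected.values())
share = 100 * category_total / total_admin_other

print(category_total)          # 7012
print(f"{share:.1f} percent")  # 97.2 percent
```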
In its written comments on a draft of this report, DOJ provided updated information on its efforts to develop databases for identifying nonfelony classes of ineligible purchasers—fugitives, unlawful drug users or addicts, individuals adjudicated mentally defective or committed, persons dishonorably discharged, illegal aliens, and persons who have renounced U.S. citizenship. Our work did not specifically address the status of efforts to develop these databases, which will be important components of the national instant background check system under the phase II permanent provisions (effective November 30, 1998) of Brady.

DOJ also provided clarifying information regarding arrests and prosecutions for falsely completing the handgun purchase application form. DOJ’s view is that Brady’s main purpose is to prevent certain categories of persons from obtaining handguns from federally licensed gun dealers. Given this main purpose, DOJ said that our report devotes too much attention to evaluating the success of Brady in generating prosecutions for falsely completing the Brady handgun purchase form, which is not a good measure of Brady’s effectiveness or usefulness. We agree with DOJ that Brady’s main purpose is to prevent ineligible persons from purchasing handguns from federally licensed dealers. Most of this chapter—and all of appendix III—deals with this topic. On the other hand, one objective of our review was to determine the extent to which handgun purchase denials had resulted in federal follow-up enforcement actions. In this regard, we believe the prosecution-related information in our report is relevant, accurate, and presented in a balanced manner.

Treasury and ATF provided a combined set of comments on the draft. Treasury stated that it is erroneous to treat Brady forms sent to the wrong CLEO as denials. We treated them as denials because the CLEOs in our review treated them as denials.
We agree with Treasury that “simply because the notice was sent to the wrong CLEO does not mean that the purchaser did not receive the handgun.” Treasury also commented that even though certain handgun transactions are not subject to Brady’s provisions, the law is nonetheless an important first step in reducing illegal transfers to private individuals.

Although there is widespread support for Brady in the law enforcement community, several legal challenges and unresolved questions about federal authority to penalize or redesignate nonperforming CLEOs have hampered enforcement of the act in some jurisdictions. Several sheriffs and a sheriff’s association have challenged the constitutionality of Brady’s interim background check provision, and most won their cases at the federal district court level. However, one of the three federal appeals courts considering the constitutionality of Brady has held that the act is constitutional. DOJ has determined that it lacks the authority to penalize or redesignate CLEOs who choose not to check backgrounds of handgun purchasers. DOJ has noted, however, that injunctive relief, for example, may be an option to compel local law enforcement officials to fulfill their responsibilities under the act. In two jurisdictions where CLEOs had not performed presale background checks, ATF’s National Tracing Center data did not show any crime-related handgun purchases from licensed dealers.

Generally, the law enforcement community has strongly supported Brady. For example, a leading proponent of the act’s provisions is the International Association of Chiefs of Police. During the extended debate leading to eventual passage of Brady, the Association expressed support for a 5-day waiting period to allow law enforcement officers an opportunity to conduct background checks on all prospective handgun purchasers.
Despite the generally widespread support of the law enforcement community, eight sheriffs and one sheriff’s association have initiated court cases challenging the constitutionality of Brady, particularly the phase I provision directing state or local law enforcement officers to make a reasonable effort to conduct background checks. The first eight cases are separate filings by individual sheriffs—each having jurisdictional responsibility for one county or parish in his respective state—and the ninth and most recent case was filed by the Wyoming Sheriff’s Association. As of July 1995, federal district courts had rendered decisions in six of the nine cases, and all six cases were on appeal to federal circuit courts. In five of the six decided cases, the courts have held Brady’s phase I background check provision to be unconstitutional as a violation of the Tenth Amendment. The first decision in the several challenges to Brady was Printz v. United States. In that May 1994 decision, for example, the Federal District Court for Montana ruled that the phase I background check provision substantially commandeers state executive officers and indirectly commandeers the legislative processes of the states to administer an unfunded federal program. The court observed that the CLEOs are indirectly required to allocate their resources to implement Brady instead of using those resources to address problems important to their constituents. In so ruling, the court rejected the federal government’s argument that the phase I background check provision was discretionary. The only federal district court ruling to date to hold that Brady’s phase I background check provision is consistent with the Tenth Amendment involves the case filed by a county sheriff in Texas (Koog v. United States). 
In that case, also decided in May 1994, the Federal District Court for the Western District of Texas reasoned that Brady confers great discretion on the CLEO to determine what is a reasonable background search and that no search may be required if the circumstances dictate. The court concluded that Brady imposes only minimal duties on CLEOs. From the government’s perspective, five of the six district court decisions were adverse rulings in that the phase I background check provision was deemed unconstitutional; therefore, DOJ has appealed the decisions. Even in these decisions, however, the courts found the remainder of Brady’s provisions severable and that they, therefore, remained operative. The Vermont court, for example, noted that CLEOs could perform background checks if they voluntarily chose to do so. In September 1995, the U.S. Court of Appeals for the Ninth Circuit upheld the constitutionality of Brady, saying the federal government can require state and local law enforcement agencies to check the records of prospective handgun buyers. The court reasoned that Brady’s provision that law enforcement agencies “make a reasonable effort to ascertain” the legality of a handgun purchase is a minimal burden that the federal government can impose on state and local law enforcement agencies. The court accordingly reversed the judgments of the Arizona and Montana district courts, which had held Brady unconstitutional as a violation of the Tenth Amendment (see table 3.1). At the time of our review, no background checks had been conducted in two of the nine jurisdictions where the CLEOs had challenged Brady. However, indications were that background checks had been conducted in the other seven jurisdictions. 
The situation regarding each of these seven jurisdictions was as follows:
• While district court decisions were pending in three of the cases, Brady requirements were still being implemented by the plaintiff sheriffs in the respective jurisdictions—Otero County, New Mexico; Alamance County, North Carolina; and the counties in Wyoming.
• Although the sheriff of Forrest County, Mississippi, was relieved of the requirement to conduct the Brady background checks by the district court’s ruling, he said he continued to perform the background checks so that eligible purchasers do not have to wait 5 business days.
• Also, as noted above, the sheriff of Val Verde County, Texas, lost his case in district court and, thus, was still conducting background checks.
• State-level agencies assumed responsibility for conducting background checks in Graham County, Arizona, and Orange County, Vermont. Effective October 1, 1994, the Arizona Department of Public Safety assumed a centralized role in conducting background checks for all residents of that state. In Vermont, when the Orange County sheriff refused to conduct background checks, the Vermont Department of Public Safety voluntarily assumed this responsibility in July 1994.

On the other hand, even though Brady has been in effect since February 28, 1994, indications were that no background checks on handgun purchasers had been conducted in the other two jurisdictions—Iberia Parish, Louisiana, and Ravalli County, Montana. The following sections provide more details about the situations in these two jurisdictions.

In March 1995, we contacted a Group Supervisor in ATF’s New Orleans Area Office, whose geographic operating responsibilities include Iberia Parish, Louisiana.
According to the ATF Group Supervisor:
• After passage of Brady in November 1993, officials from the Louisiana Attorney General’s Office and the Louisiana State Police met to determine which law enforcement agency or agencies would be designated to perform background checks of prospective purchasers of handguns. The Louisiana State Police officials said their agency was not interested in serving as the CLEO for implementing Brady Act background checks. Thus, the Attorney General’s Office and the State Police officials agreed that the sheriff of each parish should serve as CLEO.
• Shortly thereafter, the Iberia Parish sheriff told dealers and ATF that he would not be performing background checks because he had insufficient resources to do so. In early 1994, ATF staff visited the sheriff to discuss his decision not to perform Brady background checks, but the sheriff still insisted that he had insufficient resources and would not be performing background checks.

The ATF Group Supervisor told us that ATF had no authority to designate or require another law enforcement agency to perform the background checks in Iberia Parish and that as of March 1995, no agency had volunteered to take on the added responsibility. Later that month, we spoke with the sheriff of Iberia Parish. He told us that his office had never conducted any Brady background checks and that he did not know how many, if any, Brady forms had been received by his office.

In February 1995, we contacted an inspector in ATF’s Portland Area Office, whose geographic responsibilities include Ravalli County, Montana. According to the ATF inspector:
• ATF’s Portland Area Office is staffed with only eight inspectors but is responsible for four states, one of which is Montana. Generally, inspectors spend most of their time on higher priority efforts and have no time for inspections related to Brady Act implementation.
• Thus, ATF staff do not know whether the Ravalli County sheriff is conducting (or has ever conducted) any Brady background checks.

In August 1995, we contacted the Sheriff of Ravalli County, and he told us he had never conducted Brady background checks and had no plans to do so.

DOJ’s Office of Legal Counsel has interpreted Brady’s criminal penalty provisions as inapplicable to state or local law enforcement officers in the performance of their duties under the act and has concluded that the government, therefore, lacks the authority to prosecute such officers for violations of the act. A majority of the district courts considering the issue have either recognized or endorsed this interpretation. Moreover, responsible ATF and DOJ officials told us that neither Treasury nor DOJ has authority to redesignate CLEOs in situations where the initially designated CLEOs fail to perform their expected duties. The Office of Legal Counsel’s opinion stated:

“The history of the Act indicates that Congress did not envision its criminal sanctions applying to CLEOs.

“This reasoning is reinforced by the great solicitude paid to law enforcement officials in other provisions of the Act. It would be incongruous to insulate the CLEO against liability for damages . . . for providing erroneous information that prevents a sale and then turn around and subject him or her to criminal fine or imprisonment for failure to perform ministerial acts. Our conclusion is further supported by the impracticality, if not impossibility, of prosecuting a chief law enforcement officer for failing to make a ‘reasonable effort.’ The use of the term ‘reasonable effort’ reflects Congress’ apparent intent to vest discretion in CLEOs by providing a flexible statutory requirement. This elasticity, though common in civil statutes, is unusual in criminal laws because it does not clearly define a punishable act.
It would be difficult to prosecute a CLEO for failing to make a ‘reasonable effort’, and such prosecution could be subject to a Fifth Amendment due process challenge. In light of the fact that applying criminal penalties to the ‘reasonable effort’ requirement would be both unusual and arguably unconstitutional, we find it difficult to believe that Congress intended the ‘reasonable effort’ to be criminally enforceable.”

In summary, DOJ’s Office of Legal Counsel concluded that 18 U.S.C. Section 924(a)(5) does not apply to state officials, and the U.S. government, therefore, lacks the authority to prosecute state or local law enforcement officials for not conducting Brady background checks. This position has been recognized and endorsed by several of the district court decisions discussed above. For example, in determining the Forrest County, Mississippi, sheriff’s standing to sue, the federal district court noted that it believed the Department of Justice “is correct in its interpretation” of Brady’s penalty provisions. DOJ has noted, however, that injunctive relief may be an option. At the time of our review, DOJ had not sought injunctive relief.

Under Brady, ATF has no specific authority to designate alternate CLEOs for conducting background checks. In our follow-up inquiries at ATF and DOJ headquarters, responsible officials told us that neither Treasury nor DOJ has authority to redesignate CLEOs when the initially designated CLEOs choose not to perform background checks. Moreover, even regarding the initial designations of CLEOs for purposes of Brady, federal agencies had no statutory authority and also played no substantive role in the process, except for disseminating guidance and encouraging cooperation in implementing the new law. Rather, state and local officials were expected to determine who would be designated as CLEOs for purposes of conducting presale background checks.
We tried to determine whether there had been any negative effects resulting from the absence of presale background checks of handgun purchasers residing in the two jurisdictions discussed above—Iberia Parish, Louisiana, and Ravalli County, Montana. We wanted to determine, for example, whether any gun-related crimes had been committed—with a handgun purchased on or after February 28, 1994 (the effective date of Brady)—by any resident of these jurisdictions and, if so, whether the purchaser had a criminal history or other disqualifier identifiable by a routine background check.

In response to our suggestion, ATF’s National Tracing Center performed a computerized search of tracing requests received from law enforcement agencies. The search was designed to determine if any of the tracing requests involved crime scene handguns that had been purchased in either of the two jurisdictions after Brady’s effective date. In structuring the computerized search, the National Tracing Center focused on all federally licensed dealers with postal address ZIP codes applicable to the two jurisdictions.

As of July 25, 1995, the Center’s search revealed that a total of seven crime-tainted handguns had been purchased from dealers within the two jurisdictions. Six of the handguns had been purchased in Iberia Parish and one in Ravalli County. However, all seven purchases were made before Brady went into effect on February 28, 1994. Thus, this search of tracing requests did not specifically identify any crime-related effects stemming from the lack of background checks in the two jurisdictions. On the other hand, since no background checks had been conducted in these two jurisdictions, there is no assurance that ineligible persons did not purchase handguns from licensed dealers. Moreover, the Tracing Center’s search covered only one jurisdiction in each state.
Thus, the search did not cover the possibility that residents of one county or parish may have purchased handguns in another county or parish in their state.

The effects of the legal challenges to Brady are not entirely clear because the cases are being appealed. The federal district courts have ruled in five of six cases decided as of July 1995 that CLEOs cannot be required to perform background checks. However, the decisions found the remainder of Brady’s provisions severable and that they, therefore, remained operative. In September 1995, the U.S. Court of Appeals for the Ninth Circuit upheld the constitutionality of Brady, reversing the judgments of the Arizona and Montana district courts. The appeals court reasoned that the phase I background check provision is a minimal burden that the federal government can impose on state and local law enforcement agencies.

Background checks were being conducted in seven of the nine jurisdictions where CLEOs had challenged Brady. In the other two jurisdictions, no checks were being conducted. We did not determine whether this lack of background checks resulted in handgun purchases by ineligible individuals.

In its written comments, DOJ said the fact that local law enforcement officials are not subject to criminal prosecution does not mean there is no way to compel them to fulfill their responsibilities under Brady. DOJ commented that injunctive relief may be an option. We have added this point to our discussion.

In their combined written comments, Treasury and ATF suggested that we add language indicating that the federal district court decisions were limited to the plaintiff sheriffs only and that other CLEOs in surrounding jurisdictions were still subject to Brady’s phase I background check provision. While the McGee and Romero district court decisions were limited to the plaintiff sheriffs, the Mack, Printz, and Frank district court decisions did not contain such a limitation.
For example, the Frank decision enjoined “the United States from enforcing that provision in the District of Vermont,” the Mack decision ordered “that defendant United States of America and its agents are permanently enjoined from further enforcing 18 U.S.C. § 922(s)(2),” and the Printz decision enjoined “the United States from enforcing said provision.” Also, Treasury and ATF commented that the number of jurisdictions affected by the legal challenges to Brady is very small compared to overall enforcement of the act. Further, we and the agencies pointed out that the federal appeals court decision overturned two of the five district court decisions against Brady, and the appeals were pending in the remaining three cases as of October 1995.

GAO reviewed the implementation of the Brady Handgun Violence Prevention Act, focusing on the: (1) extent to which the waiting period and background checks required for handgun purchases have prevented ineligible persons from legally purchasing handguns; (2) extent to which denials have resulted in follow-up enforcement actions against those submitting false purchase information; and (3) effects of various legal challenges to the Brady Act.
GAO found that: (1) of the law enforcement agencies surveyed, handguns were denied to about 4.3 percent of applicants; (2) application denials varied by jurisdiction because law enforcement officials did not use standardized criteria for their decisions; (3) most denials resulted from misdemeanor warrants or administrative reasons, such as gun dealers sending applications to the wrong law enforcement agency; (4) in four jurisdictions, 4.9 percent of denials resulted from convictions or indictments for violent crimes, such as aggravated assault, murder, rape, or robbery; (5) most law enforcement officers relied solely on criminal history records in conducting their background checks because no other information sources were available, but some officers routinely checked for mental history disqualifications; (6) the number of Brady Act prosecutions was relatively small due to the low priority of follow-up enforcement actions at the Department of Justice (DOJ); (7) federal officials believe that the Brady Act is achieving its primary goal of preventing felons from legally purchasing handguns; (8) the effects of legal challenges to the Brady Act will not be known until all appeals are decided; and (9) DOJ believes that it lacks the authority to take action against law enforcement officers who do not conduct background checks.
Our report on the latest round of ASP testing found that DHS increased the rigor of ASP testing in comparison with previous tests and that a particular area of improvement was in the performance testing at the Nevada Test Site, where DNDO compared the capability of ASP and current-generation equipment to detect and identify nuclear and radiological materials. For example, unlike in prior tests, the plan for the 2008 performance test stipulated that there would be no system contractor involvement in test execution. Such improvements addressed concerns we previously raised about the potential for bias and provided credibility to the results. Nevertheless, based on the following factors, we continue to question whether the benefits of the new portal monitors justify the high cost:

• The DHS criteria for a significant increase in operational effectiveness. Our chief concern with the criteria is that they require a marginal improvement over current-generation portal monitors in the detection of certain weapons-usable nuclear materials when ASPs are deployed for primary screening. DNDO considers detection of such materials to be a key limitation of current-generation portal monitors. We are particularly concerned about the marginal improvement required of ASPs because the detection threshold for the current-generation portal monitors does not specify a level of radiation shielding that smugglers could realistically use. DOE and national laboratory officials told us that DOE’s threat guidance used to set the current detection threshold is based not on an analysis of the capabilities of potential smugglers to take effective shielding measures but rather on the limited sensitivity of PVTs to detect anything more than certain lightly shielded nuclear materials. DNDO officials acknowledge that both the new and current-generation portal monitors are capable of detecting certain nuclear materials only when unshielded or lightly shielded.
The marginal improvement in detection of such materials required of ASPs is particularly notable given that DNDO has not completed efforts to fine-tune PVTs’ software and thereby improve sensitivity to nuclear materials. DNDO officials expect they can achieve small improvements in sensitivity, but DNDO has not yet funded efforts to fine-tune PVTs’ software. In contrast to the marginal improvement required in detection of certain nuclear materials, the primary screening requirement to reduce the rate of innocent alarms could result in hundreds of fewer secondary screenings per day, thereby reducing CBP’s workload and delays to commerce. In addition, the secondary screening criteria, which require ASPs to reduce the probability of misidentifying special nuclear material by one-half, address the inability of relatively small handheld devices to consistently locate and identify potential threats in large cargo containers.

• Preliminary results of performance testing and field validation. The preliminary results presented to us by DNDO are mixed, particularly in the capability of ASPs used for primary screening to detect certain shielded nuclear materials. Preliminary results show that the new portal monitors detected certain nuclear materials better than PVTs when shielding approximated DOE threat guidance, which is based on light shielding. In contrast, differences in system performance were less notable when shielding was slightly increased or decreased: Both the PVTs and ASPs were frequently able to detect certain nuclear materials when shielding was below threat guidance, and both systems had difficulty detecting such materials when shielding was somewhat greater than threat guidance. With regard to secondary screening, ASPs performed better than handheld devices in identification of threats when masked by naturally occurring radioactive material.
However, differences in the ability to identify certain shielded nuclear materials depended on the level of shielding, with increasing levels appearing to reduce any ASP advantages over the handheld identification devices. Other phases of testing uncovered multiple problems in meeting requirements for successfully integrating the new technology into operations at ports of entry. Of the two ASP vendors participating in the 2008 round of testing, one has fallen behind due to severe problems encountered during testing of ASPs’ readiness to be integrated into operations at ports of entry (“integration testing”); the problems may require that the vendor redo previous test phases to be considered for certification. The other vendor’s system completed integration testing, but CBP suspended field validation after 2 weeks because of serious performance problems resulting in an overall increase in the number of referrals for secondary screening compared with existing equipment.

• DNDO’s plans for computer simulations. DNDO does not plan to complete injection studies—computer simulations for testing the response of ASPs and PVTs to simulated threat objects concealed in cargo containers—prior to the Secretary of Homeland Security’s decision on certification even though delays to the ASP test schedule have allowed more time to conduct the studies. According to DNDO officials, injection studies address the inability of performance testing to replicate the wide variety of cargo coming into the United States and the inability to place special nuclear material and other threat objects in cargo during field validation. DNDO had earlier indicated that injection studies could provide information comparing the performance of the two systems as part of the certification process for both primary and secondary screening. However, DNDO subsequently decided that performance testing would provide sufficient information to support a decision on ASP certification.
DNDO officials said they would instead use injection studies to support effective deployment of the new portal monitors. Lack of an updated cost-benefit analysis. DNDO has not yet updated its cost-benefit analysis to take into account the results of the latest round of ASP testing. An updated analysis that takes into account the results from the latest round of testing, including injection studies, might show that DNDO’s plan to replace existing equipment with ASPs is not justified, particularly given the marginal improvement in detection of certain nuclear materials required of ASPs and the potential to improve the current-generation portal monitors’ sensitivity to nuclear materials, most likely at a lower cost. DNDO officials said they are currently updating the ASP cost-benefit analysis and plan to complete it prior to a decision on certification by the Secretary of Homeland Security. Our report recommended that the Secretary of Homeland Security direct DNDO to (1) assess whether ASPs meet the criteria for a significant increase in operational effectiveness based on a valid comparison with PVTs’ full performance potential and (2) revise the schedule for ASP testing and certification to allow sufficient time for review and analysis of results from the final phases of testing and completion of all tests, including injection studies. We further recommended that, if ASPs are certified, the Secretary direct DNDO to develop an initial deployment plan that allows CBP to uncover and resolve any additional problems not identified through testing before proceeding to full-scale deployment. DHS agreed to a phased deployment that should allow time to uncover ASP problems but disagreed with GAO’s other recommendations, which we continue to believe remain valid. 
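The workload impact of the primary screening criterion discussed earlier (a lower rate of innocent alarms means fewer referrals to secondary screening) comes down to simple arithmetic. The sketch below is purely illustrative: the daily container volume and alarm rates are hypothetical placeholders, since the testimony cites only the resulting order of magnitude ("hundreds of fewer secondary screenings per day").

```python
def secondary_referrals_saved(containers_per_day, pvt_innocent_alarm_rate,
                              asp_innocent_alarm_rate):
    """Estimate how many fewer secondary screenings per day result from a
    lower innocent-alarm rate in primary screening.

    All inputs are illustrative; actual rates are not given in the testimony.
    """
    pvt_referrals = containers_per_day * pvt_innocent_alarm_rate
    asp_referrals = containers_per_day * asp_innocent_alarm_rate
    return pvt_referrals - asp_referrals

# Hypothetical example: at 50,000 containers a day, an innocent-alarm rate
# falling from 1.0% to 0.2% would avoid 400 secondary screenings daily,
# consistent in scale with the "hundreds of fewer" figure cited.
print(secondary_referrals_saved(50_000, 0.010, 0.002))  # 400.0
```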
The challenges DNDO has faced in developing and testing ASPs illustrate the importance of following existing DHS policies as well as best practices for investments in complex homeland security acquisitions and for testing of new technologies. The DHS investment review process calls for executive decision making at key points in an investment’s life cycle and includes many acquisition best practices that, if applied consistently, could help increase the chances for successful outcomes. However, we reported in November 2008 that, for the period from fiscal year 2004 through the second quarter of fiscal year 2008, DHS had not effectively implemented or adhered to its investment review process due to a lack of senior management officials’ involvement as well as limited monitoring and resources. In particular, of DHS’s 48 major investments requiring milestone and annual reviews under the department’s investment review policy, 45 were not assessed in accordance with this policy. In addition, many major investments, including DNDO’s ASP program, had not met the department’s requirements for basic acquisition documents necessary to inform the investment review process. As a result, DHS had not consistently provided the oversight needed to identify and address cost, schedule, and performance problems in its major investments. Among other things, our November 2008 report recommended that the Secretary of Homeland Security direct component heads, such as the Director of DNDO, to ensure that the components have established processes to manage major investments consistent with departmental policies. DHS generally concurred with our recommendations, and we noted that DHS had begun several efforts to address shortcomings in the investment review process identified in our report, including issuing an interim directive requiring DHS components to align their internal policies and procedures by the end of the third quarter of fiscal year 2009. 
In January 2009, DHS issued a memorandum instructing component heads to create acquisition executives in their organizations to be responsible for management and oversight of component acquisition processes. If fully implemented, these steps should help ensure that DHS components have established processes to manage major investments. Based on our body of work on ASP testing, one of the primary lessons to be learned is to avoid the pitfalls in testing that stem from a rush to procure new technologies. GAO has previously reported on the negative consequences of pressures imposed by closely linking testing and development programs with decisions to procure and deploy new technologies, including the creation of incentives to postpone difficult tests and limit open communication about test results. We found that testing programs designed to validate a product’s performance against increasing standards for different stages in product development are a best practice for acquisition strategies for new technologies. In the case of ASPs, the push to replace existing equipment with the new portal monitors led to a testing program that until recently lacked the necessary rigor. Even for the most recent round of testing, DNDO’s schedule consistently underestimated the time required to conduct tests, resolve problems uncovered during testing, and complete key documents, including final test reports. In addition, DNDO’s original working schedule did not anticipate the time required to update its cost-benefit analysis to take into account the latest test results. The schedule anticipated completion of testing in mid-September 2008 and the DHS Secretary’s decision on ASP certification between September and November 2008. However, testing is still not completed, and DNDO took months longer than anticipated to complete the final report on performance testing. 
As previously mentioned, a number of aspects of the latest round of ASP testing increased its rigor in comparison with earlier rounds and, if applied more broadly, could strengthen DHS's testing of other advanced technologies. Key aspects included the following: Criteria for ensuring test requirements are met. The test and evaluation master plan established criteria requiring that the ASPs meet certain requirements before starting or completing any test phase. For example, the plan required that ASPs have no critical or severe issues rendering them completely unusable or impairing their function. The criteria provided a formal means to ensure that ASPs met certain basic requirements prior to the start of each phase of testing. DNDO and CBP adhered to the criteria even though doing so resulted in integration testing taking longer than anticipated and delaying the start of field validation. Participation of the technology end user. The participation of CBP (the end user of the new portal monitors) provided an independent check, within DHS, of DNDO's efforts to develop and test the new portal monitors. For example, CBP added a final requirement to integration testing before proceeding to field validation to demonstrate ASPs' ability to operate for 40 hours without additional problems and thereby provide for a productive field validation. In addition, the participation of CBP officers in the 2008 round of performance testing allowed DNDO to adhere more closely than in previous tests to CBP's standard operating procedure for conducting a secondary inspection using the handheld identification devices, thereby providing for an objective test. Participation of an independent test authority. 
The DHS Science and Technology Directorate, which is responsible for developing and implementing the department’s test and evaluation policies and standards, will have the lead role in the final phase of ASP testing and thereby provide an additional independent check on testing efforts. The Science and Technology Directorate identified two critical questions, related to ASPs’ operational effectiveness (i.e., detection and identification of threats) and suitability (e.g., reliability, maintainability, and supportability), and drafted its own test plan to address those questions. Mr. Chairman, this completes my prepared statement. I would be happy to respond to any questions that you or other Members of the Subcommittee may have at this time. For further information about this testimony, please contact me at (202) 512-3841 or [email protected]. Ned Woodward (Assistant Director), Joseph Cook, and Kevin Tarmann made key contributions to this testimony. Dr. Timothy Persons (Chief Scientist), James Ashley, Steve Caldwell, John Hutton, Omari Norman, Alison O’Neill, Amelia Shachoy, and Rebecca Shea also made important contributions. Combating Nuclear Smuggling: DHS Has Made Progress Deploying Radiation Detection Equipment at U.S. Ports-of-Entry, but Concerns Remain. GAO-06-389. Washington, D.C.: March 22, 2006. Key findings. Prototypes of advanced spectroscopic portals (ASP) were expected to be significantly more expensive than current-generation portal monitors but had not been shown to be more effective. For example, Domestic Nuclear Detection Office (DNDO) officials’ preliminary analysis of 10 ASPs tested at the Nevada Test Site found that the new portal monitors outperformed current-generation equipment in detecting numerous small, medium-size, and threatlike radioactive objects and were able to identify and dismiss most naturally occurring radioactive material. 
However, the detection capabilities of both types of portal monitors converged as the amount of source material decreased. Recommendations. We recommended that the Secretary of Homeland Security work with the Director of DNDO to analyze the benefits and costs of deploying ASPs before any of the new equipment is purchased to determine whether any additional detection capability is worth the additional cost. We also recommended that the total program cost estimate for the radiation portal monitor project be revised after completion of the cost-benefit analysis. Combating Nuclear Smuggling: DHS’s Cost-Benefit Analysis to Support the Purchase of New Radiation Detection Portal Monitors Was Not Based on Available Performance Data and Did Not Fully Evaluate All the Monitors’ Costs and Benefits. GAO-07-133R. Washington, D.C.: October 17, 2006. Combating Nuclear Smuggling: DHS’s Decision to Procure and Deploy the Next Generation of Radiation Detection Equipment Is Not Supported by Its Cost-Benefit Analysis. GAO-07-581T. Washington, D.C.: March 14, 2007. Key findings. DNDO’s cost-benefit analysis issued in response to our March 2006 recommendation did not provide a sound analytical basis for DNDO’s decision to purchase and deploy ASPs. We identified a number of problems with the analysis of both the performance of the new portal monitors and the costs. With regard to performance, DNDO did not use the results of its own tests and instead relied on assumptions of the new technology’s anticipated performance level. In addition, the analysis focused on identifying highly enriched uranium (HEU) and did not consider how well the new portal monitors can correctly detect or identify other dangerous radiological or nuclear materials. With regard to costs, DNDO did not follow the DHS guidelines for performing cost-benefit analyses and used questionable assumptions about the procurement costs of portal monitor technology. Recommendations. 
We recommended that DHS and DNDO conduct a new cost-benefit analysis using sound analytical methods, including actual performance data and a complete accounting of all major costs and benefits as required by DHS guidelines, and that DNDO conduct realistic testing for both ASPs and current-generation portal monitors. Combating Nuclear Smuggling: DNDO Has Not Yet Collected Most of the National Laboratories’ Test Results on Radiation Portal Monitors in Support of DNDO’s Testing and Development Program. GAO-07-347R. Washington, D.C.: March 9, 2007. Key findings. DNDO had not collected a comprehensive inventory of testing information on current-generation portal monitors. Such information, if collected and used, could improve DNDO’s understanding of how well portal monitors detect different radiological and nuclear materials under varying conditions. In turn, this understanding would assist DNDO’s future testing, development, deployment, and purchases of portal monitors. Recommendations. We recommended that the Secretary of Homeland Security, working with the Director of DNDO, collect reports concerning all of the testing of current-generation portal monitors and review the test reports in order to develop an information database on how the portal monitors perform in both laboratory and field tests on a variety of indicators, such as their ability to detect specific radiological and nuclear materials. Combating Nuclear Smuggling: Additional Actions Needed to Ensure Adequate Testing of Next Generation Radiation Detection Equipment. GAO-07-1247T. Washington, D.C.: September 18, 2007. Key findings. We found that tests conducted by DNDO in early 2007 were not an objective and rigorous assessment of the ASPs’ capabilities. 
Specifically, we raised concerns about DNDO using biased test methods that enhanced the apparent performance of ASPs; not testing the limitations of ASPs’ detection capabilities—for example, by not using a sufficient amount of the type of materials that would mask or hide dangerous sources and that ASPs would likely encounter at ports of entry; and not using a critical Customs and Border Protection (CBP) standard operating procedure that is fundamental to the performance of handheld radiation detectors in the field. Recommendations. We recommended that the Secretary of Homeland Security delay Secretarial certification and full-scale production decisions on ASPs until all relevant tests and studies had been completed and limitations to tests and studies had been identified and addressed. We further recommended that DHS determine the need for additional testing in cooperation with CBP and other stakeholders and, if additional testing was needed, that the Secretary of DHS appoint an independent group within DHS to conduct objective, comprehensive, and transparent testing that realistically demonstrates the capabilities and limitations of ASPs. Combating Nuclear Smuggling: DHS’s Program to Procure and Deploy Advanced Radiation Detection Portal Monitors Is Likely to Exceed the Department’s Previous Cost Estimates. GAO-08-1108R. Washington, D.C.: September 22, 2008. Key findings. Our independent cost estimate suggested that from 2007 through 2017 the total cost of DNDO’s 2006 project execution plan (the most recent official documentation of the program to equip U.S. ports of entry with radiation detection equipment) would likely be about $3.1 billion but could range from $2.6 billion to $3.8 billion. In contrast, we found that DNDO’s cost estimate of $2.1 billion was unreliable because it omitted major project costs, such as elements of the ASPs’ life cycle, and relied on a flawed methodology. 
DNDO officials told us that the agency was no longer following the 2006 project execution plan and that the scope of the agency's ASP deployment strategy had been reduced to only the standard cargo portal monitor. Our analysis of DNDO's summary information outlining its scaled-back plan indicated that the total cost to deploy standard cargo portals over the period 2008 through 2017 would be about $2 billion but could range from $1.7 billion to $2.3 billion. Agency officials acknowledged that the program requirements that would have been fulfilled by the discontinued ASPs remained valid, including screening rail cars and airport cargo, but the agency had no plans for how such screening would be accomplished. Recommendations. We recommended that the Secretary of Homeland Security direct the Director of DNDO to work with CBP to update the project execution plan to guide the entire radiation detection program at U.S. ports of entry, revise the estimate of the program's cost and ensure that the estimate considers all of the costs associated with its project execution plan, and communicate the revised estimate to Congress so that it is fully apprised of the program's scope and funding requirements. Combating Nuclear Smuggling: DHS Needs to Consider the Full Costs and Complete All Tests Prior to Making a Decision on Whether to Purchase Advanced Portal Monitors. GAO-08-1178T. Washington, D.C.: September 25, 2008. Key findings. In preliminary observations of the 2008 round of ASP testing, we found that DNDO had made progress in addressing a number of problems we identified in previous rounds of ASP testing. However, the DHS criteria for a significant increase in operational effectiveness appeared to set a low bar for improvement—for example, by requiring ASPs to perform at least as well as current-generation equipment when nuclear material is present in cargo but not specifying an actual improvement. 
In addition, the ASP certification schedule did not allow for completion of computer simulations that could provide useful data on ASP capabilities prior to the Secretary’s decision on certification. Finally, we questioned the replacement of current-generation equipment with ASPs until DNDO demonstrates that any additional increase in security would be worth the ASPs’ much higher cost. Combating Nuclear Smuggling: DHS’s Phase 3 Test Report on Advanced Portal Monitors Does Not Fully Disclose the Limitations of the Test Results. GAO-08-979. Washington, D.C.: September 30, 2008. Key findings. DNDO’s report on the second group of ASP tests in 2007 (the Phase 3 tests) did not appropriately state test limitations. As a result, the report did not accurately depict the results and could potentially be misleading. The purpose of the Phase 3 tests was to conduct a limited number of test runs in order to identify areas in which the ASP software needed improvement. While aspects of the Phase 3 report addressed this purpose, the preponderance of the report went beyond the test’s original purpose and made comparisons of the performance of the ASPs with one another or with currently deployed portal monitors. We found that it would not be appropriate to use the Phase 3 test report in determining whether the ASPs represent a significant improvement over currently deployed radiation equipment because the limited number of test runs did not support many of the comparisons of ASP performance made in the report. Recommendations. We recommended that the Secretary of DHS use the results of the Phase 3 tests solely for the purposes for which they were intended—to identify areas needing improvement—and not as a justification for certifying whether the ASPs warrant full-scale production. 
If the Secretary intends to consider the results of the Phase 3 tests in making a certification decision regarding ASPs, we further recommended that the Secretary direct the Director of DNDO to revise and clarify the Phase 3 test report to more fully disclose and articulate the limitations present in the Phase 3 tests and clearly state which insights from the Phase 3 report are factored into any decision regarding the certification that ASPs demonstrate a significant increase in operational effectiveness. Finally, we recommended that the Secretary direct the Director of DNDO to take steps to ensure that any limitations associated with the 2008 round of testing are properly disclosed when the results are reported. Combating Nuclear Smuggling: DHS Improved Testing of Advanced Radiation Detection Portal Monitors, but Preliminary Results Show Limits of the New Technology. GAO-09-655. Washington, D.C.: May 21, 2009. Key findings. We reported that the DHS criteria for a significant increase in operational effectiveness require a large reduction in innocent alarms but a marginal improvement in the detection of certain weapons-usable nuclear materials. In addition, the criteria do not take the current-generation portal monitors' full potential into account because DNDO has not completed efforts to improve their performance. With regard to ASP testing, we found that DHS increased the rigor in comparison with previous tests, thereby adding credibility to the test results, but that preliminary results were mixed. The results showed that the new portal monitors performed better than current-generation portal monitors in detection of certain nuclear materials concealed by light shielding approximating the threat guidance for setting detection thresholds, but differences in sensitivity were less notable when shielding was slightly below or above that level. 
Testing also uncovered multiple problems in ASPs meeting the requirements for successful integration into operations at ports of entry. Finally, we found that DNDO did not plan to complete computer simulations that could provide additional insight into ASP capabilities and limitations prior to certification even though delays to testing allowed more time to conduct the simulations. Recommendations. We recommended that the Secretary of Homeland Security direct the Director of DNDO to assess whether ASPs meet the criteria for a significant increase in operational effectiveness based on a valid comparison with current-generation portal monitors' full performance potential and revise the schedule for ASP testing and certification to allow sufficient time for review and analysis of results from the final phases of testing and completion of all tests, including computer simulations. If ASPs are certified, we further recommended that the Secretary of Homeland Security direct the Director of DNDO to develop an initial deployment plan that allows CBP to uncover and resolve any additional problems not identified through testing before proceeding to full-scale deployment. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

The Department of Homeland Security's (DHS) Domestic Nuclear Detection Office (DNDO) is responsible for addressing the threat of nuclear smuggling. Radiation detection portal monitors are key elements in the nation's defenses against such threats. 
DHS has sponsored testing to develop new monitors, known as advanced spectroscopic portal (ASP) monitors, to replace radiation detection equipment being used at ports of entry. DNDO expects that ASPs may offer improvements over current-generation portal monitors, particularly the potential to identify as well as detect radioactive material and thereby to reduce both the risk of missed threats and the rate of innocent alarms, which DNDO considers to be key limitations of radiation detection equipment currently used by Customs and Border Protection (CBP) at U.S. ports of entry. However, ASPs cost significantly more than current-generation portal monitors. Due to concerns about ASPs' cost and performance, Congress has required that the Secretary of Homeland Security certify that ASPs provide a significant increase in operational effectiveness before obligating funds for full-scale ASP procurement. This testimony addresses (1) GAO findings on DNDO's latest round of ASP testing, and (2) lessons from ASP testing that can be applied to other DHS technology investments. These findings are based on GAO's May 2009 report GAO-09-655 and other related reports. GAO's report on the latest round of ASP testing found that DHS increased the rigor in comparison with previous tests and thereby added credibility to the test results. However, GAO's report also questioned whether the benefits of the ASPs justify the high cost. In particular, the DHS criteria for a significant increase in operational effectiveness require only a marginal improvement in the detection of certain weapons-usable nuclear materials, which DNDO considers a key limitation of current-generation portal monitors. The marginal improvement required of ASPs is particularly notable given that DNDO has not completed efforts to fine-tune current-generation equipment to provide greater sensitivity. 
Moreover, the preliminary test results show that ASPs performed better than current-generation portal monitors in detection of such materials concealed by light shielding approximating the threat guidance for setting detection thresholds, but that differences in sensitivity were less notable when shielding was slightly below or above that level. Finally, DNDO has not yet updated its cost-benefit analysis to take into account the results of the latest round of ASP testing and does not plan to complete computer simulations that could provide additional insight into ASP capabilities and limitations prior to certification even though test delays have allowed more time to conduct the simulations. DNDO officials believe the other tests are sufficient for ASPs to demonstrate a significant increase in operational effectiveness. GAO recommended that DHS assess ASPs against the full potential of current-generation equipment and revise the program schedule to allow time to conduct computer simulations and to uncover and resolve problems with ASPs before full-scale deployment. DHS agreed to a phased deployment that should allow time to uncover ASP problems but disagreed with the other recommendations, which GAO believes remain valid. The challenges DNDO has faced in developing and testing ASPs illustrate the importance of following best practices for investments in complex homeland security acquisitions and for testing of new technologies. GAO recently found that many major DHS investments, including DNDO's ASP program, had not met the department's requirements for basic acquisition documents necessary to inform the investment review process, which has adopted many acquisition best practices. As a result, DHS had not consistently provided the oversight needed to identify and address cost, schedule, and performance problems in its major investments. 
A primary lesson to be learned regarding testing is that the push to replace existing equipment with the new portal monitors led to an ASP testing program that until recently lacked the necessary rigor. Even for the most recent round of testing, DNDO's schedule consistently underestimated the time required to conduct tests and resolve problems uncovered during testing. In contrast, GAO has previously found that testing programs designed to validate a product's performance against increasing standards for different stages in product development are a best practice for acquisition strategies for new technologies. Aspects that improved the latest round of ASP testing could also, if properly implemented, provide rigor to DHS's testing of other advanced technologies. |
When disasters such as floods, tornadoes, or earthquakes strike, state and local governments are called upon to help citizens cope. Assistance from the Federal Emergency Management Agency (FEMA) may be provided if the President, at a state governor's request, declares that an emergency or disaster exists and that federal resources are required to supplement state and local resources. The Robert T. Stafford Disaster Relief and Emergency Assistance Act (42 U.S.C. 5121 and following) authorizes the President to issue major disaster or emergency declarations and specifies the types of assistance that the President may authorize. The scope of authorized assistance is smaller for emergencies than for major disasters. Generally, FEMA's public assistance program (also called the "infrastructure" program) provides financial and other assistance to restore or rebuild disaster-damaged facilities that serve a public purpose. Under the Stafford Act, FEMA may make public assistance grants to state and local governments and certain nonprofit organizations for the repair of a range of facilities, including government buildings, water distribution systems, parks and recreational facilities, and public utilities. Generally, the grants are to cover not less than 75 percent of eligible costs. The act also provides that FEMA's grants for permanent restoration may include work designed to mitigate the effects of future disasters—in effect, to lessen or prevent future damages by making the facilities better able to withstand disaster events. As of August 1995, FEMA had obligated a total of over $6.5 billion (constant 1995 dollars) in public assistance grants for major disasters declared during fiscal years 1989 through 1994. Generally, FEMA provides public assistance grants to repair or restore the facilities of states, municipalities, and other local government entities. 
In addition, grants may go to private, nonprofit organizations that own and operate certain types of damaged facilities. The grants are made for three general purposes: debris removal, emergency protective measures, and permanent restoration. Emergency protective measures are activities undertaken to save lives and protect the public’s health and safety; examples include search and rescue operations, security measures, the provision of temporary transportation or communication facilities, and demolition and removal of damaged structures that pose a safety threat to the general public. Generally, permanent restoration work is aimed at restoring a facility to perform its pre-disaster function; however, it may include work designed to mitigate the effects of future disasters. FEMA categorizes facilities eligible for permanent restoration as follows: Roads and bridges—non-federal-aid roads, highways, and bridges. Water control facilities, including dams, levees, drainage channels, shore protection devices, and pumping facilities. Buildings and equipment, including the contents of buildings as well as equipment such as vehicles (for example, fire trucks or police cars). Utilities. Parks, recreational, and other facilities, including playground equipment, swimming pools, boat docks and piers, bath houses, tennis courts, picnic tables, golf courses, and some trees and landscape features. In addition, FEMA makes public assistance grants to cover a portion of the cost of administering grants for the above purposes. The amounts of grants for administrative expenses are determined by a formula that takes into account the total amount of public assistance grants provided to grantees following the disaster. As shown in table 1.1, the amounts that FEMA has obligated for public assistance have increased substantially for the disasters and emergencies declared in recent years, exceeding $2 billion for fiscal year 1994 alone. 
In constant 1995 dollars, FEMA obligated over $6.5 billion in public assistance for 246 disasters and emergencies declared during fiscal years 1989 through 1994, as compared with about $1 billion for 151 disasters and emergencies declared during the preceding 6 fiscal years. FEMA could not readily provide data showing the amounts obligated for the permanent restoration of each category of facilities; however, FEMA provided data showing the projected cost for each category. The projected costs are FEMA’s best estimates of what the total obligations will be when all activities associated with the disaster are completed. As shown in table 1.2, for disasters that occurred during fiscal years 1989 through 1994, the public assistance category with the highest projected cost was the permanent restoration of public buildings and equipment—over $2.6 billion, or about one-third of the total projected public assistance costs for these disasters. To help verify the scope of work needed for individual public assistance projects and the estimated costs, FEMA contracts with technical specialists such as architect/engineering firms. For disasters declared during fiscal years 1989 through 1994, FEMA has obligated about $71.4 million for this purpose. Under section 404 of the Stafford Act, FEMA may provide additional grants to mitigate the damage from future disasters, for example, to strengthen or retrofit undamaged public facilities in the disaster area. These grants can cover up to 75 percent of the cost of the mitigation effort. For the disasters declared during fiscal years 1989 through 1994, FEMA has obligated about $275.3 million for this purpose. Also, when a disaster is declared, FEMA may make “mission assignments” directing other federal agencies to perform work. Mission assignments may be made for a number of purposes, including those related to restoring public services or facilities; for example, FEMA may assign the U.S. 
Army Corps of Engineers the mission of debris removal in a disaster area. According to FEMA, mission assignments are primarily for public assistance work. For disasters declared in fiscal years 1989 through 1994, FEMA obligated about $1.07 billion for mission assignments. According to a 1995 Senate report, for fiscal years 1990 through 1993 FEMA obligated nearly two-thirds of all mission assignment dollars to the Corps of Engineers. A number of factors may help explain the trend toward increasing costs, including an increase in the number of declared disasters and emergencies and the incidence of unusually large disasters. The period encompassing fiscal years 1989 through 1994 included some very destructive and costly disasters, including hurricanes Andrew and Iniki in 1992, the Midwest floods of 1993, and the Northridge (California) earthquake in 1994. FEMA estimates that the total public assistance costs of the Northridge earthquake alone will exceed $3.4 billion. Additionally, more facilities have gradually become eligible for public assistance. The Stafford Act is an expansion of the first permanent authority (P.L. 81-875) enacted in 1950 to provide disaster assistance on a continuing basis without the need for congressional action. Over the years, the Congress has generally increased eligibility for public assistance through legislation that expanded the categories of assistance and/or specified persons or organizations eligible to receive assistance. In some cases, legislation also imposed requirements as a condition of eligibility. (App. I provides a chronology of major legislative changes affecting public assistance eligibility.) Also, FEMA has made regulatory changes that may have expanded the federal cost of the public assistance program, according to a July 1995 report by FEMA’s Inspector General. 
These included changes in (1) the building codes applicable to the repair and restoration of damaged buildings and (2) the damage threshold governing the decision on whether to repair or replace a damaged facility. Under authorities other than the Stafford Act, federal agencies provide financial assistance for the permanent repair or restoration of certain public facilities—an important factor in determining the eligibility of some of FEMA’s public assistance. The Federal Highway Administration’s (FHWA) emergency relief program funds 80 percent of the costs of permanently restoring federal-aid roads or highways (90 percent for Interstate highways) that have been seriously damaged by natural disasters. By law, FHWA can provide up to $100 million in emergency relief funding to a state for each natural disaster or catastrophic failure (such as a bridge collapse) that is found eligible; however, the Congress has passed special legislation lifting the cap for specific disasters. In fiscal years 1989 through 1994, FHWA obligated over $2.5 billion for its emergency relief program. The Department of Agriculture’s (USDA) Emergency Watershed Protection program funds, among other things, a portion of the costs of repairing certain nonfederal levees and other water control works damaged by flooding. The program is applicable to small-scale, localized disasters as well as those of national magnitude. For fiscal years 1989 through 1994, USDA received about $494 million in appropriations for this program. Also, the U.S. Army Corps of Engineers funds 80 percent of the costs to repair qualifying flood-damaged nonfederal levees. To qualify for the Corps’ funding, levees must be publicly sponsored by entities such as levee districts and municipalities. The Corps obligated about $54.5 million for the program during fiscal years 1989 through 1994. 
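The cost-share and cap rules described above combine into a simple calculation. As a rough sketch (the function name and dollar figures are illustrative, and the real program's eligibility rules are more detailed than this):

```python
def fhwa_emergency_relief_share(eligible_cost, interstate=False, cap=100_000_000):
    """Federal share under FHWA's emergency relief program, as described
    above: 80 percent of eligible restoration costs (90 percent for
    Interstate highways), subject to the $100 million per-state,
    per-disaster statutory cap unless Congress lifts it by special
    legislation. Simplified sketch; not FEMA or FHWA code."""
    rate = 0.90 if interstate else 0.80
    return min(rate * eligible_cost, cap)

# Hypothetical disasters: modest damage is shared pro rata, while very
# large damage runs into the statutory cap.
print(fhwa_emergency_relief_share(50_000_000))                    # 40000000.0
print(fhwa_emergency_relief_share(150_000_000, interstate=True))  # 100000000 (capped)
```

The cap applies per disaster per state, which is why a single catastrophic event can leave a state's restoration costs well above the federal share unless the Congress lifts the cap.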
The Department of Housing and Urban Development (HUD) provides financial assistance to public housing authorities for the modernization (physical improvement) of public housing. Distributed by formulas, the funds may be used to meet modernization needs resulting from natural and other disasters and from emergencies. In addition, HUD administers a $75 million reserve fund (established in 1992) specifically for disaster- and emergency-related modernization needs. HUD could not readily provide the amount of formula funds used for repairing disaster-damaged public housing; about $62 million from the emergency reserve fund was allocated during fiscal years 1993 through 1994. Also, funds provided under HUD's Community Development Block Grant (CDBG) program may be used for disaster recovery. CDBG funds may be used for some purposes similar to those for which FEMA's public assistance funds are used, including clearing debris, providing extra security, reconstructing essential utility facilities, and, in some cases, repairing or reconstructing government buildings. HUD officials could not provide accurate data on the amount of CDBG funds used for disaster assistance. Program appropriations for fiscal years 1989 to 1994 ranged from about $3.1 billion to about $4.4 billion. The Chairman, Subcommittee on VA, HUD and Independent Agencies, Senate Committee on Appropriations, asked GAO to review FEMA's criteria for determining eligibility for public assistance, determine how FEMA ensures that public assistance funds are expended only for eligible items, and identify changes in eligibility that could lower the costs of federal public assistance in the future. To review FEMA's criteria for determining eligibility, we reviewed the Stafford Act and related regulations, FEMA's public assistance manual and policy memorandums, and other relevant documents. 
We interviewed public assistance officials at FEMA's Washington, D.C., headquarters, including the Engineering Branch Chief, Infrastructure Support Division. We also interviewed officials at FEMA's regional office in San Francisco and its disaster field office in Pasadena, California. We selected these field locations because of their responsibility for administering the public assistance program for the Northridge earthquake and other significant disasters. At the field locations, we documented the steps involved in approving projects and reviewed files pertaining to specific public assistance projects. We also interviewed officials of FEMA's Office of Inspector General (OIG) and reviewed OIG reports pertaining to public assistance generally and to specific disasters. In addition, we incorporated information obtained in telephone interviews with public assistance officials in each of FEMA's 10 regions (see below). To determine how FEMA ensures that funds are expended only for eligible items, we reviewed FEMA's written guidance and procedures for disbursing funds to public assistance grantees. At FEMA's headquarters, the California locations, and FEMA's regional office in Atlanta, Georgia, we interviewed financial officials, including FEMA's Deputy Chief Financial Officer; public assistance personnel of the Response and Recovery Directorate; and state personnel. We also examined the relevant financial standards and requirements imposed by the Office of Management and Budget. Because FEMA's process involves audits, we interviewed OIG officials, including the Deputy Inspector General, and obtained OIG and contractors' audit reports of public assistance projects. We also obtained from the OIG information about the extent of its audit coverage. We also interviewed and obtained documents from the Price Waterhouse auditors contracted by FEMA. 
To identify the changes in eligibility criteria that could potentially reduce the costs of public assistance in the future, we examined published literature and reports, including reports by FEMA’s Inspector General. We also surveyed public assistance officials in each of FEMA’s 10 regional offices to obtain their ideas for reducing the future costs of public assistance, including the rationale for each proposal and the likely impacts. (Because of limitations on the availability of FEMA’s financial data, we were generally unable to estimate the potential impacts on public assistance expenditures.) We specifically surveyed these officials because they work closely with the program on a day-to-day basis and are knowledgeable about the application of FEMA’s public assistance criteria. To balance their perspectives, we also asked the National Emergency Management Association—an organization of state emergency management officials—and the Association of State Floodplain Managers to comment on the proposals cited by the FEMA regional officials. (A list of the proposals not discussed elsewhere in this report and additional details on our methodology are contained in app. II.) FEMA provided historical data on its financial obligations and cost projections. We did not independently verify the accuracy of this information. In March 1995, we testified that because FEMA’s Disaster Relief Fund (which accounts for the majority of the agency’s funds) has not been subject to audit, there is no assurance that the fund’s financial data are accurate. In July 1995, FEMA’s Inspector General reported that FEMA’s accounting system lacks the internal controls and discipline necessary to ensure the integrity of financial data. We provided a draft of this report to FEMA for its review and comment. FEMA provided comments in a letter from the Director; this letter and our response are in appendix III. We modified the report where appropriate in response to the comments. 
Our review was conducted from August 1995 through March 1996 in accordance with generally accepted government auditing standards. The Stafford Act provides a general framework for federal assistance programs for public losses sustained in disasters. Within that framework, FEMA provides the basic criteria for determining the work that is eligible for public assistance funding. While applying the criteria inevitably entails some subjectivity, we found ambiguities in the criteria that created difficulties in determining (1) the extent to which the permanent restoration of disaster-damaged facilities is eligible for funding and (2) the eligibility of the facilities of private nonprofit applicants. We also found that until recently, FEMA had not systematically updated or disseminated policy changes to the regional officials involved in making eligibility determinations. The decisions on eligibility effectively determine the level of federal spending for public assistance, affecting the amounts of grants and of FEMA's and applicants' administrative costs. Additionally, without clear, up-to-date criteria, inconsistent or inequitable eligibility determinations and time-consuming appeals by grantees and subgrantees may be more likely to occur. The importance of clear criteria is heightened because in large disasters FEMA often uses temporary personnel with limited training to help prepare and process applications. FEMA and other officials have recognized a general need for clearer criteria and improved policy dissemination to help determine eligibility for public assistance. For disasters declared in fiscal years 1989 through 1994, FEMA projects that public assistance grants for permanent repairs and restorations will total over $5.2 billion (in 1995 dollars). The decisions made on the eligibility of work on facilities are based on the general criteria for determining federal public assistance and on the criteria specific to such facilities. 
In order to apply these criteria, FEMA officials may have to make subjective judgments because the criteria lack specificity and/or concrete examples. FEMA requires that potential applicants prepare a list of all damaged sites and equipment or inventory lost, provide photos or site sketches, and provide information on insurance coverage and applicable codes and standards. A survey team—consisting of FEMA, other federal, state, and/or local officials—inspects each damage site and reviews the applicable records to determine the extent of the disaster damage, the scope of the eligible work, and the estimated cost of that work. This information is recorded on a damage survey report (DSR). DSRs are reviewed by FEMA officials located at the relevant regional offices or, in the case of larger disasters, at disaster field offices near the disaster areas. The reports are reviewed to verify the scope of work, cost calculations, adequacy of documentation, eligibility, and compliance with special requirements, such as those for floodplain management and hazard mitigation. (For facilities that are approved, the DSR serves as the basis for obligating FEMA funds.) Once it is determined that an applicant is eligible for federal public assistance, the next step is to identify what work is eligible for such assistance. Three general criteria apply to all types of work for all applicants:
- The work must be required as a direct result of the declared disaster. Primarily, damages that occur during the "incident period," or that are the direct result of events that occurred during the incident period, are considered for eligibility. Also potentially eligible are (1) protective measures and other preparation activities performed within a reasonable time in advance of the event and (2) damages that occur after the close of the period that can be tied directly to the declared event. For example, a landslide caused by heavy rains may not occur until some time after the rains have stopped.
- The damages must have occurred, and the work or activity must be performed, within the designated disaster area. A presidential disaster declaration authorizes federal assistance in the affected state; FEMA determines which counties within the state will receive assistance and the type(s) of assistance. Other political subdivisions, such as a city or special district, may be designated, but the county is the most common unit of designation.
- The work or expense must be the legal responsibility of the applicant. Generally, ownership of a facility is sufficient to establish responsibility for repairs to the facility.
According to FEMA regional officials, applying the criteria for public assistance can be difficult. Among the more problematic issues is determining the standards (building codes) that are applicable to repair/restoration work, which in turn affect decisions on whether facilities should be repaired or replaced. Generally, FEMA's criteria define eligible work as that needed to restore the facility on the basis of the design of the facility as it existed immediately before the disaster and in accordance with certain other conditions; in some instances, grant funds may be used to replace a facility entirely or for an alternative facility. One condition is that permanent restoration work must comply with the applicable standards. Such standards must:
- apply to the type of repair or restoration being performed (for example, there may be different standards for repair and for new construction);
- be appropriate to the pre-disaster use of the facility;
- be in writing and formally adopted by the applicant before the project is approved;
- apply uniformly to all similar types of facilities within the jurisdiction of the owner of the facility; and
- if in effect at the time of the disaster, have been enforced during the time they were in effect. 
Furthermore, to be considered "applicable," the standards must be in a formally adopted written ordinance of the jurisdiction in which the facility is located, or be a state or federal requirement. The standards do not necessarily have to be in effect at the time of the disaster; if the applicant adopts new standards before FEMA has approved the damage survey report for the permanent restoration of a facility in the jurisdiction, the work done to meet these standards may be eligible for public assistance. As discussed in chapter 4, FEMA regional officials cited a need to better define which authorities may adopt and approve standards. They suggested that clarifying the language in the regulations to define who has the authority to adopt and approve standards might reduce the costs and the confusion that surround this issue. Also, to be applicable to the eligible facility, FEMA requires that the standards must be applied to all similar types of facilities. However, there are no criteria that (1) specify a time period during which the newly adopted standards must be in place after the eligible facility is funded or (2) define "similar" facilities. According to a public assistance official at FEMA headquarters, FEMA can determine whether or not post-disaster standards proposed for the restoration of a facility are "reasonable" before making a funding decision. However, he said that there are no written criteria for determining reasonableness. He added that FEMA assumed that public scrutiny during the adoption process would discourage unreasonable standards, because the standards have to be applicable to all similar facilities, whether publicly or privately owned. However, this approach is not without problems. 
For example, the official noted that a standard adopted for hospitals after the Northridge earthquake, although not necessarily unreasonable, provided for extensive upgrading for relatively little damage, and there was not the self-policing effect that FEMA had expected. FEMA inspectors involved in assessing the damages following the Northridge earthquake said that determining the applicability of standards appears especially problematic in the case of earthquakes. FEMA and its applicants have had significant disagreements about the applicability of standards to eligible facilities following both the Loma Prieta and Northridge earthquakes. A source of contention between FEMA and some applicants surrounds the applicability of triggers—integral parts of the building codes signaling the point at which various upgrades, or the replacement of an entire facility, must be undertaken. Further complicating this problem is the fact that in the case of earthquakes, some structural damage may not be apparent upon first inspection. Determining what standards are applicable to permanent restoration affects whether or not facilities will be replaced entirely. FEMA provides that if repairing a facility (in accordance with standards applicable to repairs) would cost 50 percent or more of the cost of replacing the facility to its pre-disaster design (in accordance with the standards applicable to new construction), then the facility is eligible for replacement in accordance with the new construction standards. Before 1970, private nonprofits were not eligible for public assistance. As detailed in appendix I, the Congress enacted legislation over the next few years that expanded the number and types of private nonprofit organizations eligible for assistance. Public assistance for private nonprofits has averaged about $60 million annually during the 1990s. 
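The repair-or-replace threshold described above reduces to a single comparison. A minimal sketch, with hypothetical costs and an illustrative function name:

```python
def eligible_for_replacement(repair_cost, replacement_cost):
    """FEMA's '50 percent' test as described above: a facility is
    eligible for full replacement if repairing it (under the standards
    applicable to repairs) would cost 50 percent or more of replacing
    it to its pre-disaster design (under the standards applicable to
    new construction). Simplified sketch, not FEMA's actual procedure,
    which also weighs which standards are 'applicable' to begin with."""
    return repair_cost >= 0.5 * replacement_cost

# Hypothetical facility: a $6 million repair against a $10 million
# replacement meets the threshold; a $3 million repair does not.
print(eligible_for_replacement(6_000_000, 10_000_000))  # True
print(eligible_for_replacement(3_000_000, 10_000_000))  # False
```

Because both sides of the comparison depend on which standards are found applicable, disputes over standards (as in the Northridge hospital cases) can swing the outcome of this test, and with it millions of dollars in federal obligations.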
In addition to making specific facilities, such as schools and hospitals, eligible, Public Law 100-707 (enacted in 1988) established a category of "other" eligible private nonprofit organizations, defined as "other private nonprofit facilities which provide essential services of a governmental nature to the general public" (42 U.S.C. 5122). When developing regulations to implement the legislation, FEMA relied on an accompanying House report to define the "other" category. The report's examples included museums, zoos, community centers, libraries, shelters for the homeless, senior citizens' centers, rehabilitation facilities, and sheltered workshops. FEMA's regulations incorporated the list of examples from the House report but recognized that other, similar facilities could be included. FEMA experienced problems in applying this regulation because, among other things, the wide range of services provided by state and local governments made it difficult to determine whether the services of a private nonprofit facility were of a governmental nature. In 1993, FEMA amended its regulations to limit eligible "other" private nonprofit facilities to those specifically included in the House report and those facilities whose primary purpose is the provision of health and safety services. It can be difficult to determine the eligibility of these other private nonprofit facilities. The Federal Coordinating Officer (the person in charge of FEMA's recovery efforts) for the Northridge earthquake said that clearer eligibility criteria are needed to determine whether private nonprofit facilities may qualify as "community centers." Specifically, there has been much debate over the extent to which a facility must be open to the public in order to be eligible. In the past, FEMA gave many facilities the benefit of the doubt and funded them, even though it appeared that these facilities were not really open to the public. 
FEMA's Inspector General has cited examples of private nonprofits that do not appear to provide essential government services, yet received FEMA public assistance. In a July 1995 report, the Inspector General pointed out three private nonprofits that FEMA found eligible following the January 1994 Northridge earthquake and concluded that they did not appear to provide essential government services:
- A contemporary dance foundation received public assistance to repair damage to its building caused by the earthquake because it provided a dance program for underprivileged children. As of the beginning of April 1996, FEMA had obligated about $120,000 in public assistance funds to the foundation.
- A small performing arts theater received public assistance for earthquake damages because it offered discount tickets to senior citizens and provided acting workshops for youth and seniors. As of the beginning of April 1996, FEMA had obligated about $1.5 million in public assistance to the theater.
- An institute, used primarily as a retreat center for youth of a particular religion but also open to youth and senior citizens' groups of other religions, received public assistance for earthquake damage. As of the beginning of April 1996, FEMA had obligated about $4.8 million in public assistance funds to the institute.
To supplement its regulations and to help public assistance personnel interpret them, FEMA developed a manual entitled "Public Assistance: Guide for Applicants." The manual (hereafter referred to as the public assistance manual), published as a draft in September 1992, has not been revised and is thus not entirely consistent with the 1993 regulation's definition of "other" private nonprofit facilities. Furthermore, the manual does not define "essential services" or "governmental nature," nor does it make clear the extent to which the facilities must be used to provide services to the general public in order to be eligible. 
At least partly in response to its experience following the Northridge earthquake, FEMA has revised the definition of community center so that the primary purpose of a facility must be "community oriented." A FEMA headquarters official told us in early April 1996 that the agency was again developing a definition for community centers, but because of problems in developing the definition, it may be some time before the definition is ready to be issued. Eligibility decisions effectively determine the level of federal spending for public assistance. Determining whether a facility is eligible, and the appropriate scope of work, can affect the expenditure of millions of federal dollars. For example, determining which standards are applicable to earthquake-damaged facilities found eligible for assistance may have enormous federal cost implications. Originally, FEMA determined that one hospital damaged in the Northridge earthquake was eligible for $3.9 million for repairs and an additional $2.9 million for cost-effective seismic upgrading. California, as the grantee, argued that the hospital was eligible for $64 million because an alternate set of standards was applicable. (Ultimately, as described in ch. 4, FEMA and the state reached an agreement whereby FEMA will provide $29 million.) 
The Inspector General noted that the total costs of some Northridge projects far exceeded the actual repair costs because of the upgrades and other items "triggered" by the standards found to be applicable. Furthermore, to the extent that the lack of clear criteria contributes to the number of appeals, FEMA's administrative costs are increased. Any decision on eligibility for assistance may be appealed by a potential recipient. If necessary, the applicant can formally appeal to three levels: the FEMA Regional Director, the Associate Director of the Response and Recovery Directorate, and the Director of FEMA. Each appeal is processed through the state for review and comment before being forwarded to FEMA. The Inspector General's report pointed out that it is not unusual for the appeals process to take more than 2 years to complete and concluded that the federal government could save considerable staff time and money if the appeals process were shortened. According to FEMA officials, between fiscal year 1990 and the end of fiscal year 1995, there were 882 first-level appeals of public assistance eligibility determinations. FEMA headquarters began logging second- and third-level appeals in January 1993 and could not quantify the number of such appeals before then. Between January 1993 and the end of March 1996, there were 104 second-level appeals and 30 third-level appeals. Although FEMA may always expect some appeals, clearer guidance on applying eligibility criteria could help reduce their number. In the case of the Northridge earthquake recovery effort, disagreements over applicable standards have caused additional expenses for both FEMA and some applicants. For example: FEMA has funded applicants' costs for architectural and engineering evaluations to help ascertain the degree of structural damage. 
In cases in which FEMA officials disagreed with the evaluations, FEMA has incurred additional expense by conducting its own architectural and engineering studies. Applicants have pointed out an increasing need to hire contractors who specialize in interpreting FEMA's public assistance program. A lack of clarity in the eligibility criteria was cited as the reason for disagreements still outstanding between FEMA and the state of California nearly 6 years after the Loma Prieta earthquake. Finally, in part because of the large dollar implications, the lack of clarity in FEMA's criteria may encourage potential applicants to make the most of opportunities for assistance. In our 1992 report on the recovery from the Loma Prieta earthquake, we discussed a lack of criteria on hazard mitigation, historic buildings, and private nonprofit applicants. At the time, FEMA regional officials told us that, lacking specific guidelines to implement the criteria, they sought to "moderate the drain on federal disaster funds, while local applicants sought to maximize assistance." Similarly, in the course of preparing our June 1994 report on the potential impediments to rebuilding after the Northridge earthquake, federal officials said that state and local governments often try to maximize federal contributions. Ambiguities in the existing criteria for public assistance echo a lack of clear criteria for determining that disaster damage warrants federal assistance—i.e., a presidential disaster declaration—which we have reported on previously. As a prerequisite to federal disaster assistance under the Stafford Act, a governor must take "appropriate response actions" and provide information on the nature and amount of state and local resources committed to alleviating the results of the disaster; the President then decides whether federal assistance is needed to supplement state and local resources. 
However, the act does not identify the criteria for evaluating governors’ requests. FEMA’s Inspector General reported in 1994 that (1) neither a governor’s findings nor FEMA’s analysis of capability is supported by standard factual data or related to published criteria and (2) FEMA’s process does not always ensure equity in disaster decisions because the agency does not always review requests for declarations in the context of previous declarations. We previously reported that disclosing the process for evaluating requests would help state and local governments determine the circumstances that warrant federal assistance. The need for clearer, more definitive FEMA criteria dealing with eligibility for public assistance takes on added importance because of FEMA’s use of temporary personnel with limited training to help prepare and process DSRs, which are used in determining the scope of work eligible for funding. The Federal Coordinating Officer for the Northridge earthquake told us that better criteria and guidelines ultimately result in better DSRs. (As discussed in ch. 3, FEMA has limited control over funds following DSR approval; consequently, criteria and/or training that would help improve DSR preparation may help ensure that funds are used only for eligible items.) The number of large disasters during the 1990s has resulted in a great number of DSRs. For example, after the Northridge earthquake, over 17,000 DSRs were prepared; after the 1993 Midwest floods, over 48,000 DSRs were prepared in nine states. The combination of inexperienced personnel forced to do staggering amounts of work in a limited amount of time highlights the need for clear and comprehensive criteria. FEMA regional officials working on the recovery from the Northridge earthquake pointed out a need to develop training for FEMA inspectors. They said that the lack of training directly results in poor quality DSRs that may cause overpayments or underpayments to public assistance recipients. 
They added that increased training is also needed to ensure the standardization of eligibility determinations across the country. The lack of standardization could cause inconsistent determinations because, in large disasters such as Northridge, FEMA may send in staff from different regions of the country. Our July 1992 report on the recovery from the Loma Prieta earthquake pointed out that FEMA’s customary reliance on emergency reserve staff, who usually stayed only a few months, led to discontinuity and inefficiency. The applicants complained that each time a new FEMA representative took over a case, that person had to duplicate the agency’s previous efforts to examine the damage, review the documentation, and learn the complexities. Similarly, a FEMA summary of disaster response and recovery operations after flooding in Kansas in the summer of 1993 pointed out that attempts to expedite public assistance inspections met with immediate roadblocks because, in part, FEMA’s pool of available inspectors was quickly exhausted and because the training of inexperienced inspectors consumed numerous staff days that could have been used more productively to prepare damage surveys. FEMA officials told us that early on in a disaster, a number of people with very different levels of experience are involved in the damage survey process. FEMA and California officials told us that training is often not adequate, resulting in DSRs that are deficient and that hinder FEMA officials in making determinations about project eligibility. Officials involved in inspecting the damage sites from the Northridge earthquake said that early in the recovery effort, they made incorrect decisions on eligibility. One inspector told us that some damage survey reports prepared soon after the earthquake included work that had been specifically ruled ineligible after the Loma Prieta earthquake 5 years earlier. 
For determining eligibility for public assistance, FEMA’s written guidance supplementing the regulations consists of the draft public assistance manual and policy memorandums. A FEMA task force developed the regulations following the Stafford Act. According to a FEMA regional official who was a member of the task force, the regulations were intended to be supplemented with guidance, examples, and training to clarify the eligibility criteria and help ensure their consistent application; however, this supplementation has not occurred as envisioned. According to a FEMA headquarters official, the agency has been unable to complete the public assistance manual since 1992 because of the significant workload caused by the large number of disasters. A FEMA contractor responsible for reviewing DSRs noted that various decisions made in determining eligibility following a disaster have not been systematically codified or otherwise made easily available to FEMA personnel to serve as precedents. FEMA inspectors told us that there have been a number of policy changes throughout the course of the recovery from the Northridge earthquake, but there is no central source where the changes are recorded. They added that some agreements, made by personnel who have since rotated, were never put into writing. Also, the Federal Coordinating Officer for Northridge told us that some policy decisions have been informal and unwritten. FEMA, state, and local officials have generally identified the need for (1) clearer criteria to help determine eligibility for public assistance and (2) better training for inspectors. The Inspector General’s July 1995 report pointed out that there is a demand for increasingly specific criteria because significant and numerous changes in eligibility in the public assistance program over the past 25 years have created substantial, time-consuming, and expensive disagreements with applicants. 
The Inspector General found that virtually every applicant interviewed complained that the federal criteria governing the program were not sufficiently specific; as a result, the applicants contended that neither they nor FEMA staff can easily and consistently determine eligibility and appropriate costs. FEMA officials told us that a major problem in the Northridge earthquake recovery effort has been the difficulty of determining what is eligible for FEMA funding. At a January 1996 hearing, the Director of FEMA said that in previous disasters, FEMA staff worked without having policies in place that addressed public assistance. He added that determining what is and is not eligible for assistance has been difficult. He said that FEMA is developing criteria to address these areas. A FEMA headquarters official added that FEMA plans to complete the public assistance manual before the end of fiscal year 1996. The eligibility criteria will not differ significantly from those in the draft manual; however, according to the official, FEMA plans to begin updating and supplementing the manual immediately after it is issued. The Northridge Federal Coordinating Officer noted that FEMA has recently taken steps to improve policy dissemination. He offered as examples (1) a compendium of policy material compiled by one FEMA regional office, which FEMA headquarters is circulating to the other regions; (2) the development of a new system of disseminating policy memorandums, including a standardized format and numbering system; and (3) the dissemination by headquarters of the results of second- and third-level appeals to all regional offices. FEMA has also identified a need to better train inspectors. In March 1996, a training division official at FEMA headquarters said that the agency held the first session of a new training course in February 1996. For the remainder of fiscal year 1996 and fiscal year 1997, the division projects an additional 13 courses. 
However, the official added that a major constraint on meeting these projections is currently a lack of qualified instructors. The targeted audiences are full-time FEMA staff and disaster assistance reservists, as well as employees of other federal agencies, such as the Corps of Engineers, that assist FEMA on an as-needed basis to inspect damage. The bulk of the course is devoted to the subjects of eligibility for the public assistance program and DSR operations. Clearer and more comprehensive criteria, supplemented with specific examples and systematically disseminated, could help ensure that eligibility determinations are consistent and equitable and could help control the costs of future public assistance. To the extent that the criteria are more restrictive, the costs of public assistance in the future could be less than they would otherwise be. In the 1990s, the potential adverse effects of a lack of clear criteria have become more significant because of (1) an increase in large, severe disasters and (2) the need to use temporary employees with limited training in the process of inspecting damage and preparing damage survey reports. We recommend that the Director of FEMA issue criteria that more clearly and comprehensively identify what facilities and work are eligible for public assistance and develop a system for disseminating these and future changes in criteria to FEMA regional staff. The Director should specifically clarify the criteria for determining the extent to which the permanent restoration of disaster-damaged facilities is eligible for funding and the eligibility of private nonprofit facilities. In accordance with a governmentwide effort to simplify federal grant administration, FEMA relies on the states—in their role as grantees—to ensure that expenditures are limited to eligible items. 
The states certify to FEMA at the completion of each subgrantee’s project and the closeout of each disaster that all disbursements of public assistance grants have been in accordance with approved DSRs. Additional controls over disbursements include audits of subgrantees by (1) independent auditors pursuant to the Single Audit Act of 1984 and (2) FEMA’s Office of Inspector General (OIG), with possible augmentation by state audit agencies. Audits by the Inspector General have identified disbursements for ineligible items—that is, for items not authorized by approved DSRs. We believe that the Inspector General’s findings, in light of FEMA’s reliance on the states for financial controls after DSRs are approved, reinforce the need for clearer criteria to guide the process of determining eligibility for public assistance funds. In October 1988, as part of a governmentwide effort to standardize federal grant administration, FEMA implemented the “Uniform Administrative Requirements for Grants and Cooperative Agreements to State and Local Governments.” For the public assistance program, the states became FEMA’s only grantees, and all other recipients—including state agencies, local governments, and eligible private nonprofits—became subgrantees of the states. (Previously, all public assistance recipients had dealt directly with FEMA.) Under the uniform requirements, it is the states’ responsibility, rather than FEMA’s, to ensure that all costs applied against FEMA funding are eligible. The states, as grantees, must comply with the applicable regulations and FEMA’s guidance to ensure that federal funds are properly used and accounted for. Among other things, the uniform requirements provide that the states must (1) develop a plan to administer the program, (2) establish appropriate budget and accounting records and procedures, and (3) comply with the applicable circulars from the Office of Management and Budget (OMB). 
For example, Circulars A-87 and A-122 set forth cost principles for state and local governments and nonprofit organizations, and Circular A-110 has special rules for grants to hospitals, educational institutions, and nonprofit organizations. When FEMA approves a DSR, it obligates an amount equal to the estimated federal share of the project’s cost. The obligation makes these funds available to the state to draw upon as needed by the subgrantees. (For “small” projects—those with an estimated cost of less than $46,800—the entire amount may be provided by the state to the subgrantee immediately.) Generally, subgrantees request disbursements when bills for projects are due. If a subgrantee wishes to modify a project after a DSR is approved or experiences cost overruns, it must apply through the state to FEMA for an amended or new DSR. This procedure gives FEMA the opportunity to review the supporting documentation justifying the modification and/or cost overrun. FEMA’s regulations state that after all recovery activities for a particular disaster have been completed, the disaster is ready for closeout. (Before closeout, the disaster is considered to be “open.”) One aspect of the closeout is the state’s certification that all disbursements have been proper and eligible under the approved DSRs. FEMA does not specify what actions the state should take to enable it to make the certification. The agency’s public assistance manual states that inspections and audits can be used, and that the state plan should include procedures for complying with the administrative aspects of 44 C.F.R. parts 13 (grants management) and 206 (public assistance). The manual also notes that FEMA has no reporting requirements for the subgrantees but expects the grantees to impose reporting requirements on the subgrantees so that the grantees can submit the necessary reports. Most disasters stay open for several years before reaching the closeout stage. 
FEMA officials involved in the closeout process in the San Francisco, Atlanta, and Boston regions said that they review the states’ closeout paperwork to verify the accuracy of the reported costs, but they rely on the states to ensure the eligibility of costs. In commenting on a draft of this report, the Director of FEMA stated that FEMA conducts final inspections and project reviews to verify the actual eligible costs for large projects “in which the grantee is required to make an accounting to FEMA of eligible costs.” FEMA public assistance program officials generally believe that the states’ reviews are adequate to ensure that disbursements are made only for eligible items. However, FEMA’s Deputy Inspector General advised us that the quality of the states’ closeout reviews varies considerably from state to state and that he does not rely on the closeout reviews as adequate assurance that all costs charged against the DSR were proper, especially because many states’ disaster recovery personnel view themselves as advocates for the subgrantees. In addition to certifications by the states, independent audits can serve as a further check on the eligibility of items funded by public assistance grants. Audits of public assistance funds can be done by independent auditors in compliance with the Single Audit Act of 1984, by the FEMA OIG, and/or by the states’ audit organizations. However, the coverage of individual projects appears to be limited. FEMA may obtain additional assurances about the use of its funds from the audits of subgrantees conducted as part of the single audit process. State and local governments and nonprofit organizations that receive federal funds of $100,000 or more in a year must have a single audit that includes an audit of the entity’s financial statements and additional testing of the entity’s federal programs. 
The auditors conducting the single audits must test the internal controls and compliance with the laws and regulations for the programs that meet specified dollar criteria. Those criteria result in the largest programs, in terms of expenditures, being tested. The entities that receive $25,000 to $100,000 in federal assistance in a year have the option of having a single audit or an audit in accordance with the requirements of each program that the entity administers. The entities that receive federal assistance of less than $25,000 in a year are exempt from federally mandated audits. To the extent that subgrantees meet the audit criteria and FEMA’s programs meet the testing criteria, FEMA can obtain assurances about the use of its funds. However, in the absence of such audit coverage, FEMA must rely on the grant recipients to exercise effective monitoring activities or conduct its own monitoring efforts. For the 4-year period ending September 30, 1995, FEMA’s OIG received 219 Single Audit Act and OMB Circular A-133 audit reports, 17 of which questioned a total of $1.1 million in disaster assistance expenditures. Most of the reports received are audits of the states rather than of communities or other subgrantees. However, as we noted in a recent report, while Single Audit Act reports on grantees are required to be sent to the funding federal agency, reports on subgrantees are not so required, and many federal agencies thus do not receive reports on subgrantees even when they are prepared. FEMA’s OIG audits the recipients of public assistance funds on a selective basis and has identified inappropriate disbursements to recipients. For reports issued in the 6 fiscal years ending September 30, 1995, the OIG has questioned over $83 million in subgrantees’ public assistance costs. The OIG attempts to audit any disaster when asked to by the appropriate FEMA regional office, as staffing availability permits. 
However, the staff available to perform the audits is limited; the OIG has 8 full-time and 17 temporary or part-time employees in two district field offices. A great many subgrantees, and even entire disasters, are not audited by the OIG. Officials in the OIG’s Eastern District Office could not estimate their audit coverage but said that the number of subgrantees and DSRs they review varies from disaster to disaster. They felt that although many recipients, and even entire disasters, were not audited, a more significant percentage of the dollars was audited by focusing on where the large sums of money went. For example, although the officials had looked at only about 20 of the several hundred public assistance subgrantees for Hurricane Hugo, they believed those subgrantees represented about $200 million of the $240 million in public assistance costs (but could not confirm this estimate without a time-consuming review of their records). Officials in the OIG’s Western District Office said that less than 10 percent of the disasters receive some sort of OIG audit coverage. Overall, they believe that probably less than 1 percent of DSRs are covered. The states may also perform audits of specific subgrantees. Currently, California is the only state that has an arrangement with FEMA’s OIG to do audits that meet generally accepted auditing standards. (Audit coverage in California is disproportionately important relative to the other states, because in recent years California has received far more public assistance funds than any other state.) However, these audits have been temporarily discontinued while the responsibility for and control over such audits is negotiated between two state agencies. 
OIG officials said that the Office has attempted to negotiate similar audit coverage from other states, but none of them have agreed to do so, generally citing the difficulty of hiring and paying for the audit staff and keeping a sustained audit effort under way in light of the sporadic nature of FEMA’s disaster assistance. For the 6-year period from October 1, 1989, through September 30, 1995, FEMA’s OIG has reported on 203 subgrantees of public assistance funds, questioning over $83 million in federal funds charged against approved subgrantee DSRs. According to OIG officials, ineligible cost claims constitute most of the problems identified. These include claims of non-disaster-related damage, the use of labor and other rates that exceed FEMA-approved rates, the improper calculation of fringe benefits, inadequate documentation, and improper overtime charges. These types of improper charges can be discovered only through close scrutiny of the records, such as that provided by audits. Examples of questioned costs that did not conform to the approved DSRs included the following: One Florida community received $12.7 million to repair its electrical distribution system damaged by Hurricane Andrew. This amount included over $6 million in materials, all of which was paid for with a public assistance grant from FEMA. However, the auditors found that not all of the materials purchased were used in the repairs; much of the material remained in inventory. City officials agreed that the funds should be refunded. FEMA subsequently deobligated $1.2 million, over 9 percent of the total grant. A state utility in Puerto Rico was awarded $3.3 million in FEMA funds to cover damages and debris removal for several disasters, of which it had received $2.3 million at the time of the audit. The auditors subsequently found that the utility had a $64 million fund to cover uninsured losses. 
FEMA program officials agreed to send bills to collect the disbursed amount and deobligate the remainder of the approved award. A city in Indiana received $2.9 million in FEMA funds for debris removal, emergency services, and repairs resulting from an ice storm. However, the auditors found that the city had submitted claims for only $2.5 million of the $2.9 million provided by FEMA. FEMA’s OIG advised us that the nearly $400,000 difference was returned. Public assistance program officials in FEMA’s 10 regional offices identified a variety of options that, if implemented, could reduce the costs of the public assistance program. Among the options recommended most strongly were improving the appeals process; eliminating eligibility for some facilities that generate revenue, lack required insurance, or are not delivering government services; and limiting the impact of building codes and standards. Implementing these options might require amending the Stafford Act and/or FEMA’s regulations. Because available records did not permit quantifying the impact of each option on public assistance expenditures in the past, and because future costs will be driven in part by the number and scope of declared disasters, the impact on future public assistance costs is uncertain. We asked public assistance officials in FEMA’s 10 regional offices for their perspectives on the program. We sought their opinions because they are involved in the day-to-day operations of the public assistance program, giving them a high degree of expertise. Using a telephone survey, we asked the FEMA officials to identify options that could potentially reduce the costs of public assistance. In a follow-up mail questionnaire, we asked the respondents to rate each option to indicate how strongly they recommended implementing each. We also asked the respondents to elaborate on the options they recommended most strongly and to identify the potential obstacles to implementing them, where appropriate. 
We asked the National Emergency Management Association (NEMA), which represents state emergency management officials, to respond to the options that the FEMA officials generated because implementing many of the options would affect the states. Following are the options that the FEMA respondents rated most highly when considering changes to the eligibility criteria that could reduce the public assistance program’s costs. In addition to describing each option, we provide, where appropriate, examples related by the officials, the dissenting views of FEMA respondents, and NEMA’s views. We did not independently verify the accuracy of information that the officials cited in their examples. (A list of other options generated by FEMA regional officials appears in app. II.) Responding officials highly rated two options concerning the appeals process: (1) limit funding for temporary relocation facilities during appeals, because the appeals process can take several years (this option would be comparable to the insurance industry’s practice of calculating the maximum allowable costs for temporary relocation), and (2) limit the number of appeals to one or two. The rationale provided for the first option was that cost savings could be achieved by limiting both the length of time for which relocation costs are funded and the types of facilities eligible for relocation costs. FEMA currently funds the costs of temporarily relocating applicants to suitable quarters while their damaged or destroyed facilities are being restored. If a project is being appealed, the length of time that FEMA funds relocation costs may be extended until the appeal is resolved. One obstacle identified to implementing this option is that objective criteria defining appropriate time frames and usage would need to be developed. In its July 1995 report, the FEMA OIG noted that it is not unusual for the appeals process to take more than 2 years to complete. We found appeals taking more than 5 years. 
The OIG report stated that since relocation costs are not capped or limited to a specific time, they may provide a disincentive for applicants to resolve disputes. Two respondents suggested that temporary facilities often are used for years. Applicants may then use this time to maximize the gains from a lease-purchase agreement or to extend the length of time they are eligible to receive funding for relocation costs. In its July 1995 report, the OIG reported that following the Loma Prieta earthquake, repairs to the Oakland City Hall were in dispute for over 5 years. FEMA’s share of the temporary relocation costs for this time period was $31 million. The OIG reported that the relocation costs, which stemmed not only from the appeals process but also from disputes over damage survey reports, could have been reduced if limits had been placed on the time frame. For example, California State University at Northridge was in temporary quarters for 18 months after the earthquake; FEMA funded 90 percent of the monthly relocation costs of $300,000. During the 18 months, none of the University’s primary buildings with earthquake damage were repaired because of disagreements with FEMA about the required repairs. NEMA did not endorse this option. NEMA pointed out that the focus of concern about continuing costs during the appeals process should not be on eliminating or limiting relocation costs but rather on complying with the timelines for the appeals process established in FEMA’s regulations. The other appeals-related recommendation suggests that the appeals process could be truncated. As noted in chapter 2, FEMA’s regulations authorize three levels of appeal. The first appeal is to the FEMA Regional Director with jurisdiction over the geographical area in which the disaster occurs. If the Regional Director denies the appeal, the second appeal is to the Associate Director for Response and Recovery at FEMA headquarters. 
A final appeal may be submitted to the FEMA Director. The responding officials generally recommended limiting the number of appeals to two—one to the Regional Director and the other to either the Associate Director or the Director. The respondents stated that two appeal stages should be sufficient to fairly consider appeals. (Before 1988, appeals were limited to two stages: the Regional Director and the Associate Director.) According to the July 1995 OIG report, considerable federal staff time and money would be saved if the process were shortened. However, one FEMA respondent strongly disagreed. He stated that, in some instances, FEMA regional staff do not require as detailed a review as that required by FEMA headquarters staff during the second stage of the appeals process. In his opinion, the increased documentation requirements and field visits result in a more objective opinion than that achieved during the first stage of the appeals process. He added that because few appeals reach the third level, there is no need to eliminate it completely. (As noted in ch. 2, between January 1993 and the end of March 1996, FEMA logged 30 third-level appeals.) Furthermore, he estimated that nearly all appeals that go beyond the first level uphold the region’s decision; the increased documentation requirements confirm the region’s perspective and better demonstrate that the decision was reached objectively. Conversely, NEMA endorsed further consideration of this option. The responding officials recommended eliminating eligibility for revenue-generating private nonprofit organizations, such as utilities, hospitals, and universities, because these types of facilities may not serve the general public and may have alternate sources of income sufficient to repair disaster-related damage. 
As noted in chapter 2, a wide range of private nonprofit organizations have received public assistance funding, including day-care facilities, community centers, utilities, hospitals, and educational facilities. In July 1995, the OIG reported that since the passage of the Stafford Act, FEMA has provided nearly $400 million in public assistance for private nonprofit organizations. FEMA funded nearly 90 percent of that amount to utilities, hospitals, and schools. These types of facilities often generate revenue. The respondents stated that such revenue-generating facilities potentially have alternate sources of income to independently repair disaster-related damage. For instance, schools can increase tuition, and utilities can raise rates or obtain loans. One rationale for this option is that revenue-generating private nonprofit organizations may not provide a service accessible to the general public since they often charge competitive fees for service. The respondents cited Stanford University and Los Angeles’ Cedars Sinai Hospital as examples of private nonprofit organizations that have alternative sources of income and that may not serve the general public. One responding official disagreed that this eligibility criterion should be changed. He stated that some revenue-generating private nonprofit organizations generate revenue to meet their operational costs and may not have sufficient revenue to cover disaster-related costs. NEMA did not endorse this option, observing that utilities and hospitals provide vital services both during responses to disasters and during nondisaster times. NEMA also noted that the fact that a private nonprofit organization generates revenue does not necessarily mean that it would not face a financial hardship in recovering from a disaster. 
If these private nonprofit organizations were eliminated from eligibility, the general public would still bear the brunt of the recovery expenses through higher fees for the services provided by the facilities. This approach would, according to NEMA officials, simply shift the burden from the federal government back to the general public. The regional respondents recommended eliminating eligibility consideration for disaster assistance—either completely or by transferring it to the Department of Agriculture (USDA)—for water control projects that do not provide public benefits, for example, those that primarily protect and/or drain unimproved private property—typically farmland—and that are owned by only one or a few farmers. They recommended transferring eligibility for federal funding for water control projects, such as drainage and levee districts, to USDA because the projects tend to be agricultural or rural facilities, generally established to protect farmland from flooding. The USDA’s Natural Resources Conservation Service has offices in most counties and works regularly with the drainage and levee districts. (Furthermore, as noted in ch. 1, USDA’s existing Emergency Watershed Protection program funds, among other things, a portion of the cost of repairing certain nonfederal levees and other water control works damaged by flooding.) Therefore, according to one respondent, it is more logical for USDA’s Natural Resources Conservation Service, which has the historical maintenance and operational expertise that FEMA lacks, to provide assistance for these water control projects. One respondent suggested that while it may not be apparent that federal cost savings would occur by transferring eligibility consideration to another federal agency, the potential for cost savings does exist. He explained that USDA has limited funding for repairing water control projects and therefore has a priority system. 
While FEMA provides funding to all eligible water control projects, USDA might not necessarily be able to provide funding to all that have suffered damage. The respondent pointed out that while savings might be realized, some special districts that are currently eligible might lose their eligibility for FEMA’s assistance. Several respondents mentioned that special districts would prefer FEMA’s assistance to USDA’s assistance because, for instance, FEMA generally provides larger amounts of funding than USDA and provides the funding more rapidly. An alternate option raised by some respondents was to eliminate eligibility for federal grants for special districts that do not provide a public service. In some instances, special water control districts are established by only one or a few farmers to protect their own farmland. Several respondents suggested eliminating eligibility for those special districts that could not demonstrate that they provided public benefits, such as protecting improved property. An example of improved property is an area where there are a substantial number of residences, such as urban areas. Two examples of special districts in urban areas are (1) the countywide flood control districts in Arizona and (2) the Denver Urban Drainage District, which integrates water-related activities among all the jurisdictions surrounding greater Denver. One respondent explained that special districts in rural areas generally do not address health and safety threats because the drainage ditches are usually miles from residential areas. The financial impact of funding disaster assistance for special water control districts can be great. For example, in Iowa alone, following the Midwest floods of 1993, the federal share for the 80 drainage districts that applied for FEMA’s assistance was about $7.5 million. 
One obstacle that the respondents identified to eliminating eligibility for special districts that do not provide a public service would be establishing an objective and clear definition of “special district” and “providing a public service.” NEMA concurred that special districts that do not provide a public service could be eliminated from eligibility but stressed the need for clear definitions. The Association of State Floodplain Managers, which represents over 3,000 state and local floodplain managers, also concurs with this option provided that it applies solely to districts that deal with agricultural protection. The Association also cited the need for a clear definition of “special district.” As noted in chapter 2, building codes and standards significantly affect the costs of public assistance; the decision on which standards are “applicable” to a permanent restoration project greatly influences its cost. Seismic code upgrades have proven to be particularly costly. Over the years, one issue that has been debated is whether to reconstruct to the codes and standards in place at the time of the disaster or to higher codes and standards to mitigate future damage. The respondents cited three interrelated options concerning codes and standards as strong candidates for change: (1) limit federal funding to the eligible cost of upgrading only the parts of the structure damaged by the disaster, with applicants bearing the expense of upgrading undamaged parts of the structure; (2) tighten the wording on codes and standards to define what entity, such as a state or local government, has the authority to adopt and approve codes and standards; and (3) limit the time after the disaster during which new codes can be adopted. The respondents suggested that only the damaged portions of facilities should be eligible for upgrading. 
The regulations authorize the upgrading of facilities to current codes and standards when the pre-disaster condition of the facilities does not conform with current standards. According to the FEMA OIG’s July 1995 report, FEMA program officials estimate that the majority of upgrading costs are more than 500 percent of the cost of repairing actual disaster damage. In many cases, the total eligible costs far exceed the actual repair costs because of triggers that require upgrades to major systems throughout the structure as well as costly items such as asbestos removal. The FEMA respondents suggested that code upgrades should be limited to the parts of the structure damaged by the disaster. The expense of upgrading undamaged parts would be borne by the applicants. Upgrading significantly raises the cost of public assistance in large disasters: In the Northridge earthquake, seismic standards, in some instances, required upgrading undamaged portions of disaster-damaged structures. NEMA did not support the implementation of this option, pointing out that limiting repairs to the damaged portions of facilities would not be a cost-effective approach to spending federal tax dollars. NEMA stated that the federal government must comply with codes and standards and cannot pick and choose what parts to recognize. For example, the undamaged portions of a structure are generally part of the force-resisting system. If that system is not upgraded to the same standards as the rest of the system, there is a likelihood of a weak link that would fail in future disasters. One FEMA respondent generally agreed with the NEMA perspective. He stated that FEMA should enforce local codes and ordinances when there is a history that those codes and ordinances were being enforced prior to the disaster. FEMA respondents cited a need to better define who has the authority to adopt and approve codes and standards. 
As noted in chapter 2, to be considered “applicable,” written building codes and standards must be formally adopted by the jurisdiction in which the facility is located, or be a state or federal requirement. The codes and standards do not necessarily have to be in effect at the time of the disaster. Following the Northridge earthquake, a decision on assistance for restoring damaged hospitals was delayed for 2 years because of a dispute over which standards were applicable: those promulgated by the California Office of Statewide Health Planning and Development (the Health Office) or the standards in the California Building Code. FEMA officials stated that the Health Office did not have the authority to amend the state code. FEMA determined that one hospital was eligible for $3.9 million, the amount required to repair the building in a manner consistent with the state code. As the grantee, California argued that the hospital was eligible for $64 million; FEMA, after reviewing the request for additional funding, offered $6.8 million for repairs and upgrades. The Health Office’s standards would have required demolishing and replacing the hospital. On December 6, 1995, the FEMA Director announced that the agency would provide funding for the hospitals using discretionary authority to fund mitigation measures. On March 12, 1996, FEMA announced that it would provide nearly $1 billion in federal funds to repair or replace four hospitals damaged by the Northridge earthquake. The hospital cited above will receive $29.3 million. The respondents to our survey suggested that clarifying the language in the regulations to define what entity has the authority to adopt and approve codes and standards might reduce the confusion that surrounds this issue and the costs. FEMA’s regulations state that building standards can be adopted by the applicant up to the time FEMA approves a project. 
In some instances, especially in catastrophic disasters such as earthquakes, projects are not approved for years. According to FEMA, such delays may be attributable to insurance questions, environmental reviews, or reviews required by the National Historic Preservation Act. FEMA respondents suggested that the regulations should be revised to limit the length of time after the disaster during which codes can be adopted. The respondents had varying views on what the time limit should be, but they generally agreed that some limit would be useful. The suggestions ranged from about 1 month to about 1 year after a disaster occurs. The respondents generally agreed that the limit should give sufficient time to allow the codes in place at the time of the disaster to be evaluated and strengthened to mitigate against future damage but should not provide an opportunistic window for applicants to gain the maximum amount of federal funding. According to one respondent, allowing the applicant more than a few months to adopt new codes leaves too long a window for opportunism. In his opinion, the costs of FEMA’s public assistance for the Northridge earthquake and the 1993 Midwest flood were higher because the applicants adopted new standards after the events, but before FEMA approved specific projects. The respondent explained that because FEMA lacks a clear and consistent internal policy on codes and standards, its interpretation of eligibility is subjective and not completely accountable. Another respondent suggested limiting the time because codes are always changing, which makes it difficult to determine which codes are applicable. The codes may change as a result of a number of factors, including changes in technology and the identification of new degrees or kinds of hazards. NEMA endorsed this option for further consideration. However, one FEMA respondent did not completely concur.
He stated that although he was not opposed to a time limit, that limit would have to allow communities sufficient time to fully explore and adopt the most appropriate codes for their highest risks. Furthermore, he stated that FEMA should develop acceptable minimum codes for each type of peril. As in the flood insurance program, public entities should be expected to build to those codes and carry sufficient insurance. If the public entities did not comply, they would be penalized, e.g., the amount of the award would be reduced by the amount of insurance coverage that should have been provided, or no DSRs would be signed until new codes were adopted. Several respondents recommended revising FEMA’s regulations to disallow the adoption of codes after the disaster occurs. Funding would be limited to repairing the damaged facility to comply with the codes and standards in effect at the time of the disaster occurrence. Respondents recommended two options related to insurance: Require insurance for public entities when insurance is reasonably available. Reduce or eliminate eligibility for facilities that are not at least partially covered by reasonably available hazard insurance. The regulations provide that FEMA will provide assistance only once before the applicant is required to purchase and maintain insurance against future loss. Applicants are required to commit to purchase and maintain insurance in the amount equal to the eligible damage if the damage exceeds $5,000. The regulations state that future assistance will be contingent upon this commitment. In some instances, FEMA has waived the insurance requirement and has provided funding as a result of damage from a recurring similar disaster. The responding officials recommended adherence to the regulations, which require that applicants purchase and maintain insurance after FEMA provides initial funds. 
One respondent recommended that in those cases where insurance has not been purchased after FEMA has provided funds and similar disaster-related damage recurs, FEMA should subtract the limit of available insurance from its grant. Another said that because FEMA has authorized waivers to the insurance requirement, public entities may lack the incentive to purchase insurance. One responding official stated that he did not believe this eligibility criterion needed revising because he was not aware of waivers being authorized. The respondents suggested reducing or eliminating eligibility for facilities for which at least partial earthquake, fire, and extended hazard insurance is reasonably available, even if full coverage is not. State insurance commissioners are authorized to determine whether or not insurance is reasonably available. If the commissioner deems insurance not to be reasonably available, FEMA waives the requirement for insurance coverage on public facilities. The respondents recommended requiring partial coverage rather than waiving the requirement for full coverage. In discussing this option, the responding officials also suggested that the criteria for flood insurance and insurance against damage from disasters other than floods be applied consistently. The Stafford Act requires the purchase of flood insurance as a condition of receiving public assistance in flood-prone areas. If a facility is located in a flood-prone area, is damaged by flooding, and is not covered by flood insurance, the amount of assistance that would be available from FEMA is reduced. However, the Stafford Act does not require insurance against damage by disasters other than floods until after FEMA has already provided funding under a prior disaster declaration. 
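The grant-reduction approach recommended above is, at bottom, simple arithmetic: subtract the insurance limit the applicant should have carried from the otherwise eligible damage. A minimal sketch of that calculation follows; the function name and dollar figures are hypothetical illustrations, not drawn from FEMA guidance.

```python
def adjusted_grant(eligible_damage, required_insurance_limit, carried_insurance):
    """Sketch of the respondent's suggestion: when required insurance was not
    purchased and similar damage recurs, subtract the limit of insurance that
    should have been available from the new grant. Illustrative only --
    not FEMA's actual formula."""
    uninsured_shortfall = max(required_insurance_limit - carried_insurance, 0)
    return max(eligible_damage - uninsured_shortfall, 0)

# A facility with $500,000 in eligible damage that should have carried
# $300,000 in insurance but carried none would receive $200,000.
print(adjusted_grant(500_000, 300_000, 0))  # 200000
```

Under this sketch, an applicant that carried the full required coverage would see no reduction, while one that carried nothing would absorb the entire shortfall itself.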
The responding officials suggested that where coverage is reasonably available, public entities should be required to have insurance coverage for all types of disasters before a disaster occurs rather than after FEMA has provided funding. NEMA endorsed for further consideration the options of eliminating waivers and requiring partial coverage. The respondents identified three interrelated options that would restrict or eliminate eligibility for facilities that are used for purposes other than the direct delivery of public services: Eliminate eligibility for facilities that are owned by redevelopment agencies and are awaiting investment by a public-private partnership. Such facilities are usually abandoned and unoccupiable. Restrict eligibility of public facilities to those being actively used for public purposes at the time of the disaster. Eliminate eligibility for publicly owned facilities that are being rented out to generate income. For example, facilities owned by local governments and rented to the private sector for use as warehouses, restaurants, stadiums, etc., would not be eligible. The respondents contended that some facilities, such as those that are abandoned or leased to a private vendor who is generating income from them, should not receive FEMA funding. They suggested that revenue-producing properties and investment properties could be insured by their owners. One issue raised was that the Congress did not contemplate eligibility for redevelopment properties because they are speculative properties, serve no public purpose at the time of the disaster, and are generally unoccupiable or abandoned. The respondents provided this example: The Williams Building had been owned by the San Francisco Redevelopment Agency since the mid-1980s when it was damaged by the Loma Prieta earthquake. At the time of the earthquake, more than half of the building was vacant. 
The portion that was not rented would have required considerable repair to lure prospective tenants. Although no essential government services were being provided in the facility, FEMA funded nearly $7 million for this building, including $2 million for structural stabilization. Currently, the building is unusable. The Redevelopment Agency has requested, and FEMA has approved, the option of using eligible funds for an alternate project. NEMA stated that eliminating eligibility for facilities owned by redevelopment agencies may be reasonable, especially if the facilities were abandoned at the time of the disaster. The respondents generally agreed that public facilities that are leased to the private sector, which in turn generates income that may not be returned to the government, should be ineligible for public assistance. Examples of such facilities include warehouses, restaurants, and stadiums. According to the OIG’s July 1995 report, such facilities have the ability to generate funds, independent of tax revenues, for the repair of disaster damage. The respondents recommended eliminating eligibility for public facilities that are leased to concessionaires who generate income because they, like redevelopment properties, do not provide a critical government service. In addition, they stated that the concessionaires often generate sufficient income to carry insurance against disaster losses or to repair damages. Several examples follow of public facilities that were leased to the private sector but received public assistance from FEMA: The Port of Oakland operates 30 ship berths that are leased to private operating companies. It also has authority for the Oakland International Airport. Total disaster funding following the Loma Prieta earthquake was over $35 million. Pier 45 was owned by the Port of San Francisco and leased out to private fish-processing companies. 
It was also leased out for occasional activities, such as the Italian Festival, attended by thousands of people. Although no essential public services were provided on Pier 45, FEMA funded about $9 million to repair the facility, which was leased to private vendors who generated income. The Gilroy Old City Hall is owned by the City of Gilroy but was not used as the city hall. It had been converted to a restaurant and meeting facility. At the time of the earthquake, the restaurant was not being used because of ongoing renovations. The total funding from FEMA for Gilroy’s Old City Hall as a result of the Loma Prieta earthquake was more than $2 million. The Los Angeles Coliseum serves as a major source of entertainment for the greater Los Angeles community. The facility hosts revenue-generating events, such as professional sports events. It suffered extensive structural and cosmetic damage as a result of the Northridge earthquake, and damage survey reports have been written for about $91 million. One respondent strongly disagreed with this option. He stated that it is becoming increasingly common for local governments to lease facilities to concessionaires as a means of reducing the cost of delivering government services and increasing tax revenues. He stated that concessionaires should be responsible for carrying insurance on the contents of their business enterprise but not on the facility itself. NEMA generally concurred that facilities that do not provide a public service should be ineligible. However, the President of NEMA noted that clear definitions and guidelines would need to be developed to distinguish between eligible and ineligible facilities. The respondents recommended eliminating or reducing eligibility for facilities when the lack of reasonable pre-disaster maintenance contributes to the scope of damage from a disaster. 
According to these officials, in some cases eligible applicants have not adequately maintained facilities before a disaster occurs, due, for example, to budget shortfalls. These facilities may be more likely to be damaged as a result of a disaster. The issue raised is whether taxpayers should pay for repairs to facilities that are structurally deficient before the disaster. One respondent said that there is a nationwide trend for local governments to insufficiently maintain facilities. As a result, when a disaster occurs, the damage those facilities sustain is more serious, and therefore more costly to repair, than it would have been had the facilities been maintained. For example, one respondent noted that during a hurricane of moderate intensity, an entire roof of a facility blew off because it had been improperly attached. Other nearby facilities were not damaged. Had the roof on the seriously damaged facility been properly maintained, the need for federal assistance might have been reduced, if not eliminated. NEMA officials and one FEMA official noted the need for clear definitions and sufficient guidelines to objectively determine eligibility. Under FEMA’s regulations, applicants are eligible to receive credit toward the local share of the costs of public assistance for volunteer labor and donated equipment and material. The respondents recommended eliminating credit for these items, with the rationale that there is no cost to the applicant. The responding officials stated that it is difficult to establish reasonable costs (dollar values) to be applied to this credit. For example, one stated that experience has shown that the volunteer credit allowance has proven to be a very time-consuming process and relies almost exclusively upon the subgrantees’ estimates of the number of volunteers involved, hours worked, and material utilized. As the subgrantees incur no out-of-pocket cost, they do not accurately track volunteer labor and donated material and equipment.
Therefore, they are often unable to provide accurate documentation to support their claims. One respondent noted that the volunteer allowance provides an opportunity for duplication of federal funding in cases where direct costs and materials are commingled with volunteer labor and donated material and equipment, since it is difficult to distinguish between the two. FEMA respondents indicated that this allowance was most liberally applied during the Midwest floods. Floods, because of their longer-term nature in flat areas, lend themselves to volunteer labor, such as sandbagging, which occurred extensively during the Midwest floods. FEMA’s records indicate that nearly $1.4 million was obligated for volunteer credits in Iowa in response to the Midwest floods. FEMA officials explained that this allowance is not unique to FEMA. It is contained in OMB Circular A-87, which authorizes all executive agencies to use the value of donated services to meet cost-sharing requirements. The allowance generally may not be modified by an individual agency. One respondent acknowledged that the allowance does result in increased federal administrative costs, but he stated that the public benefit of assisting some cash-strapped local governments to meet their share of costs outweighs the increase in administrative costs. As noted in chapter 2, FEMA’s policy authorizes replacing disaster-damaged public facilities when the repair cost exceeds 50 percent of the replacement cost. The responding officials suggested raising the percentage of damage required for FEMA to replace a structure (rather than repair it) to a higher threshold, for example, 80 percent. The respondents said that the 50-percent threshold is not based on prudent use of federal tax dollars. For instance, the undamaged portions of bridges may be replaced. Bridges have two abutments—one at each end.
If one abutment needs to be replaced as a result of disaster damage, the costs will likely border on the 50-percent threshold. In that case, the entire bridge will be replaced. However, if the threshold were higher, only the damaged abutment would be replaced—not both abutments. Other organizations have higher replacement thresholds—for example, insurance companies, according to one respondent. FEMA’s Inspector General noted that when insurance companies weigh the costs of repair versus replacement, they repair the facility whenever repair is less expensive than replacement. Other federal agencies also have higher thresholds. For example, the Department of Transportation and HUD require that replacement be more cost-effective than repair. The Inspector General identified, as an option for reducing the costs of public assistance, revising FEMA’s regulations to raise the threshold repair cost that triggers the replacement of a public facility. One respondent offered a different perspective. He stated that FEMA had already taken steps to control replacement costs when the agency clarified this policy in June 1995. The revised policy states that the 50 percent should be calculated on the actual costs of the disaster damage—exclusive of the cost of, for example, seismic upgrading, plumbing, heating, asbestos removal, mitigating against future damage, and other nonstructural repairs. Before the policy was clarified, these types of costs had been considered in repair cost calculations. According to this respondent, the clarified policy does not require additional revision because, although it does not address the 50-percent threshold, it will likely save substantial federal outlays. The Association of State Floodplain Managers saw merit in raising the percentage, provided that it does not apply to buildings insurable under the National Flood Insurance Program.
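The repair-versus-replace question discussed above reduces to a ratio test. The following sketch assumes a simple percentage comparison; the function name and dollar figures are invented for illustration, and under the clarified June 1995 policy the repair cost used in the comparison would exclude upgrade items such as seismic work and asbestos removal.

```python
def replace_facility(disaster_repair_cost, replacement_cost, threshold=0.50):
    """Return True when the repair cost exceeds the threshold share of the
    replacement cost, i.e., when the facility would be replaced rather than
    repaired. Illustrative sketch only, not FEMA's actual formula."""
    return disaster_repair_cost > threshold * replacement_cost

# Repairing one damaged abutment costs $510,000 against a $1,000,000
# replacement cost: at a 50-percent threshold the whole bridge is replaced,
# but at an 80-percent threshold only the abutment is repaired.
print(replace_facility(510_000, 1_000_000))                  # True
print(replace_facility(510_000, 1_000_000, threshold=0.80))  # False
```

The sketch makes the respondents' point concrete: raising the threshold changes the outcome only for facilities whose damage falls between the old and new cutoffs, such as the bridge example above.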
In commenting on a draft of this report, FEMA noted that revising the damage threshold for public assistance eligibility would have no effect on the requirements of the National Flood Insurance Program or local floodplain regulations. NEMA did not completely concur with revising the 50-percent replacement rule, stating that the rule is a cost-effectiveness test for deciding if federal money is better spent in repairing or replacing a damaged facility. NEMA warned that arbitrarily raising the threshold would result in an invalid test of cost-effectiveness and suggested that a true measure would be a sliding scale taking into account the age of the facility, the economy of the surrounding community, and the function of the facility. As noted above, NEMA endorsed for further consideration many of the options most strongly recommended by FEMA respondents. However, the President of NEMA questioned whether public costs would be reduced by the options identified by FEMA respondents, noting that costs could be shifted from the federal level to the state level and not necessarily reduced. NEMA proposed that considerable savings in the federal costs of public assistance could be realized by reducing the federal administrative structures. NEMA also endorsed for further consideration the following options, identified but not most strongly recommended by FEMA respondents: Eliminate eligibility for postdisaster beach renourishment, such as pumping sand from the ocean to reinforce the beach. Limit the scope of emergency work to the legislative intent. (NEMA believes that assistance for debris removal and emergency protective measures has been used for permanent repairs.) Eliminate eligibility for revenue-producing recreational facilities, e.g., golf courses and swimming pools. 
The rationale that NEMA provided for eliminating eligibility for postdisaster beach renourishment is that it is prohibitively expensive, provides only temporary relief, and encourages the development of oceanfront property, which makes that property vulnerable to future flooding. Seven of the 10 FEMA respondents also recommended implementing this change. One noted that, like other water control projects, beach renourishment could be handled by USDA or the Corps of Engineers. As noted in chapter 1, the regulations provide for the eligibility of emergency work and permanent restoration work. The purpose of emergency work, i.e., debris removal and protective measures, is to eliminate or lessen immediate threats to life, public health, and safety. Permanent restoration work is a longer-term process that involves restoring the damaged facilities to their pre-disaster condition. NEMA stated that the scope of emergency work is not always interpreted consistently. According to NEMA, one obstacle to implementing this option is that “temporary” would need to be clearly defined and the legislative intent would need to be thoroughly explored. NEMA advised that federal regulations must not conflict with or limit the authority of the code enforcement agency in the legally binding determination of temporary repair. Six of the 10 FEMA respondents concurred that this option should be implemented. The option of eliminating the eligibility of revenue-producing recreational facilities involves the issue that recreational facilities may not represent an essential component of a community because they may not serve a purpose related to health and safety. According to the July 1995 OIG report, recreational facilities, such as golf courses and tennis courts, could be said to fall into the “nice to have” category since many fully functional communities do not have them. 
Furthermore, as discussed earlier, revenue-generating facilities may have an alternate source of income for repairing disaster-related damages. NEMA noted that one obstacle to eliminating revenue-producing recreational facilities is that a clear definition of “revenue-producing facility” would need to be developed. Other eligible government facilities besides recreational ones produce revenue and could be determined ineligible without a clear definition. In addition, according to NEMA, in certain instances, a revenue-producing recreational facility may play a critical role in the economic redevelopment of a stricken area. Five of the 10 FEMA respondents also supported implementing this option. FEMA has already eliminated from eligibility private nonprofit organizations providing recreational services since they do not provide an essential governmental service. FEMA public assistance officials identified a number of options that they believe could help reduce future public assistance costs. A number of their recommendations are consistent with options proposed by FEMA’s Inspector General, with GAO’s past work, and with our current review. Furthermore, the options highlight a number of instances in which the existing eligibility criteria need to be clarified or strengthened with additional guidance, as we recommended in chapter 2. We recommend that the Director of FEMA determine whether the options identified in this chapter should be implemented and, if so, take actions to implement them, including, if necessary, proposing changes to legislation and/or FEMA’s regulations.

Pursuant to a congressional request, GAO reviewed the Federal Emergency Management Agency's (FEMA) public assistance program, focusing on: (1) its procedures for determining eligibility for public assistance; (2) its efforts to ensure that funds are spent in accordance with authorized work; and (3) options to modify FEMA eligibility criteria.
GAO found that: (1) while FEMA must fund the restoration of eligible facilities in accordance with applicable building codes, it is unclear whether the building codes that existed before the disaster or at the time of restoration should apply; (2) the eligibility of certain private, nonprofit facilities that provide essential governmental services to the general public is unclear; (3) without clear eligibility criteria, FEMA cannot control program costs or ensure consistent eligibility determinations; (4) eligibility determinations were not systematically codified and disseminated to FEMA personnel; (5) FEMA relies on states, independent audits, and its inspector general to ensure that federal funds are only spent on eligible restorations; and (6) changing various eligibility requirements in accordance with FEMA program officials' recommendations could reduce FEMA public assistance program costs.
CCRCs represent one form of managed care for the elderly. Many CCRCs have managed both acute medical and long-term care services for the elderly for decades. CCRCs plan, administer, and often provide these services, in combination with housing and other services, frequently in a campus-like setting. The number of residents in a CCRC varies, but averages about 300, most of whom are elderly people leading active lifestyles and living in independent housing units. Some residents receive personal care, such as assistance in bathing and dressing, either in their own residential units or in special assisted living units, and some receive skilled nursing facility care. Residents may also receive physician, laboratory, and other care on site. Expenses for these and other medical services are reimbursable by Medicare on the same basis as for the elderly who do not live in CCRCs. CCRCs assess prospective residents’ health and financial status to ensure a fit with services offered and required fees. Residents commonly pay an entry fee to join the community and a monthly fee thereafter. These fees vary considerably depending on factors such as the level of CCRC financial risk for long-term care services, the size of the residential unit chosen, whether fees are for single individuals or couples, and the kinds of additional services and amenities provided. (See app. II for a description of the different financial risks CCRCs assume.) In the 11 CCRCs we visited—all of which assume residents’ risk for long-term care costs—entry fees ranged from a low of $34,000 for a studio apartment for one individual to a high of $439,600 for a two-bedroom home for a couple. Monthly fees in the 11 communities ranged from $1,383 for an individual to $4,267 for a couple. The CCRCs we visited use a variety of practices for health promotion, disease prevention, and early detection of health problems to help residents maintain their health and functioning. 
These practices are part of an approach to care that encourages CCRC residents to adopt or maintain a lifestyle that is believed to promote good health. Providing activities and services, usually on site, encourages residents to take advantage of them. Many of the CCRCs we visited promote good health for their residents by encouraging exercise, proper nutrition, and social involvement. Encouraging regular exercise is a common practice that CCRCs we visited use to maintain or improve residents’ health and functioning. CCRC efforts include having swimming pools and fitness equipment on site, providing staff for exercise programs, and sponsoring lectures and information on the value of exercise. Exercise classes and activities include aerobics, flexibility and strength exercises, swimming, yoga, lawn bowling, and square dancing. Residents may participate through a formal program or on an informal basis. Several CCRCs also strongly encourage walking. The campus-like designs of some CCRCs encourage walking by locating residential buildings within walking distance of commonly used services. Some campuses also incorporate nature trails or other attractive walks. Another common health promotion practice at CCRCs we visited is the encouragement of proper nutrition. Residents at many of these CCRCs are offered three meals a day in common dining rooms, which encourages adequate consumption of healthy foods. Some CCRCs require residents to have at least one of their meals each day in these settings. For other meals, residents may cook at home or eat elsewhere. The foods offered and nutrition information provided encourage residents to eat appropriately for weight and other health considerations. Special diets may be provided. At most of the CCRCs we visited, dieticians are often available for consultation and can help residents develop individual diet plans.
CCRC officials told us that on-site dietary counseling and nutritionally balanced meals in congregate, attractively decorated dining areas help encourage adequate nutrition and healthy eating habits. Encouraging residents to interact socially is also a common practice among the CCRCs we visited. CCRC officials told us that they encourage interaction because social isolation is associated with poorer health and functioning among the elderly. They also said that the physical layout of CCRCs fosters social interaction and is an integral part of the CCRC model. Residents live next door to each other and may see each other frequently through visits or while eating in congregate settings, checking mail, and engaging in a wide range of CCRC activities. Recreational, educational, cultural, and volunteer activities are frequently initiated, planned, and organized by residents. Officials said that arranging and participating in these kinds of activities is an important part of residents’ social interaction in the community. Activities may include on-campus lectures, movies, musical performances, woodworking, flower arranging, photography, and civic and charitable activities. Many of the CCRCs we visited attempt to maintain their residents’ health and functioning through disease prevention and early detection of health problems. These activities are carried out by nurses, social workers, and physicians who may be either affiliated with or independent of the CCRC. Most CCRCs we visited encourage immunizations against common preventable diseases, such as flu and pneumonia, to reduce illness and possible fatalities. They may encourage immunization in a number of ways, including inoculation clinics, seminars, distribution of printed materials, and reminders from medical staff when a resident makes an outpatient visit or has a medical examination. Most of the CCRCs we visited encourage early detection of health problems through periodic medical exams and other health assessments.
CCRC officials told us that these exams and assessments help staff and residents to be more proactive in using effective medical treatments and changing lifestyles to slow or reverse the loss of good health and function. A combination of physicians, nurse practitioners, and social workers may conduct elements of these exams and assessments, which may include periodic inventories of prescription drugs used by a resident to assess potential unwanted side effects from drug interactions, examination of an individual’s ease in walking or getting out of a chair, and observation of changes in an individual’s mental state. CCRC medical exams may include testing blood pressure for hypertension and blood glucose levels for diabetes. They may also include tests for colon, breast, and prostate cancer as well as vision and hearing impairments. Residents’ medical records and staff are usually on site, making the periodic exams and assessments convenient for residents. The CCRCs we visited typically encourage periodic medical exams through seminars, written materials, and reminders such as notices sent to residents on their birthdays asking them to schedule an exam. Some CCRCs follow up by telephone or other means when residents do not schedule or appear for medical exams. If a resident does not come for an exam after follow-up, some CCRC officials told us that this information is tracked and an exam conducted when the resident next comes in for outpatient care because of illness. CCRCs we visited use a multidisciplinary, coordinated approach to manage care for their residents with chronic conditions such as hypertension and heart disease. Essential elements of this approach include a wide range of on-site services, coordination of services to ensure residents receive them in an appropriate and timely manner, and active monitoring of residents with chronic conditions. 
The prevalence of chronic conditions increases substantially with age, and CCRC officials told us that properly managing these conditions helps maintain residents’ functioning while delaying or reducing use of costly services such as hospital care. CCRCs we visited offer a wide range of services on site to manage care for residents with chronic conditions. These services may include primary health care, care by specialists, skilled nursing care, and laboratory testing. Other services may include physical therapy, social work, personal care, dietary counseling, home chore service, and transportation. Various combinations of services may be provided across a range of settings, including an outpatient clinic, a skilled nursing facility, or a resident’s own home. In addition, some of the CCRCs we visited adapt their health promotion and wellness programs to help meet the needs of residents with chronic conditions. For example, they may modify a regular exercise program to help people with arthritis retain the ability to walk. Similarly, these CCRCs may encourage and help those with chronic conditions to continue regular social interaction through special arrangements. For example, a resident who can no longer walk to recreational events and congregate eating areas may be provided with an electric cart so that he or she can remain independent. CCRC officials told us that having a wide range of services on site makes it possible to manage most of the care of residents with chronic conditions within the community even when the needs are intense. CCRC officials said that residents need emergency room care less frequently and, when admitted, require fewer days of hospital care because they have access to physicians, nursing care, and other services at the CCRC. 
The availability of a skilled nursing facility where residents can easily be admitted from the hospital or from home for short stays may also help return residents more quickly to their homes, according to these officials. CCRCs we visited typically coordinate services to enhance their benefit for residents. CCRC staff coordinate various services provided by both CCRC staff and other providers whether on site or off. For example, a CCRC may coordinate an arthritic resident’s pain relief medication, specialized exercise program, home modifications, the availability of walkers or other ambulatory aides, and periodic assistance with dressing or bathing to help the resident stay as functional as possible and to reduce or delay the use of more intensive services. Multidisciplinary teams may facilitate coordination through joint team assessments and the development of a plan of care. Teams meet regularly to reassess needs and services. CCRC officials told us that nursing staff generally serve as the focal point for convening teams and providing ongoing coordination of services between team meetings. Some CCRC officials said that nursing and social work staff usually have day-to-day responsibility for coordinating services and troubleshooting when problems arise. CCRC officials told us that they actively monitor residents with chronic conditions. Staff oversees the plan of care developed for each resident with chronic conditions to ensure that the resident is receiving needed services. Monitoring can include simply verifying that a resident has visited the clinic as prescribed or kept a scheduled appointment with the physical therapist. Or professional care staff may review medical records, visit or call the resident at home, or call other service providers to verify that care was received. Frequent monitoring is necessary in some cases because a resident’s physical and mental condition can change quickly and require different services. 
For example, CCRC staff may check in more frequently when episodes of pain impair an arthritic resident’s ability to walk or dress unassisted. CCRC officials told us that nonmedical staff and the residents themselves can also be important in the monitoring process. Some CCRCs we visited train food services staff, residential and grounds crews, and other staff to recognize potentially serious problems that residents may have and to report this information to clinical or social work staff. For example, a housekeeper may inform clinical staff that an individual with some memory loss has burned pots on the stove or that a resident with arthritis is unable to get out of bed on a particular day. In addition, some CCRCs encourage residents to notify them when they see or suspect that another resident may need assistance. In some CCRCs, buddy systems are developed in which two residents agree to contact or watch out for each other regularly. When problems are reported, clinical staff call or visit residents to investigate and respond as needed. Many of the practices we identified in CCRCs for health promotion, disease prevention, and early detection of health problems are credited by experts and the literature with reducing the risk of disease and disability and improving health and functioning among the elderly. Among the measures considered to be effective are regular physical exams that include screening for early detection of conditions such as hypertension, colon cancer, breast cancer, and vision and hearing loss, and immunization against flu and pneumonia. Education and counseling to encourage exercise and proper nutrition are also recommended. Regular aerobic or conditioning exercise reduces the risk of coronary heart disease, diabetes, and obesity, and exercises to improve strength, flexibility, and balance may reduce the risk of falls and fractures. 
Encouraging social interaction may also reduce isolation, which is associated with poorer health and functioning among the elderly. The coordinated, multidisciplinary approach to chronic disease management used by the CCRCs we visited is also consistent with the recommendations of geriatric care experts and is supported in the literature as effective in slowing the progression of disease and restoring loss of function. Multiple interventions are often used in managing many chronic conditions that are common among the elderly, such as hypertension, cardiovascular disease, and arthritis. These methods may include drug therapy, physical and occupational therapy, behavior modification, counseling, and use of special medical equipment. Experts told us that because care for older people with chronic conditions may involve many modes of treatment and disciplines, it needs to be organized, coordinated, and managed. Crucial to effective care management, they said, is providing periodic monitoring and follow-up both to ensure that the chronic condition is being controlled and to minimize any negative effects of treatment. While evidence exists for the effectiveness of many of the practices we found in these CCRCs, their effect on health care costs and use of health services has not been conclusively demonstrated. With the exception of flu immunizations and medical screening for certain forms of cancer, such as breast and colon cancer, little evidence exists to demonstrate clearly the cost-effectiveness of most of the individual health promotion and chronic disease management practices used by the CCRCs. Furthermore, CCRC residents tend to be very different from the general elderly population on a number of important sociodemographic, health, and other measures. No studies have been conducted that adequately consider these factors in assessing the effect of the CCRC package of services on health costs. 
Because no federal agency or program was the focus of our review, we did not seek agency comments. We did, however, have a number of experts in geriatric medicine and continuing care retirement communities review a draft of this report. They generally agreed with its contents and provided technical comments that we incorporated as appropriate. We are sending copies of this report to the Secretary of Health and Human Services; the Administrator, Health Care Financing Administration; and other interested parties. Copies of this report will also be made available to other interested parties on request. If you or your staff have any questions, please call me at (202) 512-7119 or Bruce D. Layton, Assistant Director, at (202) 512-6837. Other major contributors to this report are James C. Musselwhite, Eric R. Anderson, Ron Viereck, and Carla Brown. We focused our work on practices that 11 continuing care retirement communities (CCRCs) use to maintain or improve the health and functioning of their elderly residents and to manage the use of health and other services by residents with chronic conditions. We also examined what is known about the possible health and cost effects of these practices. To address our study objectives, we (1) visited 11 CCRCs to examine care management practices, (2) reviewed the literature on CCRCs and on health and cost effects of CCRCs’ practices, and (3) interviewed experts on CCRCs and geriatric medicine as well as officials from HCFA’s Office of Managed Care. The 11 CCRCs we visited in California, Maryland, Pennsylvania, and Virginia (see table I.1) were selected primarily for three reasons. First, they assume most residents’ financial risk for the cost of long-term care (see app. 
II for a description of CCRC financial risk arrangements for long-term care costs). These financial arrangements provide incentives to manage health and other services so that residents remain healthy and functioning as independently as possible and so that costs are controlled. Second, these CCRCs are accredited by the Continuing Care Accreditation Commission. Third, they represent some range of geographic variation. Our findings from this sample of CCRCs, however, cannot be generalized to all CCRCs, to CCRCs that are at financial risk for most residents’ long-term care costs, or to those that are accredited. We conducted structured interviews to obtain information from CCRC executive officers, administrative officials, and medical staff regarding the practices used for health promotion, disease prevention, medical screening, and management of chronic conditions. In addition, we collected documentation on services provided and residents’ contracts, and we directly observed some CCRC activities, programs, campus buildings, and grounds used by residents. We conducted telephone follow-ups to obtain additional information from CCRC officials as needed. To examine the potential health and cost effects of CCRC practices, we reviewed the literature and interviewed selected experts in geriatric medicine regarding generally accepted practices or guidelines for health promotion, disease prevention, medical screening, and management of chronic conditions. We also interviewed officials from HCFA’s Office of Managed Care. We conducted our review between June and November 1996 in accordance with generally accepted government auditing standards. CCRCs assume different levels of financial risk for the costs of their residents’ long-term care services, such as nursing home care and assisted living services. These long-term care services are provided in combination with housing, residential services such as cleaning and meals, and related services. 
CCRCs’ financial risks for residents’ care are defined in lifetime contracts between the CCRC and the individual resident. A CCRC may offer more than one type of long-term care risk arrangement from which residents may choose. Some CCRCs are at full financial risk for the cost of long-term care services. This means that the CCRC must pay all the costs of long-term care services residents need except for those costs that may be reimbursed by third parties such as Medicare. These CCRCs typically require that residents pay an entrance fee and a monthly fee that includes prepayment for long-term care costs, similar to an insurance arrangement. The monthly fee can increase based on changes in operating costs and inflation adjustments but not because of the use of long-term care services. As a result, residents having these agreements are not at risk for covered long-term care costs. This kind of agreement is sometimes known as a life care agreement or an extensive or Type A contract. Some CCRCs are at partial financial risk for the cost of long-term care services. These CCRCs must pay some, but not all, of the costs of long-term care services for residents beyond those reimbursed by third parties such as Medicare. The financial risk of these CCRCs is limited by a cap on the amount of long-term care services for which the CCRC will pay. For example, for each resident, a CCRC may pay for a maximum of 30 or 60 days of nursing home care per year, whatever limit is specified in the resident’s contract. Under these arrangements, CCRCs typically require that residents pay an entry and monthly fee, which may be lower than the fees for arrangements under which CCRCs assume full financial risk for the costs of long-term care. Until the cap on long-term care services is reached, residents’ monthly fees under the partial risk agreement can increase based on changes in operating costs and inflation adjustments but not as a result of the use of long-term care services. 
If the contract cap is reached, however, the resident is at risk for the cost of all additional long-term care services not reimbursed by third parties. This kind of agreement is sometimes known as a modified, limited services, or Type B contract. Some CCRCs are not at risk for the cost of long-term care services. These CCRCs require residents to pay for services they use either through a combination of an entry fee and a monthly fee or through a monthly fee alone. Monthly fees in either payment arrangement can increase based on operating costs, inflation adjustments, and the use of long-term care services. As a result, residents are at risk for all long-term care service costs not reimbursed by third parties such as Medicare. When this kind of risk arrangement is based on a combination of an entrance fee and a monthly fee it is sometimes known as a Type C contract. When it is based only on a monthly fee it is sometimes known as a Type D contract. Under either Type C or D contracts, residents typically pay lower fees than under Type A or B contracts unless long-term care services are needed. 
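The contract types described above differ mainly in who pays for long-term care once third-party reimbursement is exhausted. The following sketch illustrates that allocation in simplified form; the 60-day cap and the 90-day scenario are hypothetical examples for illustration only, since actual caps and terms are set in each resident's contract.

```python
# Illustrative sketch of CCRC long-term care risk sharing by contract type.
# The cap value below is a hypothetical example; actual contract terms vary
# by community and are defined in each resident's lifetime contract.

def uncovered_days_paid_by_resident(contract_type: str, days_used: int,
                                    cap_days: int = 60) -> int:
    """Days of long-term care (beyond third-party reimbursement) that the
    resident pays for out of pocket under each contract type."""
    if contract_type == "A":         # full risk: CCRC pays all covered days
        return 0
    if contract_type == "B":         # partial risk: CCRC pays up to the cap
        return max(0, days_used - cap_days)
    if contract_type in ("C", "D"):  # no risk: resident pays as services are used
        return days_used
    raise ValueError(f"unknown contract type: {contract_type}")

# A resident needing 90 days of nursing home care in a year:
print(uncovered_days_paid_by_resident("A", 90))  # 0 days out of pocket
print(uncovered_days_paid_by_resident("B", 90))  # 30 days beyond the 60-day cap
print(uncovered_days_paid_by_resident("C", 90))  # all 90 days
```

Under a Type B contract with a hypothetical 60-day cap, for example, a resident using 90 days of nursing home care would be responsible for the 30 days above the cap, consistent with the description above.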
Pursuant to a congressional request, GAO reviewed the processes of managed care in continuing care retirement communities (CCRC), focusing on: (1) CCRC practices for promoting wellness; (2) practices for managing care for elderly people with chronic conditions; and (3) evidence regarding the possible effect of these practices on health status and costs. GAO found that: (1) to serve their elderly residents, CCRCs GAO examined manage care to meet the needs of both healthy individuals and those who have chronic conditions; (2) they use active strategies to promote health, prevent disease, and detect health problems early by encouraging exercise, proper nutrition, social contacts, immunizations, and periodic medical exams and assessments for all residents; (3) many of these CCRCs also have multidisciplinary teams of nurses, social workers, rehabilitation specialists, physicians, dieticians, or others to plan and manage residents' care; (4) these teams meet periodically to discuss residents' health and functional status, determine whether services are needed, and decide on the types of treatment, services, and supports that will be provided; (5) CCRC staff coordinate a wide range of health and other services, whether provided on or off site, to enhance their benefit to the individual resident; (6) active monitoring of the health and functioning of residents who have chronic conditions, such as arthritis, hypertension, and heart disease, is an integral part of this coordinated, multidisciplinary approach to managing care; (7) many of these CCRCs' practices are considered to be effective in improving the health and functioning of the elderly, although their effect on health care costs is largely undemonstrated; (8) regular medical exams and health assessments, immunizations, and counseling to encourage exercise, proper nutrition, and social interaction are all recommended by experts and the 
literature as effective health promotion and disease prevention strategies for the elderly; (9) in addition, geriatric experts recommend a coordinated and multidisciplinary approach to manage chronic conditions among the elderly because their care may involve many modes of treatment and disciplines; and (10) while the health benefit of these practices has been demonstrated, little evidence exists to demonstrate health cost savings from either the CCRC package of services or most of the practices individually.
In November 2013, we reported that (1) peer-reviewed, published research we reviewed did not support whether nonverbal behavioral indicators can be used to reliably identify deception, (2) methodological issues limited the usefulness of DHS’s April 2011 SPOT validation study, and (3) variation in referral rates raised questions about the use of indicators. In November 2013, we reported that our review of meta-analyses (studies that analyze other studies and synthesize their findings) that included findings from over 400 studies related to detecting deception conducted over the past 60 years, other academic and government studies, and interviews with experts in the field, called into question the use of behavior observation techniques, that is, human observation unaided by technology, as a means for reliably detecting deception. The meta-analyses we reviewed collectively found that the ability of human observers to accurately identify deceptive behavior based on behavioral cues or indicators is the same as or slightly better than chance (54 percent). We also reported on other studies that do not support the use of behavioral indicators to identify mal-intent or threats to aviation. In commenting on a draft of our November 2013 report, DHS stated that one of these studies, a 2013 RAND report, provides evidence that supports the SPOT program. However, the RAND report, which concludes that there is current value and unrealized potential for using behavioral indicators as part of a system to detect attacks, refers to behavioral indicators that are defined and used significantly more broadly than those in the SPOT program. The indicators reviewed in the RAND report are not used in the SPOT program, and, according to the RAND report’s findings, could not be used in real time in an airport environment. 
Further, in November 2013, we found that DHS’s April 2011 validation study does not demonstrate effectiveness of the SPOT behavioral indicators because of methodological weaknesses. The validation study found, among other things, that some SPOT indicators were predictive of outcomes that represent high-risk passengers, and that SPOT procedures, which rely on the SPOT behavioral indicators, were more effective than a random selection protocol implemented by BDOs in identifying outcomes that represent high-risk passengers. While the April 2011 SPOT validation study is a useful initial step and, in part, addressed issues raised in our May 2010 report, methodological weaknesses limit its usefulness. Specifically, as we reported in November 2013, these weaknesses include, among other things, the use of potentially unreliable data and issues related to one of the study’s outcome measures. First, the data the study used to determine the extent to which the SPOT behavioral indicators led to correct screening decisions at checkpoints were from the SPOT database that we had previously found in May 2010 to be potentially unreliable. In 2010, we found, among other things, that BDOs could not record all behaviors observed in the SPOT database because the database limited entry to eight behaviors, six signs of deception, and four types of serious prohibited items per passenger referred for additional screening, though BDOs are trained to identify 94 total indicators. Although TSA made changes to the database subsequent to our May 2010 report, the validation study used data that were collected from 2006 through 2010, prior to TSA’s improvements to the SPOT database. Consequently, the data were not sufficiently reliable for use in conducting a statistical analysis of the association between the indicators and high-risk passenger outcomes. 
Second, our analysis of the validation study data regarding one of the primary high-risk outcome measures—LEO arrests—suggests that the screening process was different for passengers depending on whether they were selected using SPOT procedures or the random selection protocol. Specifically, different levels of criteria were used to determine whether passengers in each group were referred to a LEO, which is a necessary precondition for an arrest. Because of this discrepancy between the study groups, the results related to the LEO arrest metric are questionable and cannot be relied upon to demonstrate the effectiveness of the SPOT program’s behavioral indicators. In November 2013, we also reported on other methodological weaknesses, including design limitations and monitoring weaknesses, that could have affected the usefulness of the validation study’s results in determining the effectiveness of the SPOT program’s behavioral indicators. In November 2013, we reported that variation in referral rates and subjective interpretation of the behavioral indicators raise questions about the use of indicators, but TSA has efforts under way to study the indicators. Specifically, we found that SPOT referral data from fiscal years 2011 and 2012 indicate that SPOT and LEO referral rates vary significantly across BDOs at some airports, which raises questions about the use of SPOT behavioral indicators by BDOs. The rate at which BDOs referred passengers for SPOT referral screening ranged from 0 to 26 referrals per 160 hours worked during the 2-year period we reviewed. Similarly, the rate at which BDOs referred passengers to LEOs ranged from 0 to 8 per 160 hours worked. In November 2013, we also reported that BDOs and TSA officials we interviewed said that some of the behavioral indicators are subjective and TSA has not demonstrated that BDOs can consistently interpret the behavioral indicators. 
We found that there is a statistically significant relationship between the length of time an individual has been a BDO and the number of SPOT referrals the individual makes. This suggests that different levels of experience may be one reason why BDOs apply the behavioral indicators differently. TSA has efforts under way to better define the behavioral indicators currently used by BDOs, and to complete an inter-rater reliability study. The inter-rater reliability study could help TSA determine whether BDOs can consistently and reliably interpret the behavioral indicators, which is a critical component of validating the SPOT program’s results and ensuring that the program is implemented consistently. According to TSA, the current contract to study the indicators and the inter-rater reliability study will be completed in 2014. In November 2013, we reported that TSA plans to collect and analyze additional performance data needed to assess the effectiveness of its behavior detection activities. In response to a recommendation in our May 2010 report to develop a plan for outcome-based performance measures, TSA completed a performance metrics plan in November 2012. The plan defined an ideal set of 40 metrics within three major categories that TSA needs to collect to measure the performance of its behavior detection activities. As of June 2013, TSA had collected some information for 18 of 40 metrics the plan identified, but the agency was collecting little to none of the data required to assess the performance and security effectiveness of its behavior detection activities or the SPOT program specifically. For example, TSA did not and does not currently collect the data required to determine the number of passengers meaningfully assessed by BDOs, BDOs’ level of fatigue, or the impact that fatigue has on their performance. To address these and other deficiencies, the performance metrics plan identifies 22 initiatives that are under way or planned as of November 2012. 
For example, in May 2013, TSA began to implement a new data collection system, BDO Efficiency and Accountability Metrics, designed to track and analyze BDO daily operational data, including BDO locations and time spent performing different activities. According to TSA officials, these data will allow the agency to gain insight into how BDOs are utilized, and improve analysis of the SPOT program. However, according to the performance metrics plan, TSA will require at least an additional 3 years and additional resources before it can begin to report on the performance and security effectiveness of its behavior detection activities or the SPOT program. We reported in November 2013 that, without the data needed to assess the effectiveness of behavior detection activities or the SPOT program, TSA uses SPOT referral, LEO referral, and arrest statistics to help track the program’s activities. As shown in figure 1, of the approximately 61,000 SPOT referrals made during fiscal years 2011 and 2012 at the 49 airports we analyzed, approximately 8,700 (13.6 percent) resulted in a referral to a LEO. Of the SPOT referrals that resulted in a LEO referral, 365 (4 percent) resulted in an arrest. TSA has taken a positive step toward determining the effectiveness of its behavior detection activities by developing the performance metrics plan, as we recommended in May 2010. However, as we reported in November 2013, TSA cannot demonstrate the effectiveness of its behavior detection activities, and available evidence does not support whether behavioral indicators can be used to identify threats to aviation security. According to Office of Management and Budget (OMB) guidance accompanying the fiscal year 2014 budget, it is incumbent upon agencies to use resources on programs that have been rigorously evaluated and determined to be effective, and to fix or eliminate those programs that have not demonstrated results. 
As we concluded in our November 2013 report, until TSA can provide scientifically validated evidence demonstrating that behavioral indicators can be used to identify passengers who may pose a threat to aviation security, the agency risks funding activities that have not been determined to be effective. Therefore, in our November 2013 report, we recommended that TSA limit future funding for its behavior detection activities. DHS did not concur with our recommendation. As described in the report, in addition to the meta-analyses of over 400 studies related to detecting deception conducted over the past 60 years that we reviewed, we also reviewed several documents on behavior detection research that DHS officials provided to us, including documents from an unclassified and a classified literature review that DHS had commissioned. Finally, in stating its nonconcurrence with the recommendation to limit future funding in support of its behavior detection activities, DHS stated that TSA’s overall security program is composed of interrelated parts and that disrupting one piece of the multilayered approach may have an adverse impact on other pieces. As we reported in November 2013, TSA has not developed the performance measures that would allow it to assess the effectiveness of its behavior detection activities compared with other screening methods, such as physical screening. As a result, the impact of behavior detection activities on TSA’s overall security program is unknown. Further, not all screening methods are present at every airport, and TSA has modified the screening procedures and equipment used at airports over time. These modifications have included the discontinuance of screening equipment that was determined to be unneeded or ineffective. 
Therefore, we concluded that providing scientifically validated evidence that demonstrates that behavioral indicators can be used to identify passengers who may pose a threat to aviation security is critical to the implementation of TSA’s behavior detection activities. Consequently, we added a matter for congressional consideration to the November 2013 report. Specifically, we suggested that Congress consider the findings in the report regarding the absence of scientifically validated evidence for using behavioral indicators to identify aviation security threats when assessing the potential benefits of behavior detection activities relative to their cost when making future funding decisions related to aviation security. Such action should help ensure that security-related funding is directed to programs that have demonstrated their effectiveness. Chairman Hudson, Ranking Member Richmond, and members of the subcommittee, this concludes my prepared testimony. I look forward to answering any questions that you may have. For questions about this statement, please contact Steve Lord at (202) 512-4379 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this statement include David Bruno (Assistant Director), Nancy Kawahara, Elizabeth Kowalewski, Susanna Kuebler, Grant M. Mallie, Amanda K. Miller, Linda S. Miller, and Douglas M. Sloane. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. 
This testimony discusses GAO's November 2013 report assessing the Department of Homeland Security (DHS) Transportation Security Administration's (TSA) behavior detection activities, specifically the Screening of Passengers by Observation Technique (SPOT) program. The recent events at Los Angeles International Airport provide an unfortunate reminder of TSA's continued importance in providing security for the traveling public. TSA's behavior detection activities, in particular the SPOT program, are intended to identify high-risk passengers based on behavioral indicators of mal-intent. In October 2003, TSA began testing the SPOT program, and by fiscal year 2012, about 3,000 behavior detection officers (BDO) had been deployed to 176 of the more than 450 TSA-regulated airports in the United States. TSA has expended a total of approximately $900 million on the program since it was fully deployed in 2007. This testimony highlights the key findings of GAO's November 8, 2013, report on TSA's behavior detection activities. Specifically, like the report, this statement will address (1) the extent to which available evidence supports the use of behavioral indicators to identify aviation security threats, and (2) whether TSA has the data necessary to assess the effectiveness of the SPOT program in identifying threats to aviation security. In November 2013, GAO reported that (1) the peer-reviewed, published research it reviewed did not support the conclusion that nonverbal behavioral indicators can be used to reliably identify deception, (2) methodological issues limited the usefulness of DHS's April 2011 SPOT validation study, and (3) variation in referral rates raised questions about the use of indicators.
GAO reported that its review of meta-analyses (studies that analyze other studies and synthesize their findings) that included findings from over 400 studies related to detecting deception conducted over the past 60 years, other academic and government studies, and interviews with experts in the field, called into question the use of behavior observation techniques, that is, human observation unaided by technology, as a means for reliably detecting deception. The meta-analyses GAO reviewed collectively found that the ability of human observers to accurately identify deceptive behavior based on behavioral cues or indicators is the same as or slightly better than chance (54 percent). GAO also reported on other studies that do not support the use of behavioral indicators to identify mal-intent or threats to aviation. GAO found that DHS's April 2011 validation study does not demonstrate effectiveness of the SPOT behavioral indicators because of methodological weaknesses. The validation study found, among other things, that some SPOT indicators were predictive of outcomes that represent high-risk passengers, and that SPOT procedures, which rely on the SPOT behavioral indicators, were more effective than a random selection protocol implemented by BDOs in identifying outcomes that represent high-risk passengers. While the April 2011 SPOT validation study is a useful initial step and, in part, addressed issues raised in GAO's May 2010 report, methodological weaknesses limit its usefulness. Specifically, as GAO reported in November 2013, these weaknesses include, among other things, the use of potentially unreliable data and issues related to one of the study's outcome measures. |
DOD, the Army, and the Marine Corps have emphasized the need for improved language and culture skills in strategic guidance and are implementing training and education programs to begin to address these needs. Before September 11, 2001, DOD generally focused efforts to build language and culture capabilities on its professional communities. As military operations in Afghanistan and Iraq have continued, DOD has broadened this focus to the general purpose forces. In figure 1, we show that in departmentwide and service-level documents issued since 2005, DOD and the Army and Marine Corps addressed the need for improved language and culture skills. The responsibilities within DOD for identifying, developing, and maintaining language and culture capabilities are shared among several components, including the Office of the Secretary of Defense, the combatant commanders, and the military services. The Office of the Under Secretary of Defense for Personnel and Readiness provides overall policy guidance for the defense language program and is also responsible for reviewing the policies, plans, and programs of the DOD components to ensure that foreign language and regional proficiency needs are adequately addressed. DOD has designated Senior Language Authorities within the Office of the Secretary of Defense, the military services, and other DOD components, and established a governance structure for DOD’s language and culture programs, which consists of a number of entities, including the following:

Defense Language Office: provides strategic direction and programmatic oversight to the DOD components, including the services and combatant commands, on present and future requirements related to language as well as regional and cultural proficiency. The Director of the Defense Language Office, within the Office of the Under Secretary of Defense for Personnel and Readiness, has been designated as the DOD Senior Language Authority.
Defense Language Steering Committee: composed of Senior Language Authorities from the military services and other DOD organizations and chaired by the DOD Senior Language Authority, the committee provides senior-level guidance regarding the development of DOD’s language capabilities.

Defense Language Action Panel: composed of less-senior representatives from the same entities represented on the Defense Language Steering Committee, the panel supports the activities, functions, and responsibilities of the Defense Language Steering Committee.

Combatant commanders, such as the Commander of U.S. Central Command, are responsible for identifying foreign language and culture requirements in support of operations in their geographic areas of responsibility. In some cases, battlefield commanders, such as the Commander of U.S. Forces in Afghanistan, may publish guidance and other documents that specify training tasks that should be completed before military forces deploy to an area where combat operations are being conducted. Each military service is responsible for training forces with the language and culture capabilities necessary to support departmentwide and service-specific requirements and the needs of combatant commanders. Army and Marine Corps headquarters staff and service commands develop guidance and training programs to prepare forces with required skills, such as language and culture. The Army and Marine Corps have published language and culture strategies to guide servicewide efforts. Within the Army, the Training and Doctrine Command has been designated as the lead agency for implementing the Army Culture and Foreign Language Strategy and has also established the Training and Doctrine Command Culture Center.
The Marine Corps has established a culture center—the Center for Advanced Operational Culture Learning—which is responsible for developing and implementing the aspects of the Marine Corps Language, Regional and Culture Strategy: 2011-2015 that apply to general purpose forces. The Army and Marine Corps provide language and culture training at various points of a service member’s career through formal service institutions, such as professional military education schools, and within operational units. The following are examples:

Training offered during enrollment in formal service institutions: The Army offers new recruits courses to build basic cultural competence and is in the process of adjusting training programs at each of its schools to expand the amount of cultural content in training. The Army has also provided some soldiers with an opportunity to study a foreign language in professional military education courses and develop foreign language skills through self-directed, computer-based training. The Marine Corps has begun implementing a career development program for all marines that begins when marines enter military service and continues throughout their career. During the initial part of the program, marines receive training and education on general cultural skills that can be applied to any operational environment and an assignment to 1 of 17 regions around the world for future instruction. Each successive part of the program is designed to deepen understanding of general culture skills and build specific regional knowledge, including some computer-based foreign language study. As of December 2010, the Marine Corps had provided more than 7,000 officers with a regional assignment.

Predeployment training: The Army and Marine Corps offer predeployment training programs to provide additional language and culture instruction focused on the particular area to which a unit will deploy.
The Defense Language Institute Foreign Language Center and Army and Marine Corps culture centers provide deploying forces with language survival kits, briefings on culture issues, and mobile training teams that present more in-depth language and culture training. Funding for language and culture training programs is provided at the department and service level in base and Overseas Contingency Operations portions of the annual budget. In fiscal year 2010, DOD received about $550 million for major language and culture programs identified by the Defense Language Office. In addition, the Army and Marine Corps have received funding to implement their respective language and culture strategies. For example, in fiscal year 2010, the Army’s Training and Doctrine Command received about $13 million for activities related to implementing the Army Culture and Foreign Language Strategy and the Marine Corps’ Center for Advanced Operational Culture Learning received about $10 million to develop language and culture-related programs for general purpose forces. Regarding funding for predeployment training, the Office of the Secretary of Defense directed the Army to include a total of about $160 million in its budget submissions for fiscal years 2011 through 2015 for language training sites on selected military installations to teach foreign languages to military and civilian personnel, including Army and Marine Corps operational units that are preparing for deployments to Afghanistan. This training includes self-directed learning, classroom instruction, and role playing (see figure 2). According to DOD, ultimately approximately 3,500 service members will learn basic Afghan language skills each year at its language training sites. For Afghanistan deployments, the focus of language training has varied because of the multiple languages in that country.
Among the country’s many ethnic groups (which are known collectively as Afghans), Dari and Pashto are the dominant and official languages of Afghanistan. Pashto speakers are found in large numbers in Afghanistan and northern Pakistan, and the use of the language is generally limited to these regions. Dari, by contrast, can be understood by anyone proficient in Persian-Farsi. Although Pashto is the language of the largest ethnic group in Afghanistan, Dari is the working language for the majority of Afghans. Our prior work shows that establishing priorities and results-oriented performance metrics can help federal agencies target training investments and assess the contributions that training programs make toward achieving strategic program goals and objectives. The Army and Marine Corps have developed service-specific strategies with elements such as broad goals and objectives for building language and culture capabilities, but the strategies did not fully address other key elements, such as the identification of training priorities and investments and results-oriented performance metrics. We found that the Army and Marine Corps had not conducted comprehensive analyses to prioritize language and culture training investments and assign responsibilities for program performance, and that departmentwide efforts to establish a planning process for language and culture capabilities were not yet complete. The Army and Marine Corps developed broad service-specific goals and objectives for language and culture training within their respective language and culture strategies and identified some key training programs and activities. In the strategy it issued in December 2009, the Army states that the service’s goal is to develop a baseline of foreign language and culture capabilities for all leaders and soldiers to support the accomplishment of unit missions.
The Army strategy establishes language and culture subject areas and learning objectives for officers and enlisted soldiers for various stages of a military career for both career development and predeployment training. According to the Army strategy, the learning objectives are intended to provide a vision of the desired end state for soldiers at each career stage. For example, the strategy identifies three components of cross-cultural competence, which include culture fundamentals, culture self-awareness, and culture skills, and a number of learning objectives for each subject area that are tied to rank and level of responsibility. The Army’s strategy notes that its primary focus is establishing the framework and content of training, and that additional steps are needed to determine the methods that are the most appropriate for delivering the education and training necessary to support the Army’s requirements. In the strategy it issued in January 2011, the Marine Corps established a broad strategic goal to provide all marines in the general purpose forces with a baseline in cross-cultural competence while simultaneously enhancing regional proficiency and functional language/communication skills throughout the force. The strategy outlines a number of language and culture training areas that are designed to enhance marines’ ability to communicate and interact with local populations on a basic level and perform core missions in a culturally complex environment. For example, to support its cross-cultural competence goal, the strategy discusses the need for marines to be able to conduct a cultural analysis, incorporate operational culture into planning, influence a foreign population, apply operational culture, and interact with a foreign population. In addition, the strategy identifies specific programs and the training activities that are available to achieve the Marine Corps’ strategic goal.
Additionally, according to the strategy, the service’s operational culture training manual identifies the specific learning outcomes and objectives across the entire training and education continuum in the areas of cross-cultural competence, regional proficiency, and communication skills. The Army’s and Marine Corps’ respective strategies did not address some key elements that could guide their training efforts and investments. Our prior work has found that effective planning includes a clear identification of training priorities and the investments required to implement and sustain training programs and activities. These elements provide a framework for decision makers to assess the extent to which annual budget requests are coordinated with training priorities and strategic goals and objectives. Additionally, our work has found that it is important for agencies to incorporate performance metrics that can be used to assess the contributions training programs make collectively toward achieving strategic program goals and objectives. DOD noted in its fiscal year 2012 budget request that every level of the department is accountable for measuring performance and delivering results that support departmentwide strategic goals and objectives. With regard to training programs, both the Army and Marine Corps have included requirements to perform evaluations in their respective training-related guidance. We found that the Army and Marine Corps did not always identify training priorities with the proposed investments that are required for implementing and sustaining the training within their respective language and culture strategies. Within its strategy, the Army identifies a number of career development and predeployment training objectives (for example, that all individuals have a basic understanding of the language used in their potential area of deployment, appropriate to their mission), but the strategy does not identify training priorities to achieve these objectives.
Furthermore, the Army’s strategy does not identify the investments that are needed to implement and sustain training programs and activities that will build the Army’s desired language and culture capability. The Marine Corps’ strategy identifies two language and culture training priorities for its general purpose forces—the Regional, Culture, and Language Familiarization and predeployment training programs—and provides information on training activities, such as language learning software and language learning centers, that support these training programs. However, the Marine Corps’ strategy did not identify the total investment required to develop and sustain these training programs and activities. In some instances, the Army and Marine Corps have identified language and culture funding requirements, for example within their annual budget requests, but this information is not linked with the services’ respective language and culture strategies. Officials with Army and Marine Corps headquarters and training commands told us that there is not a cohesive picture of language and culture training investments and that multiple commands and units have separately developed and funded language and culture training programs. For example, the Marine Corps’ Center for Advanced Operational Culture Learning has funded language and culture training for all marines in the general purpose forces, while operational units have also funded predeployment language training for these marines to attend classes at a local community college and university. In addition, other DOD organizations, such as the Defense Language Office, have funded language and culture training for Army and Marine Corps general purpose forces. For example, the Defense Language Office has funded some language and culture predeployment training for Army and Marine Corps general purpose forces and also the development of interactive training tools to enhance the cultural proficiency skills of service members.
Because the Army and Marine Corps have not linked their budget requests with their respective strategies and multiple DOD and service organizations have funded language and culture training programs, the department does not have full visibility over the potential total costs associated with implementing the Army’s and Marine Corps’ respective language and culture training strategies. We also found that the Army and Marine Corps had not yet established a systematic approach with results-oriented performance metrics to assess the contributions that training programs have made collectively in achieving their strategic goals and objectives. Within its strategy, the Army notes that performance metrics are necessary to determine the effectiveness of training programs, but the strategy does not establish any specific metrics or other indicators to evaluate progress toward the service’s strategic goals or an approach to assess them. Similarly, the Marine Corps’ strategy does not discuss any metrics that the service will utilize to assess language and culture training programs that are intended to achieve the service’s strategic goals and objectives. While the Army and Marine Corps had not established comprehensive metrics within their strategies to assess progress towards achieving their overall strategic goals and objectives, the services have established limited metrics to inform the development of specific language and culture training programs. For example, in July 2010, the Army established a requirement that at least one leader per platoon deploying to Afghanistan or Iraq who will have regular contact with a local population receive more advanced language training, and set standards for the leader’s language capability using DOD’s agreed-upon method of measuring proficiency.
Army officials reported that, based on their testing, nearly 100 percent of soldiers who have completed the language training program intended to support this requirement are meeting or exceeding the performance metric. The Marine Corps published an operational culture training manual in April 2009 with language and culture-related training tasks, and the Center for Advanced Operational Culture Learning has developed training programs to assist Marine Corps units in accomplishing the tasks called for in the manual. These training programs include individual and unit-level performance metrics, such as student exams and training evaluation scorecards. However, the Army and Marine Corps have not yet established a comprehensive set of metrics for their respective language and culture training programs. For example, the Army had not established performance metrics for its culture training programs and the Marine Corps had not established metrics for predeployment language training. We found that the Army and Marine Corps did not include these key planning elements within their respective strategies because they did not fully analyze their training efforts to identify a clear prioritization of training investments and formalize responsibilities for ensuring accountability for program performance prior to the design and implementation of their language and culture strategies and related training programs. Both the Army and the Marine Corps note that their respective language and culture strategies will be updated as needed. The Army is taking steps to further define the investments it requires to implement the service’s language and culture strategy and develop performance metrics to determine language and culture proficiency gaps that would inform the development of training and education programs. Once these analyses are completed, the Army plans to revise its servicewide strategy.
An official from the Marine Corps’ Center for Advanced Operational Culture Learning told us that the Marine Corps had not formally assigned it or any other service organization with the responsibility and accountability for language and culture program performance. For example, the center is responsible for developing training programs of instruction and other materials, but not for ensuring that operational units complete the training programs in total or for assessing whether training programs meet strategic goals and objectives. The Marine Corps plans to develop a concept of operations document that will formalize stakeholder roles and responsibilities for implementing its strategy and conduct additional analyses to identify language and culture capability needs that are not being addressed by current training programs. However, at the time of our review, these efforts were in the planning stage and not yet complete. Without a complete understanding of the actions and investments that are necessary to achieve their strategic goals and objectives, the Army and the Marine Corps cannot provide DOD and the Congress with reasonable assurance that their approaches and funding requests are building a capability that meets service and DOD long-term needs. In June 2009, we reported that DOD did not have a comprehensive strategic plan to transform language and culture capabilities with measures to assess the effectiveness of its transformation efforts. At that time, we recommended that DOD develop a strategic plan or set of linked plans that contain measurable performance goals and objectives and investment priorities that are linked to these goals to guide the military services’ efforts to transform language and culture capabilities. In February 2011, DOD published the Department of Defense Strategic Plan for Language Skills, Regional Expertise, and Cultural Capabilities (2011-2016).
The strategy outlines a broad planning process that includes a vision, goals, and objectives and notes that the department will review the strategy annually and modify it when needed to ensure alignment with overarching DOD guidance. While the strategy broadly describes a strategic planning process, the department has not yet set up internal mechanisms, such as procedures and milestones, which our prior work has found can help the department reach consensus with the military departments and others on priorities, synchronize the development of department- and servicewide plans with each other and the budget process, and guide efforts to monitor progress and take corrective action. DOD officials told us that a more detailed implementation plan will be issued separately and that the plan would likely include action plans that define responsibilities and time frames for completing specific tasks, as well as performance measures to assess progress and guide the allocation of resources, but it is unclear whether this plan will provide the department with the clearly defined planning process needed to achieve its goals. During the course of our review, officials with the Army and Marine Corps told us that there has been a lack of strategic direction and coherent departmentwide policy on language and culture capability needs, which has limited the services’ ability to train service personnel in the general purpose forces with the right mix of skills to meet combatant commander requirements and develop service-specific strategies that align with departmentwide goals. In June 2009, we also reported that DOD did not have the information it needs to identify gaps and make informed investment decisions about language and culture capability needs, in part because DOD did not have a standardized methodology to determine language and regional proficiency requirements.
We recommended that DOD develop a validated methodology for identifying language and regional proficiency requirements, which includes cultural awareness. Citing our June 2009 recommendation, DOD has taken steps to develop a new, standardized methodology to define geographic combatant commander language and culture capability requirements and plans to implement the methodology by March 2012. However, since these requirements are still incomplete, the Army’s and Marine Corps’ strategies do not yet address the specific actions that the services will be required to take to address DOD-wide language and culture capability requirements. Without a clearly defined planning process that includes internal mechanisms, such as procedures and milestones, and a validated set of language and culture capability requirements, the department does not have the tools it needs to set strategic direction for language and culture training efforts, fully align departmentwide efforts to develop plans and budget requests that reflect its priorities, and measure progress in implementing various initiatives. DOD components identified language and culture training requirements for Army and Marine Corps general purpose forces that will deploy to the U.S. Central Command area of responsibility, but these requirements varied among and within DOD components. Within recent planning guidance, DOD describes the importance of establishing a robust training requirements identification process and synchronizing training among DOD components. However, we found that U.S. Central Command did not clearly identify and approve predeployment language and culture training requirements and synchronize them among and within DOD components, because the command has not yet developed a comprehensive, analytically based process for identifying and synchronizing training requirements. Given the dynamic security environment presented by current operations in the U.S. 
Central Command area of responsibility, DOD components have been required to rapidly respond to changing capability needs for language and culture. This has resulted in multiple DOD components promulgating language and culture predeployment training requirements that are intended to prepare forces for operations in the U.S. Central Command area of responsibility. Since 2008, the Office of the Secretary of Defense, U.S. Central Command, U.S. Forces Afghanistan, and the Army and the Marine Corps have utilized various means to articulate joint force and service-specific language and culture predeployment training requirements, including combatant commander orders, battlefield commander guidance, departmentwide memorandums, and service-level orders and administrative messages. We surveyed 15 documents issued since June 2008 that address language and culture predeployment training requirements. In table 1, we list the documents we reviewed and include descriptions of language and culture training requirements, which are not intended to be comprehensive descriptions of the documents. Within these documents, we found several examples of variances in language and culture training requirements among and within DOD components. In particular, we identified examples of language and culture predeployment training requirements that varied even at similar points in time with respect to the specific language to be trained—whether Dari, Pashto, or both languages, as well as variances in the type and duration of training. For example, the language designated as the focus of training varied amongst multiple pieces of guidance issued since 2009. In November 2009, U.S. Forces Afghanistan issued guidance recommending that all forces deploying to Afghanistan focus their predeployment language training on Dari. In that same month, the Marine Corps issued an administrative message directing that certain commanders deploying to Afghanistan develop a basic language capability in Pashto. 
From November 2009 to March 2011, the Office of the Secretary of Defense, U.S. Central Command, U.S. Forces Afghanistan, and the Army and the Marine Corps issued additional guidance addressing language training, and the language focus has continued to vary among the different pieces of guidance. For example, in October 2010, U.S. Forces Afghanistan published an order that required all forces to complete training with a focus on Dari, and included an option for commanders to specify training with a focus on Pashto in certain cases. In November 2010, the Secretary of Defense approved Afghanistan counterinsurgency training standards that include a requirement that U.S. forces understand basic phrases in both Dari and Pashto. Additionally, just as the focus of training has varied, the type and duration of training has varied as well. For example, in July 2010, the Army required that all forces deploying to either Afghanistan or Iraq complete a 4- to 6-hour online training program for language and culture. In September 2010, the Marine Corps directed that all ground units assigned to the I Marine Expeditionary Force preparing for Afghanistan deployment complete a 2-day culture course and receive an introduction to software used for self-paced study. During the course of our review, Army and Marine Corps officials noted that language and culture predeployment training requirements changed constantly, which led to some confusion over the training that was needed to meet operational needs and that considerable time and resources were spent adjusting training programs. According to DOD guidance, the Commander of U.S. Central Command is to coordinate and approve training necessary to carry out missions assigned to the command. 
DOD’s 2010 strategic plan calls for the establishment of a robust, relevant requirements process that includes investing in front-end analysis and supporting requirements identification activities and synchronizing service training programs with combatant commander requirements. Moreover, in 2011 guidance, the Chairman of the Joint Chiefs of Staff stated that DOD will convert requirements into deployable capabilities more quickly and effectively, synchronizing force- providers with force-commander needs. At the time of our review, we found that U.S. Central Command had not yet developed a comprehensive, analytically based process for identifying and synchronizing predeployment training requirements among DOD components. In the absence of a comprehensive process, we identified instances in which U.S. Central Command did not clearly identify and approve training requirements and coordinate them with key stakeholders, such as the military services and subordinate commands, to ensure that requirements are synchronized among and within DOD components and with departmentwide guidance. We also observed instances in which U.S. Central Command did not obtain feedback to determine the extent to which predeployment training approaches met battlefield commander needs. For example: U.S. Central Command did not formally approve U.S. Forces Afghanistan’s January 2010 language training guidance requiring language training. For example, the command did not conduct front- end analyses of feasibility or cost of the training requirements or release a message validating U.S. Forces Afghanistan’s language predeployment training requirements. U.S. Central Command, as the combatant commander responsible for coordinating training requirements for the geographic area of responsibility, had not coordinated U.S. Forces Afghanistan’s October 2010 order mandating online language and culture training for all U.S. 
forces and DOD civilians currently deployed and deploying to Afghanistan, with the Army and Marine Corps prior to its release. U.S. Forces Afghanistan officials told us that coordination with the services on the requirements would have provided better insight as to potential issues associated with its implementation. During the course of our review, U.S. Forces Afghanistan reissued the October 2010 order once to clarify confusion over the training requirements and was considering another revision to the order to further clarify its requirements. U.S. Central Command had not synchronized language and culture predeployment training requirements with departmentwide guidance. For example, in December 2010, the Office of the Secretary of Defense released a directive-type memorandum on counterinsurgency training and reporting guidance that requires the services to ensure that at least one leader per platoon that will have regular contact with the population will have a measurable language capability in the language of the region to which they will be assigned. According to senior officials within the Office of the Secretary of Defense, this guidance is based on their understanding of the requirements of U.S. Forces Afghanistan, a subordinate command of U.S. Central Command, and is the authoritative department policy on training requirements for ongoing operations and is considered mandatory training. However, U.S. Central Command did not explicitly include the requirement established by the Office of the Secretary of Defense within either of its March 2011 orders on training requirements for standard and nonstandard forces. U.S. Central Command had not coordinated with the Army and Marine Corps to obtain feedback on the services’ language and culture predeployment training approaches in meeting operational needs prior to issuing new training requirements. For example, until December 2010, neither U.S. Central Command nor U.S.
Forces Afghanistan had obtained feedback from the Marine Corps on language and culture training approaches that were developed by the Marine Corps to address service-specific requirements. We were told that informal efforts exist among DOD components to receive feedback on service training approaches, such as training forums and action officer-level communication, but U.S. Forces Afghanistan training officials told us that these informal processes had not provided them with full visibility over the services’ training programs. In its March 2011 order establishing theater predeployment training requirements for standard forces, U.S. Central Command consolidated in a single source the predeployment training requirements that had been published in various documents. Refinements to training requirements occur over time due to changing operational conditions, and one aspect of this new order calls for an annual review and validation of U.S. Central Command’s consolidated training requirements followed by the publication of an order announcing updates. In addition, the order assigns responsibilities within U.S. Central Command for approving new requirements, describes how organizations can request modifications to existing requirements, and identifies how decisions on training requirements will be communicated within the command through official messages. While this appears to be a positive step in identifying predeployment training requirements, including those for language and culture, the order does not provide details on the analysis that is required to support these decisions or establish a coordination process with key stakeholders, such as the military services and subordinate commands, to ensure that requirements are synchronized among and within DOD components and with departmentwide guidance and to solicit feedback on service training approaches in meeting operational needs. Without a comprehensive process, U.S.
Central Command will not have a mechanism to identify and synchronize training for current and future operations, which may result in deploying forces that receive training that is inconsistent and may not meet operational needs. DOD continues to emphasize the importance of language and culture training and, along with the military services, is investing millions of dollars to provide it to general purpose forces. However, the Army and Marine Corps have not established investment priorities, assigned responsibilities for training program performance, or developed comprehensive metrics to gauge progress in achieving their strategic goals and objectives and therefore cannot provide DOD and the Congress with a reasonable assurance that their approaches and funding requests are building a capability that meets service and DOD long-term needs. Further, without a clearly defined planning process, the department does not have the tools it needs to set strategic direction for language and culture training efforts, fully align departmentwide efforts to develop plans and budget requests that reflect its priorities, and measure progress in implementing various initiatives. Regarding predeployment language and culture training, over the last several years multiple DOD components have issued requirements for deploying forces, resulting in the Army and Marine Corps expending considerable time and resources adjusting service training programs. U.S. Central Command has taken some steps to consolidate training requirements, but the command has not yet established a comprehensive, analytically based process for identifying and synchronizing predeployment training requirements. Without a comprehensive process, U.S. Central Command will not have a mechanism to identify and synchronize training for current and future operations, which may result in deploying forces that receive training that is inconsistent and may not meet operational needs. 
We recommend the Secretary of Defense take the following three actions. In written comments on a draft of this report, DOD concurred with two recommendations and partially concurred with one recommendation. DOD’s comments are reprinted in their entirety in appendix II. DOD also provided technical comments, which we incorporated into the report as appropriate. DOD partially concurred with our recommendation that the Secretary of Defense direct the Secretary of the Army and the Secretary of the Navy to assign responsibilities for training program performance and include in subsequent updates of the Army’s and Marine Corps’ respective language and culture strategies training priorities and investments that are necessary to achieve strategic goals and objectives and results-oriented performance metrics to measure progress in achieving their strategic goals and objectives. In its comments, DOD separately addressed the two elements in our recommendation—training priorities and investments, and results-oriented performance metrics. With regard to identifying training priorities and investments, DOD stated that linking strategy development with training and resource prioritization would better identify the resources that are necessary to address goals, objectives, and programs outlined in the language, regional, and culture strategy. DOD noted that this would allow senior leaders to obtain a better understanding of the time and resources necessary to implement the strategy and may prompt modifications early in the process when viewed against time and fiscal realities. DOD also stated, however, that the department develops strategy and capabilities separately from the resource allocation process to capture the required operational capability and determine the gaps, independent of the fiscal environment. It noted that capability requirements are then prioritized and compete for resources. 
DOD stated that before definitive measures are implemented to more closely integrate requirements development and resource allocation at a much earlier stage, it is necessary to assess potential negative consequences and then weigh costs versus benefits. Our report did not address the timing of the requirements development and resource allocation processes, but rather emphasized the importance of a clearly defined planning process that produces outcomes that clearly link strategy development with training prioritization and resource allocation. As noted in our report, the Army and Marine Corps had not yet fully defined the language and culture capabilities needs of their general purpose forces; prioritized the investments required to implement their respective language and culture strategies; or clearly linked their funding requests with their respective strategies. We therefore continue to believe that as the Army and Marine Corps update their strategies, the services should fully identify the language and culture capabilities and the training priorities and needed investments in order to provide DOD and the Congress with a reasonable assurance that their approaches and funding requests are building a capability that meets service and DOD long-term needs. With regard to results-oriented performance metrics, DOD stated that several efforts are being pursued to enhance and fully implement metrics that accurately capture programmatic performance and utility, to include initiatives to more closely link training and readiness standards with operational readiness through the Defense Readiness Reporting System and other reporting mechanisms. DOD noted that any effort to start measuring and tracking individual performance with “hard” metrics such as cultural proficiency should be thoroughly reviewed before implementation and that such metrics may not provide an accurate assessment tied to operational effectiveness. 
Lastly, DOD stated that the actual administrative and logistical costs associated with the effort may far outweigh any benefits that are potentially gained. We agree that it is important for the Army and Marine Corps to establish metrics that accurately capture programmatic performance and utility in a manner that provides an accurate assessment of operational effectiveness. As stated in our report, the Army and Marine Corps have established limited metrics focused on individual and unit-level assessments, but had not established comprehensive metrics that would enable them to assess the contributions that training programs are making collectively toward achieving their overall strategic goals and objectives. We also noted that the Army and Marine Corps are planning to make additional investments to build the language and culture capabilities of their general purpose forces. We recognize that there is a cost associated with the time and effort required to establish metrics and implement efforts to measure progress against any metrics. However, developing comprehensive metrics is a key element needed to provide DOD and the Congress with the assurance that the services’ training approaches and funding requests are building a capability that meets service and DOD long-term needs. Therefore, we continue to believe the development of such metrics would better inform the services’ investment decisions and enhance their ability to maximize available resources. 
DOD concurred with our recommendation that the Secretary of Defense direct the Under Secretary of Defense for Personnel and Readiness to issue guidance to establish within the implementation plan for the Department of Defense Strategic Plan of Language Skills, Regional Expertise, and Cultural Capabilities (2011-2016) a clearly defined planning process with mechanisms, such as procedures and milestones, by which it can reach consensus with the military departments, coordinate and review approval of updates to plans, synchronize the development of plans with the budget process, monitor the implementation of initiatives, and report progress, on a periodic basis, towards achieving established goals. DOD stated that the DOD Implementation Plan for Language Skills, Regional Expertise, and Cultural Capabilities for FY 2011-2016 will include a clearly defined planning process for working with the military departments to coordinate plans, synchronize plans with resources, and evaluate and report performance as the department works toward its strategic goals. DOD stated that it planned to complete the implementation plan by June 2011. DOD concurred with our recommendation that the Secretary of Defense direct the Commander of U.S. Central Command to establish a comprehensive, analytically based process to (1) identify and approve predeployment training requirements that includes a description of the analysis to be conducted prior to approving the requirements and (2) coordinate with key stakeholders, such as the military services and subordinate commands to ensure that requirements are synchronized among and within DOD components and with departmentwide guidance, and solicit feedback on service training approaches in meeting operational needs. In its comments, DOD separately addressed our recommendation on conducting analysis as part of the requirements identification process and coordinating with key stakeholders to ensure that requirements are synchronized. 
DOD stated that U.S. Central Command agreed that such a process was necessary at the time of our review and noted that U.S. Central Command has established and instituted a process to coordinate and synchronize requirements among the service components and subordinate commands, to include cross-directorate coordination within U.S. Central Command headquarters, to ensure all training requirements are meeting operational needs. Specifically, DOD stated that U.S. Central Command utilized this process in the development of U.S. Central Command Fragmentary Order 09-1700, USCENTCOM Theater Training Requirements, dated March 28, 2011. DOD also stated that U.S. Central Command assessed it is a service responsibility to determine the training approach they utilize to meet training requirements for the U.S. Central Command’s area of responsibility. As stated in our report, we recognize that DOD has taken positive steps in developing the fragmentary order, but continue to believe that additional actions are needed to ensure that U.S. Central Command has a comprehensive, analytically based process to coordinate and synchronize predeployment training requirements. For example, in its current form, U.S. Central Command Fragmentary Order 09-1700 does not provide details on the analysis that is required to support decisions on the identification of training requirements, despite the fact that DOD’s September 2010 Strategic Plan for the Next Generation of Training for the Department of Defense calls for the establishment of a robust, relevant requirements process that includes investing in front-end analysis and supporting requirements identification activities. Moreover, in developing its March 2011 order, U.S. Central Command did not fully synchronize language and culture predeployment training requirements with departmentwide guidance. Specifically, U.S.
Central Command did not explicitly include the language training requirement established by the Office of the Secretary of Defense in its December 2010 counterinsurgency training and reporting guidance that requires the services to ensure that at least one leader per platoon that will have regular contact with the population will have a measurable language capability in the language of the region to which they will be assigned. We therefore continue to believe that additional actions are necessary for U.S. Central Command to establish a comprehensive, analytically based process to identify training requirements and coordinate with key stakeholders to ensure that requirements are synchronized among and within DOD components and with departmentwide guidance.

We are sending copies of this report to the Secretary of Defense, the Under Secretary of Defense for Personnel and Readiness, the Secretary of the Army, the Secretary of the Navy, the Commandant of the Marine Corps, and the Commander of U.S. Central Command. This report also is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9619 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III.

To address our objectives, we met with officials from the Office of the Secretary of Defense; the Joint Staff; U.S. Central Command; U.S. Forces Afghanistan; U.S. Joint Forces Command; and the Army and the Marine Corps. To evaluate the extent to which the Army and Marine Corps had developed language and culture strategies with key elements, such as goals, funding priorities, and metrics to guide training approaches and investments that were aligned with departmentwide planning efforts, we focused on the Army’s and Marine Corps’ general purpose forces.
Therefore, excluded from this review were training programs for language and regional experts, such as foreign area officers, intelligence specialists, special operations forces, and other service efforts to provide culture experts to deployed forces, such as “human terrain teams.” We examined the Army Culture and Foreign Language Strategy and the Marine Corps Language, Regional and Culture Strategy: 2011-2015 and training documents to determine training priorities and metrics that have been used to measure progress in meeting service and departmentwide capability needs. We reviewed these documents in the context of our prior work, Department of Defense (DOD) budget documents, and service guidance to determine the extent to which the Army and Marine Corps were developing strategies that identified goals and objectives, training programs and priorities, resource requirements, and approaches for measuring progress, including results-oriented performance metrics. We also reviewed funding data for fiscal years 2009 through 2012 provided by the Army’s Training and Doctrine Command and the Marine Corps’ Center for the Advanced Operational Culture Learning that are associated with the implementation of the Army’s and Marine Corps’ respective language and culture strategies. To corroborate our understanding of the documents provided, we conducted interviews with officials responsible for developing the Army’s and Marine Corps’ language and culture strategies and related training programs, as well as Office of the Secretary of Defense officials that are responsible for providing strategic direction and programmatic oversight of the department’s language and culture programs. We also discussed the content and status of ongoing departmental efforts that are intended to further align Army and Marine Corps language and culture training approaches with officials representing the Office of the Secretary of Defense and the Joint Staff.
These efforts include the implementation of a new, departmentwide methodology for determining geographic combatant commander language and regional proficiency requirements, which includes culture, and the development of DOD’s strategic plan for language skills and cultural capabilities. To evaluate DOD’s approach for identifying language and culture predeployment training requirements for Army and Marine Corps general purpose forces that will deploy to the U.S. Central Command area of responsibility, we reviewed relevant provisions of Title 10 of the U.S. Code and related DOD guidance that characterize the training roles and responsibilities of combatant commanders and the military services. We examined Office of the Secretary of Defense, U.S. Central Command, U.S. Forces Afghanistan, and Army and Marine Corps documents published from 2008 to 2011 and identified specific language and culture training requirements. To corroborate our understanding of the documents provided, we conducted interviews with officials representing the Office of the Secretary of Defense, U.S. Central Command, U.S. Forces Afghanistan, and Army and Marine Corps force provider and training commands to discuss the processes they use to identify language and culture training requirements for ongoing operations in the U.S. Central Command area of responsibility, including any analyses that were conducted to identify the feasibility of implementing the training and associated costs. We also discussed the processes used by DOD components to synchronize battlefield commander operational needs with training conducted by the services to prepare forces to conduct military operations. We analyzed these processes to determine the level of coordination among DOD components with respect to joint and service-specific predeployment training requirements for language and culture.
We assessed these efforts in light of DOD guidance that describes the importance of establishing a robust training requirements identification process and synchronizing service training programs with combatant commander requirements. We conducted this performance audit from June 2010 to May 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

GAO DRAFT REPORT DATED APRIL 8, 2011
GAO-11-456 (GAO CODE 351586)
“MILITARY TRAINING: ACTIONS NEEDED TO IMPROVE PLANNING AND COORDINATION OF ARMY AND MARINE CORPS LANGUAGE AND CULTURE TRAINING”

RECOMMENDATION 1: The GAO recommends that the Secretary of Defense direct the Secretary of Army and the Commandant of the Marine Corps to assign responsibilities for training program performance and include in subsequent updates of their respective service-specific language and culture strategies training priorities and investments that are necessary to achieve strategic goals and objectives.

DoD RESPONSE: Partially concur. Linking strategy development with training and resource prioritization across the enterprise would better identify, up front, what resources are necessary to address goals, objectives, and programs outlined in the language, regional, and culture strategy. This would allow senior leadership to obtain a better understanding of the rough order of magnitude in time and resources necessary to implement the strategy being presented, and may prompt modifications early in the process when viewed against time and fiscal realities.
Currently, strategy and capability requirements within the Department and Services are developed separately from the resource allocation/Program Objective Memorandum process. The purpose is to accurately capture the required operational capability and determine the gaps, independent of the fiscal environment. From there, those capability requirements are then prioritized and compete for resources. This approach has some advantages that could be negated if the two processes were more closely linked early on. Consequently, before definitive measures are implemented to more closely integrate requirements development and resource allocation at a much earlier stage, assessing potential negative consequences and then weighing costs versus benefits will need to be conducted.

RECOMMENDATION 2: The GAO recommends that the Secretary of Defense direct the Secretary of Army and the Commandant of the Marine Corps to assign responsibilities for training program performance and include in subsequent updates of their respective service-specific language and culture strategies results-oriented performance metrics to measure progress in achieving their strategic goals and objectives.

DoD Response: Partially concur. Enhancing and fully implementing metrics that accurately capture programmatic performance and utility remains a consistent focus for the Army and Marine Corps. Several efforts are being pursued to achieve this objective, to include current initiatives to more closely link training and readiness standards outlined in training and readiness manuals with operational readiness through the Defense Readiness Reporting System and other reporting mechanisms. However, any effort to start measuring and tracking individual performance with “hard” metrics such as cultural proficiency scale/rating should be thoroughly studied and reviewed before implementation.
There is significant data to suggest this is far from an exact science, and may not be able to provide an accurate assessment tied to operational effectiveness. Furthermore, even if it is achievable, the actual administrative and logistical costs associated with the effort may far outweigh any benefits that are potentially gained.

RECOMMENDATION 3: The GAO recommends that the Secretary of Defense direct the Undersecretary of Defense for Personnel and Readiness to issue guidance to establish within the implementation plan for the Department of Defense Strategic Plan of Language Skills, Regional Expertise, and Cultural Capabilities (2011-2016) a clearly defined planning process with mechanisms, such as procedures and milestones, by which it can reach consensus with the military departments, coordinate and review approval of updates to plans, synchronize the development of plans with the budget process, monitor the implementation of initiatives, and report progress, on a periodic basis, towards achieving established goals.

DoD Response: Concur. The DoD Implementation Plan for Language Skills, Regional Expertise, and Cultural Capabilities for FY 2011-2016 will include a clearly defined planning process for working with the Military Departments to coordinate plans, synchronize plans with resources, and evaluate and report performance as the Department works toward its strategic goals. The target date for its completion is June 2011.

RECOMMENDATION 4: The GAO recommends that the Secretary of Defense direct the Commander of the U.S. Central Command to establish a comprehensive, analytically-based process to identify and approve predeployment training requirements and include in this documentation a description of the analysis to be conducted prior to approving the requirements.

DoD Response: Concur.
US Central Command (USCENTCOM) concurs that an analytically-based process by which to identify and approve predeployment training requirements was necessary at the time of this study. The USCENTCOM Commander approved USCENTCOM FRAGO 09-1700, USCENTCOM Theater Training Requirements, dated March 28, 2011, which establishes the process for Service Components and Sub-Unified Commands to nominate training requirements for approval, modification, or deletion by the Director of Operations, USCENTCOM. This document will be reviewed annually to ensure requirements are updated and promulgated to USCENTCOM Service Components, Sub-Unified Commands, Service Force Providers, and the Joint Staff.

RECOMMENDATION 5: The GAO recommends that the Secretary of Defense direct the Commander of the U.S. Central Command to establish a comprehensive, analytically-based process to coordinate with key stakeholders, such as the military services and subordinate commands to ensure that requirements are synchronized among and within DOD components and with departmentwide guidance, and solicit feedback on service training approaches in meeting operational needs.

DoD Response: Concur. USCENTCOM concurs that a process to ensure that requirements are synchronized among the Service Components and Subordinate commands was necessary at the time of this study. USCENTCOM has established and instituted a process that synchronizes requirements among the Service Components and Subordinate Commands. USCENTCOM coordinates with all Service Components and Sub-Unified Commands, to include cross-directorate coordination within Headquarters USCENTCOM, to ensure all training requirements are meeting operational needs. USCENTCOM utilized this process in the development of USCENTCOM FRAGO 09-1700, USCENTCOM Theater Training Requirements.
USCENTCOM assesses it is a Service responsibility to determine the training approach they utilize to meet the training requirements for the USCENTCOM area of responsibility.

In addition to the contact named above, Patricia Lentini, Assistant Director; Nicole Harms; Mae Jones; Susan Langley; Michael Silver; Matthew Ullengren; and Chris Watson made significant contributions to this report.
However, the services did not always identify priorities and the investments needed to implement the training or a set of results-oriented performance metrics to assess the contributions that training programs have made collectively, which GAO and DOD have recognized can help ensure training investments are making progress toward achieving program goals and objectives. GAO found that the Army and Marine Corps did not complete underlying analyses and assign responsibilities for program performance prior to designing and implementing their strategies and associated training programs. DOD has taken steps to develop a strategic planning process to align service training approaches. For example, in February 2011, DOD published a strategic plan for language skills and cultural capabilities that outlines a broad departmentwide planning process. However, DOD has not yet set up internal mechanisms, such as procedures and milestones, by which it can reach consensus with the military services on priorities and investments. Without a clearly defined planning process, DOD does not have the tools it needs to set strategic direction for language and culture training efforts, fully align departmentwide efforts to develop plans and budget requests that reflect its priorities, and measure progress in implementing various initiatives. DOD components identified varying language and culture training requirements for Army and Marine Corps general purpose forces that will deploy to the U.S. Central Command area of responsibility, but the Command did not use a comprehensive process to synchronize these requirements. GAO surveyed 15 documents issued since June 2008 and found several variances with respect to the language to be trained and the type and duration of training. For example, in July 2010 the Army required that all forces deploying to either Afghanistan or Iraq complete a 4- to 6-hour online training program for language and culture. 
In September 2010, a senior Marine Corps commander directed that ground units preparing for Afghanistan deployments complete a 2-day culture course. Army and Marine Corps officials noted that training requirements changed constantly, leading to some confusion in developing training programs as well as considerable time and resources spent adjusting training. GAO found that contrary to DOD guidance, U.S. Central Command had not yet established a comprehensive process to approve training requirements and coordinate them with key stakeholders to ensure alignment with DOD guidance and obtain feedback on service training approaches. Without a comprehensive process, U.S. Central Command will not have a mechanism to identify and synchronize training for current and future operations, which may result in deploying forces that receive training that is inconsistent and may not meet operational needs. GAO recommends that the Army and Marine Corps assign responsibilities for program performance, and identify training investments and metrics; DOD establish a defined planning process with internal mechanisms, such as procedures and milestones, to align training efforts; and U.S. Central Command establish a process to identify and synchronize training requirements. DOD generally agreed with the recommendations.
The Army has several mechanisms for providing needed health care services for reserve component soldiers who become injured or ill while mobilized on active duty. Some soldiers choose to be released from duty when their mobilization orders expire and seek care through their private insurers. Eligible soldiers may also seek care through the Department of Veterans Affairs or the transitional medical assistance program. Finally, soldiers may also request to remain on active duty for medical evaluation, treatment, or processing through the Army disability evaluation system. Remaining on active duty entitles soldiers to continue receiving full pay and allowances as well as health care without charge to the soldiers and their dependents. Prior to May 1, 2004, when the Army implemented MRP, if a soldier became injured or ill while supporting GWOT operations and requested to remain on active duty for medical evaluation and treatment, the Army extended the soldier’s active duty orders using its existing ADME process. ADME was designed to accommodate reserve component soldiers injured during annual training, weekend drills, or other activities associated with their Army National Guard or Army Reserve duties that would require care beyond 30 days. At that time, a soldier choosing to be extended on active duty for medical treatment or evaluation submitted an ADME order application packet to the Army Manpower Office at the Pentagon. Officials in that office evaluated the application packet and determined (1) whether the ADME order should be approved; (2) the length of the extension, if approved; and (3) the MTF to which the soldier should be attached. Army Manpower officials made these determinations based on the information included in the application packets. 
However, as the mobilization orders for the first wave of injured and ill reserve component soldiers coming back from Iraq and Afghanistan began to expire in 2003, the Army was not prepared and lacked the infrastructure to process the ADME requests. As a result, in our February 2005 report, we documented many instances in which these injured and ill soldiers were inappropriately dropped from active duty status in the automated systems that control pay and access to medical care, resulting in significant hardships for these soldiers and their families. We reported that the Army lacked an adequate control environment and management controls over ADME. First, the Army’s guidance for processing ADME orders did not clearly define organizational responsibilities or standards for being retained on active duty orders, how soldiers would be identified as needing extensions, and how and to whom ADME orders would be distributed. Without clear and comprehensive guidance, the Army was unable to establish straightforward, user-friendly processes that would provide reasonable assurance that injured and ill reserve component soldiers receive the pay and benefits to which they are entitled without interruption. Second, the Army lacked integrated order-writing, payroll, personnel, and medical eligibility systems. As a result, the Army lacked visibility over injured or ill reserve component soldiers and sometimes lost track of these soldiers. In addition, because the Army lacked these integrated systems, information did not always flow from one system to the next as it should—resulting in disruptions to pay and benefits as well as overpayments. Third, the Army did not adequately educate reserve component soldiers about ADME or train Army personnel responsible for helping soldiers apply for ADME orders. As a result, many of the soldiers we interviewed at the time said that neither they nor the Army personnel responsible for helping them clearly understood the process. 
This confusion resulted in delays in processing ADME orders and, for some soldiers, meant that they were dropped from their active duty orders and lost pay and medical benefits for their families. Finally, the Army lacked the infrastructure and resources needed to assist soldiers trying to navigate their way through the ADME process. Specifically, the Army lacked the staff needed to process ADME paperwork and help soldiers file their ADME requests. Reserve component soldiers who were mobilized in support of GWOT operations and are receiving medical treatment or being evaluated for conditions that made them unfit for duty are referred to as medical holdover (MHO) soldiers. MHO soldiers fall into three groups. The first comprises soldiers who are being treated while still on mobilization orders. Depending on the amount of time left on these soldiers' mobilization orders, they may be treated and returned to duty or released from duty before their mobilization orders expire. Soldiers in this group fall outside the scope of our audit. The second group comprises soldiers whose mobilization orders have expired but who have been retained on active duty on MRP orders and are receiving medical treatment or being evaluated at an MTF. The third group comprises soldiers who are on MRP orders and whom the Army has agreed can return home as part of CBHCI and receive medical care through TRICARE—DOD's worldwide network of civilian health care providers—rather than remaining at an Army installation and receiving care through an MTF. The focus of this report is on the management of the second and third groups of soldiers and the processes used to retain these soldiers on active duty so that they can receive medical treatment or evaluation.
Regardless of the soldiers' MHO classification, the goals are the same—to ensure that each soldier attains the optimal level of physical or mental condition and to determine whether he or she can be returned to duty, released from active duty, or released from military service. Once an Army physician determines that a soldier has attained an optimal level of physical and mental condition, the Army determines—as part of its medical and physical evaluation board processes—whether the soldier will be returned to duty or released from military service with or without benefits. The Army's medical and physical evaluation board processes fall outside the scope of our audit and, therefore, we did not evaluate and are not reporting on any aspect of soldiers' experiences with those processes. In an effort to correct the problems we identified as part of our work related to ADME, the Army implemented the MRP program on May 1, 2004, for reserve component soldiers mobilized in support of GWOT operations. Since MRP's inception, the Army has processed about 15,000 soldiers through the program. While ADME is still used for Army reserve component soldiers who are injured or become ill during training, drills, or military operations not associated with GWOT, all eligible soldiers who were previously on ADME orders were allowed to apply for transfer to MRP orders when their original ADME orders expired. If the Army determines that a soldier (1) cannot return to duty within 60 days from the time he or she was injured or became ill or (2) can return to duty within 60 days but has 120 days or fewer beyond the return to duty date remaining on his or her mobilization order, the soldier can request to be retained on active duty on MRP orders. MRP requests are processed through Human Resource Command-Alexandria (HRC-A).
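The retention rule described above amounts to a two-condition date check. A minimal sketch in Python, using hypothetical function and parameter names and illustrative dates—this sketches the rule as stated in the report, not any actual Army system:

```python
from datetime import date

def may_request_mrp(injury_date: date,
                    projected_return_to_duty: date,
                    mobilization_order_end: date) -> bool:
    # Condition (1): the soldier cannot return to duty within 60 days
    # of the injury or illness.
    days_to_return = (projected_return_to_duty - injury_date).days
    if days_to_return > 60:
        return True
    # Condition (2): the soldier can return within 60 days, but 120 days
    # or fewer remain on the mobilization order beyond the return date.
    days_left_after_return = (mobilization_order_end - projected_return_to_duty).days
    return days_left_after_return <= 120

# A soldier expected back in 30 days with about 90 days left on the
# order afterward may request MRP orders.
print(may_request_mrp(date(2005, 1, 1), date(2005, 1, 31), date(2005, 5, 1)))  # True
```

Under this reading, a soldier expected back within 60 days who still has well over 120 days remaining on the mobilization order would simply finish treatment on the existing order rather than requesting MRP retention.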
Once the MRP request packet has been submitted and approved by HRC-A, the injured or ill reserve component soldier is attached to an MRPU that is responsible for command and control of mobilized reserve component soldiers who are not medically fit for duty. The MRPU consists of a unit commander, an executive officer, platoon sergeants, and supply and other administrative support staff. These soldiers are also assigned a case manager located at the MTF who is responsible for helping reserve component soldiers schedule medical appointments and understand what steps they need to take to progress through the treatment or evaluation process—to include applying for new MRP orders if necessary. According to the Army’s MRP procedural guidance, initial and any subsequent MRP orders are written for 179 days. Although the procedural guidance does not limit the number of times or the total number of days that soldiers may be on MRP orders for the purpose of medical treatment or evaluation, according to a DOD directive, if a soldier remains medically unfit for duty for a year, the Army is to examine whether the soldier can be returned to duty, released from active duty, or put before a medical evaluation board and entered into the physical disability evaluation process to determine the likelihood of return to duty. In March 2004, in conjunction with MRP, the Army also implemented CBHCI. CBHCI allows selected reserve component soldiers to return to their homes and receive medical care through TRICARE—DOD’s worldwide network of civilian health care providers—rather than remaining at an Army installation and receiving care through an MTF. Unless specifically excluded by the Army’s minimum eligibility criteria, all soldiers on MRP orders may be considered for CBHCI. 
Before a soldier may be considered for CBHCI, he or she must be able to perform duties within a limited duty profile; be unable to return to duty within 60 days; be unencumbered by legal or administrative action or holds; reside in a state or regional catchment area participating in CBHCI; have a residence with a valid street address (not just a PO Box) and phone number that will accommodate the soldier's medical condition; volunteer to remain on or extend active duty under MRP status while undergoing medical treatment and adjudication of unresolved medical conditions; have access to transportation to and from medical appointments, as well as his or her designated place of duty; have a preliminary diagnosis and care plan that can be supported by CBHCI (appropriate medical care is available within 50 miles of the soldier's residence); and live within 50 miles of a duty location that has duties to be performed within the limits of the soldier's physical profile. According to Army guidance, in most cases, soldiers should not be considered for CBHCI if their medical problems involve issues not commonly treated by civilian practitioners—including exposure to depleted uranium or chemical, biological, radiological, or nuclear agents or a confirmed or working diagnosis of leishmaniasis. The Army currently has eight CBHCOs in operation providing coverage for the continental United States (CONUS). The CBHCOs serving CONUS are located in Alabama, Arkansas, California, Florida, Massachusetts, Utah, Virginia, and Wisconsin. Each CBHCO serves the soldiers living in a particular geographic region. For example, the Alabama CBHCO, which is located in Birmingham, Alabama, serves a multistate region comprising Alabama, Kentucky, Mississippi, and Tennessee. The Army has also located smaller CBHCO facilities in Alaska, Hawaii, and Puerto Rico to serve soldiers living outside CONUS.
Like soldiers who are being treated at MTFs, soldiers attached to a CBHCO are assigned a case manager who is responsible for helping them schedule medical appointments and understand what steps they need to take to progress through the treatment or evaluation process and a platoon sergeant who is responsible for command and control functions—such as making sure the soldiers are reporting to their assigned duty stations. However, unlike soldiers treated through an MTF, these functions are performed remotely in that the Army physician, case manager, and platoon sergeant are physically located at the CBHCO and the injured or ill soldier is at his or her residence—possibly in another state. The Army's MRP program has resolved most of the pay-related problems we identified previously with ADME. As a result, most reserve component soldiers who request to be retained on active duty to receive medical treatment or evaluation have not experienced delays in obtaining MRP orders and therefore have not experienced significant gaps in pay and benefits. In response to our prior work in this area, the Army has fully implemented 17 of the 22 recommendations we made in our previous report and partially implemented 2 recommendations aimed at improving training for reserve component soldiers in the MRP program and the Army personnel responsible for managing these soldiers. The 3 remaining open recommendations address actions needed to improve the Army's order-writing, pay, personnel, and medical eligibility systems. These actions are part of a continuing Army-wide systems integration challenge that affects all soldiers, including those in the MRP program. Because the Army's systems are not integrated and therefore the same or similar data must be manually entered into multiple systems, information that may affect a soldier's pay and access to medical care is not always appropriately updated in each system.
When this happens, it can result in disruptions to pay and benefits or, conversely, overpayments and potentially unauthorized access to benefits. See appendix II for a complete list of prior recommendations and their implementation status. In response to our previous work related to ADME, the Army has implemented a more streamlined, customer-friendly process for requesting MRP orders, implemented comprehensive guidance intended to effectively manage injured and ill reserve component soldiers, provided a more effective means of tracking injured and ill reserve component soldiers in the MRP program, addressed the issues we identified previously related to the Army's capacity to house and manage injured and ill reserve component soldiers, and developed performance measures to evaluate MRP. According to Army officials and injured reserve component soldiers we interviewed, these improvements have virtually eliminated the widespread delays in order processing that were associated with the ADME request process. Unlike the ADME request process, MRP requests are not processed through the Army Manpower Office at the Pentagon. Instead, once signed and approved by the MRPU commander, MRP requests are sent directly to HRC-A to be processed. The Army Manpower Office, which is a policy-setting organization, was ill-equipped to handle the workload associated with processing ADME orders. As a result, soldiers' active duty orders often expired before ADME orders were approved—creating gaps in pay and benefits. In addition, because all MRP orders are issued for 179 days, MRP has reduced the workload associated with processing orders. ADME orders were often issued with a much shorter duration and therefore soldiers often had to reapply for extensions every 30, 60, or 90 days. According to the metrics recently developed based on our recommendation, the Army has met and surpassed its 98 percent goal of processing all MRP orders on time.
However, out of the 25 randomly selected injured or ill reserve component soldiers we interviewed, only 1 reported that he experienced an order processing delay. As a result, the wounded national guardsman stated his family's medical benefits were temporarily disrupted for approximately 2 weeks until the MRP order was processed. Based on recommendations included in our previous report, the Army has improved its guidance related to retaining soldiers on active duty so that they can receive medical treatment. In July 2006, the Army issued the Department of the Army Medical Holdover (MHO) Consolidated Guidance, which includes comprehensive guidance for effectively managing the MRP program. Among other things, the guidance now provides specific organizational responsibilities for administering MRP; an order distribution list covering the command and control, pay, personnel, and medical eligibility functions; eligibility criteria for being retained on active duty, including guidelines for extension of orders beyond 1 year; criteria that clearly establish priorities for where a soldier may be attached for medical care (i.e., medical facility has the specialties and the capacity needed to treat the soldier, proximity to soldiers' residence); minimum eligibility criteria for soldiers applying for MRP and ADME; avenues through which eligible soldiers may apply for MRP and ADME; a list and examples of the specific documentation required to retain or extend active duty orders for the purpose of medical treatment or evaluation; and a list of the entitlements available for injured reserve component soldiers and their dependents. Although the Army continues to lack an integrated personnel system to provide visibility over all soldiers—including injured and ill reserve component soldiers—the Army has, as we recommended, increased use of the Medical Operational Data System (MODS) for this purpose.
This, combined with improved guidance related to the distribution of MRP orders, has improved the Army's visibility over injured and ill reserve component soldiers. In response to recommendations included in our previous report, the Army now requires that all Army installations use MODS to track the administrative and clinical status of these soldiers and makes MHO unit commanders responsible for the accuracy of the data. For example, MODS contains information such as the number of days in the program, the MRP order start and end date, the unit the soldier is attached to, and information on the soldier's medical status (e.g., orthopedic, neurological, internal medicine). Previously, installations were not required to use MODS and therefore used their own local databases to track the status of injured and ill soldiers—limiting Army-wide visibility over these soldiers. For example, the Army previously did not know how many reserve component soldiers had been extended on active duty to receive medical treatment or the duration of the extended service. Based on our assessment of the data contained in MODS as of July 25, 2006, the Army has greatly improved the completeness and reliability of MODS data and its ability to monitor the status of injured and ill soldiers. For example, we traced the data from source documents to MODS for 564 soldiers and noted only 5 cases in which the soldier was not listed in MODS. (Additional information on the procedures used to assess the reliability of MODS data is discussed in app. I.) Further, all the sites we visited used MODS-generated reports to enhance their ability to monitor soldiers whose MRP orders would soon expire. These reports list all soldiers in the MRP program whose orders will expire in 30, 60, or 90 days—alerting Army officials that each soldier may need to submit another request to be retained on active duty for an additional 179-day period.
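The 30-, 60-, and 90-day expiration reports described above are, in essence, a date filter over order end dates. A minimal sketch with hypothetical stand-in records—the report does not describe MODS's actual schema:

```python
from datetime import date

# Hypothetical stand-in for MODS records; field names are illustrative.
soldiers = [
    {"name": "Soldier A", "mrp_order_end": date(2007, 3, 10)},
    {"name": "Soldier B", "mrp_order_end": date(2007, 4, 15)},
    {"name": "Soldier C", "mrp_order_end": date(2007, 1, 20)},
]

def expiring_within(records, as_of, days):
    # Flag soldiers whose MRP orders lapse within the window, so staff
    # know who may need to submit another 179-day retention request.
    return [r["name"] for r in records
            if 0 <= (r["mrp_order_end"] - as_of).days <= days]

print(expiring_within(soldiers, date(2007, 1, 15), 30))  # ['Soldier C']
print(expiring_within(soldiers, date(2007, 1, 15), 60))  # ['Soldier A', 'Soldier C']
```

Running the same filter at 30, 60, and 90 days yields progressively wider watch lists, which matches how the installations used the MODS-generated reports.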
In addition, new guidance related to maintaining visibility over injured or ill soldiers who are transferred from one MTF to another has improved the Army’s ability to monitor the movement of these soldiers. Previously, according to Army officials, when ADME orders were used to attach a soldier to an MTF for treatment, the receiving MTF was not notified in advance of the soldier’s arrival. As a result, the receiving MTF had no knowledge that it was responsible for the injured or ill soldier until he or she arrived. Such knowledge is necessary to ensure that the soldier is assigned a case manager and receives appropriate medical attention. Now, according to the Army’s MHO guidance, the losing unit’s commander must contact the gaining unit’s commander and coordinate the movement of injured or ill reserve component soldiers. According to Army officials at the installations we visited, they were not experiencing the problems they had previously related to the transfer of soldiers. The Army has also addressed most of the problems we identified previously related to inadequate administrative support and resources by taking steps to improve its capacity to house and manage injured and ill reserve component soldiers. The Army has improved its capacity to house and manage injured and ill reserve component soldiers by implementing CBHCI and by increasing the overall number of case managers it has on staff. As discussed previously, CBHCI allows injured and ill reserve component soldiers to return home, while remaining on active duty MRP orders, to receive medical treatment through a civilian provider in DOD’s TRICARE network. As of January 2007, of the 3,358 soldiers who the Army reported were on MRP orders, about 1,365—or 41 percent—were receiving care through civilian providers as part of CBHCI. Allowing these soldiers to return home for treatment reduces the number of injured and ill soldiers being housed and treated at Army installations. 
According to the Army’s MHO capacity report, as of January 2007, all of its installations reported having excess capacity. In addition, the Army has reduced its soldier-to-case manager ratios. When we last reported, the Army had 105 case managers and maintained, at best, a 50-to-1 soldier-to-case manager ratio. As of January 2007, the Army reported having 208 case managers providing coverage to soldiers at Army installations and participating in CBHCI and soldier-to-case manager ratios for each location ranging between 12-to-1 and 24-to-1. As noted previously, we did not evaluate the quality of the medical care or facilities provided or other quality of life issues. In addition, based on our prior recommendation, the Army has begun to survey injured soldiers about their satisfaction with MRP and CBHCI. According to the results of the first survey given in December 2006, 81 percent of soldiers receiving care at an MTF and 93 percent of soldiers receiving care through CBHCI were either completely satisfied or somewhat satisfied with their case management. In response to the problems we identified with ADME, the Army has improved the information it provides to injured or ill reserve component soldiers about MRP by creating the Medical Holdover (MHO) Soldier’s Handbook. The handbook provides injured and ill reserve component soldiers with guidance on key policies and standards of conduct when transitioning to MRP orders—including the role of soldiers’ primary care providers and case managers, as well as soldiers’ rights and responsibilities related to receiving medical treatment. While the soldier’s handbook is a big improvement over the lack of information available to soldiers under ADME, 4 of the 25 soldiers we interviewed reported that they did not receive the handbook. Providing these soldiers with MRP guidance is an important part of easing their burden and allowing them to focus on recovering. 
In addition, some enhancements could be made to the soldier's handbook. For example, the Important Numbers section of the handbook does not contain point-of-contact information for soldiers to use if they need to resolve problems associated with pay and benefits—including the Defense Finance and Accounting Service (DFAS) ombudsman responsible for assisting soldiers with pay-related problems. As discussed later, when pay and benefit discrepancies have occurred, some soldiers we interviewed expressed frustration because information on how to resolve these discrepancies was not always readily available. Further, the Army has not established specific Army-wide training standards for MRP units—a practice common in all other Army units. As a result, the training and information provided to injured reserve component soldiers varied from installation to installation—with only 4 of the 17 installations we contacted having formalized or documented training programs for soldiers entering the MRP program. For example, some installations provided only a general overview of the MRP program while others provided a series of comprehensive training courses on the program benefits and responsibilities related to MRP and CBHCI. The Army's Systems Analysis and Review team—which was formed in May 2005 to assess the status of each MRP unit and make recommendations for improvement—found similar issues related to training across the installations it reviewed. Similarly, the Army lacks training standards for the Army personnel responsible for managing injured and ill reserve component soldiers—the majority of whom are reserve component soldiers themselves.
According to the new Department of the Army Medical Holdover (MHO) Consolidated Guidance, the Army Medical Command is responsible for providing training to case managers and CBHCO medical staff and the Installation Management Command (IMCOM) is responsible for training MRPU command and control staff to ensure their competency to perform their duties. According to the Army guidance, MRPU staff are supposed to receive instruction in finance and personnel management. In an effort to address our prior recommendation, IMCOM developed formal training that it offers approximately every 6 months. However, at the sites we contacted, the adequacy of the training provided at the installation upon the arrival of new staff was inconsistent. For example, 8 of the 17 Army installations we contacted about training relied exclusively on the IMCOM training and on-the-job training. However, for 5 of these installations, the reserve component soldier who had previously filled the position was gone before his or her replacement arrived—diminishing the effectiveness of on-the-job training. Further, only 4 of the 17 installations we contacted had a formal or documented training program for personnel responsible for managing injured and ill reserve component soldiers. For example, they provided more structured on-the-job training—requiring that new staff train under the more experienced staff before taking over the position—or, in some cases, installations appointed training officers and provided formal training for newcomers. Effective training, including on-the-job training, and detailed desk procedures describing the duties associated with the position to be filled could enhance the continuity of care provided to injured and ill reserve component soldiers. The three recommendations from our prior work that the Army has not yet addressed were all aimed at improving the Army's order-writing, pay, personnel, and medical eligibility systems.
These actions are part of a continuing Army-wide systems integration challenge that affects all soldiers, including those in the MRP program. Because the Army’s systems are not integrated and therefore the same or similar data must be manually entered into multiple systems, information that may affect a soldier’s pay and benefits is not always appropriately updated in each system. When this happens, it can result in disruptions to pay and benefits or, conversely, overpayments and potentially unauthorized access to benefits. DOD has a major system modernization effort under way known as the Defense Integrated Military Human Resources System for Personnel and Pay (DIMHRS), intended to ultimately replace more than 80 legacy systems, including all pay and personnel systems. However, as we have reported, DOD has encountered a number of challenges with DIMHRS, including the program’s overly schedule-driven approach and DOD’s difficulty in overcoming its long-standing cultural resistance to departmentwide solutions. As a result, the Army is not scheduled to begin implementing DIMHRS until April 2008. When the Army retains a soldier on active duty by issuing an MRP order, it must update and extend the soldier’s active duty pay and benefits status in the appropriate pay, personnel, and medical eligibility systems. However, because these systems are not integrated, information that affects a soldier’s pay and access to benefits must be manually entered into each system, which can result in delayed processing or input errors that may cause disruptions in pay and benefits. For example, when a soldier is retained on active duty MRP orders, if information related to the soldier’s active duty status and resulting medical eligibility is not promptly updated in the medical eligibility system, it can result in a disruption to the medical benefits available to the soldier’s family through TRICARE. 
According to 7 of the 25 soldiers we interviewed, their families experienced problems getting medical appointments because the soldiers’ active duty status was not updated in the medical eligibility system in a timely manner and therefore it appeared as if they and their families were no longer eligible to receive TRICARE benefits. Although soldiers can resolve disruptions to their pay and benefits by presenting copies of their MRP orders to the appropriate pay, personnel, and medical eligibility staff, some injured soldiers expressed frustration because information on how to resolve pay and benefit discrepancies was not always readily available. According to some of the soldiers we interviewed, their MRP unit commanders and unit support staff were often reserve component soldiers new to their positions and with no prior experience dealing with the Army’s pay and personnel processes. As a result, they did not always know how to help soldiers resolve pay and benefit discrepancies, creating an additional burden for soldiers who may already be under considerable stress because of their medical conditions. The lack of integrated pay, personnel, and other systems can also cause problems when soldiers are released from active duty but still have time left on their MRP orders. When the Army processes orders that affect pay, including MRP orders, the order end date, or stop pay date, is entered into the Army’s pay system. If soldiers are released from active duty before their MRP orders expire, the finance officials must manually adjust the stop pay dates recorded in the pay system or else these soldiers will continue to receive active duty pay. As we reported in the past, when the Army initiates collection actions to recoup the debt associated with overpayments such as these, depending on the indebted soldiers’ financial situation, these actions can create financial hardships for these soldiers. 
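The manual stop pay adjustment described above can be checked mechanically: if the stop pay date recorded in the pay system falls after the soldier's separation date, the soldier continued to draw active duty pay after release. A minimal sketch of such a reconciliation, with hypothetical function and field names:

```python
from datetime import date

def overpayment_days(separation_date, stop_pay_date):
    # Days of active duty pay issued after the soldier separated;
    # zero when the stop pay date was adjusted correctly.
    return max(0, (stop_pay_date - separation_date).days)

def flag_discrepancies(records):
    # Surface cases where the pay system's stop pay date trails the
    # separation date recorded in the personnel system.
    return [r["id"] for r in records
            if overpayment_days(r["separation_date"], r["stop_pay_date"]) > 0]

records = [
    {"id": 1, "separation_date": date(2006, 3, 1), "stop_pay_date": date(2006, 3, 1)},
    {"id": 2, "separation_date": date(2006, 3, 1), "stop_pay_date": date(2006, 3, 29)},
]
print(flag_discrepancies(records))  # [2]
```

A periodic comparison of this kind, run across the two systems, is the sort of control that can surface overpayments without waiting for a soldier to report a problem.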
For example, we reported that hundreds of battle-injured soldiers were pursued for repayment of military debts through no fault of their own, including at least 74 soldiers whose debts had been reported to credit bureaus and private collection agencies at the time we initiated our audit in June 2005. In response to our previous work in this area, DFAS implemented a process intended to identify discrepancies between the order end date in its reserve component pay system and the active duty release date reflected in the Army's personnel separation system. According to DFAS officials, they perform this comparison monthly and forward any discrepancies to Army installation finance officials to identify and resolve potential overpayments. Although accurately stopping pay when a soldier is released early from active duty is a documented challenge for the Army, the rules governing the use of leave for soldiers on MRP orders present an additional challenge with respect to overpayments. According to the Department of the Army Medical Holdover (MHO) Consolidated Guidance, soldiers on MRP orders must sell back all unused leave before being released from active duty. In contrast, soldiers on regular mobilization orders are not required to sell back their leave and have the option of taking unused leave before being released from active duty. As a result, while these soldiers are on leave, and before they have been released from active duty, DFAS has time to make adjustments to the stop pay date in the payroll systems and straighten out potential pay issues. This same time is not available to DFAS for soldiers being released from MRP orders. To determine whether the Army's procedure for detecting potential overpayments has been effective, we used MODS data to select a stratified random sample of all soldiers released early from MRP, from May 6, 2004, through November 1, 2006.
For the 380 soldiers we selected, we obtained a copy of each soldier’s Certification of Release or Discharge from Active Duty, DD Form 214, and compared the soldier’s separation date with the stop pay date recorded in the pay system. If the stop pay date was later than the soldier’s separation date, we concluded that the soldier had been overpaid. Based on our analysis, we determined that the Army had overpaid soldiers in 44 of the cases we tested. Overpayments ranged from about $65 to $32,000; in 29 cases the overpayment was less than $3,000, and in 37 cases the soldier was overpaid for fewer than 30 days. Army officials were unaware of these overpayments until we brought them to the Army’s attention. In projecting our sample results to the population of 11,575 soldiers released early from MRP orders, we estimate that the Army overpaid 12 percent of these soldiers a total of at least $2.2 million. Although the Army has identified several factors associated with CBHCI that put soldiers at greater risk of being retained on active duty longer than medically necessary, the Army currently lacks the data needed to determine whether it is effectively managing this risk. According to the Army’s metrics, soldiers treated by civilian providers through CBHCI are, on average, retained on active duty 117 days longer than soldiers treated at MTFs—which could indicate that the Army is not managing the added risks associated with CBHCI. However, the metrics the Army uses to compare soldiers treated at MTFs with those treated through CBHCI may not be based on comparable populations. For example, according to Army officials, the metrics for soldiers treated at MTFs may be skewed lower because of the Army’s CBHCI selection criteria. Specifically, the CBHCI selection criteria exclude soldiers whose injuries or illnesses are expected to be treated within 60 days.
Without more information about the patient populations that constitute these two groups, the Army does not know whether it is effectively managing the risk that soldiers treated through CBHCI may be retained longer than medically necessary. Whether a soldier is treated at an MTF or by a civilian provider as part of CBHCI, the Army’s goal is the same—to ensure that the soldier attains the optimal level of physical or mental condition and to determine whether he or she can be returned to duty, released from active duty, or released from military service. However, according to the Army, there is a greater risk that soldiers treated through CBHCI may be retained on active duty longer than medically necessary. According to the Army, this risk is greater because of (1) the remote physical locations of soldiers being treated from home, which precludes the Army from directly monitoring their medical care and progress, and (2) the reliance on civilian doctors, who may not be as familiar with Army standards of care or MRP program goals. As discussed previously, each soldier participating in CBHCI is assigned an Army physician, case manager, and platoon sergeant who are physically located at a regional CBHCI operating location, whereas the injured or ill soldier is physically located at his or her home—which could be in another state. For example, an Army physician, case manager, and platoon sergeant located at the CBHCO in Birmingham, Alabama, are responsible for managing injured or ill soldiers who live in Alabama, Mississippi, Tennessee, and Kentucky. Unlike soldiers treated at MTFs, soldiers participating in CBHCI are not treated by Army physicians. Instead, the Army physician and case manager assigned to an injured soldier participating in CBHCI review medical documentation provided by the civilian doctor to monitor the soldier’s progress toward attaining an optimal level of physical or mental condition. 
Similarly, the injured soldier’s platoon sergeant is not personally overseeing the soldier’s well-being. Instead, platoon sergeants located at the CBHCI operating location call the soldiers assigned to them each day to confirm that the soldiers have reported for duty. To ensure that soldiers are not retained on active duty longer than medically necessary, the Army actively monitors the status of individual soldiers, regardless of whether they are being treated at MTFs or through CBHCI. For example, at each of the four CBHCI regional operating locations we visited, case managers, platoon sergeants, and Army physicians met on a biweekly basis to discuss the status of each soldier approaching 180 days, 270 days, and 365 days on MRP orders, including a discussion of past appointments, scheduled appointments, and the steps remaining in the civilian providers’ treatment plans. Although the Army recently started comparing the average length of stay of soldiers treated by civilian providers through CBHCI with the average length of stay of soldiers treated at MTFs, these metrics may be misleading. According to the Army’s metrics, the average length of stay, before being returned to duty or medically separated, for soldiers treated by civilian providers through CBHCI is 288 days, whereas the average length of stay for soldiers treated at MTFs is 171 days. These metrics show that soldiers treated through CBHCI are retained on active duty 117 days longer than soldiers treated at MTFs—which might indicate that soldiers treated through CBHCI are more likely to be retained on active duty longer than medically necessary. Army officials have suggested that the metrics may not accurately reflect how well they are managing the risk that soldiers treated through CBHCI may be retained on active duty longer than medically necessary.
According to the Army’s CBHCI selection criteria, soldiers whose injuries are expected to be treated within 60 days are not eligible to participate in CBHCI, causing the metrics for soldiers treated at MTFs to be skewed lower than those for soldiers treated through CBHCI. However, the Army does not track the information needed to identify and remove data that may inappropriately skew its metrics, so it cannot ensure that the populations of soldiers being treated at MTFs and through CBHCI are comparable. Without additional information about the patient populations that make up these two groups, the Army does not know whether it is effectively managing the risk that soldiers treated through CBHCI may be retained on active duty longer than medically necessary. Through the corrective actions taken in response to our prior report on this topic, including developing comprehensive MRP guidance, implementing improved MRP application processes, and developing performance measures to evaluate MRP, the Army has demonstrated its commitment to improving its processes and programs for managing and paying injured reserve component soldiers who request to be retained on active duty to receive medical care. We recognize that it may take several more years to fully address the pay-related problems stemming from weaknesses in the Army’s automated systems that control pay and access to pay-related benefits. In the interim, the Army can take several steps in the areas of training, improved CBHCI performance metrics, and payroll and personnel system reconciliation procedures to further improve the implementation and management of its MRP and CBHCI programs.
We reiterate our previous recommendations to design and implement integrated order-writing, pay, personnel, and medical eligibility systems that provide visibility over injured and ill reserve component soldiers and ensure that the order-writing system automatically updates the pay, personnel, and medical eligibility systems. We also recommend that the Secretary of the Army direct the Assistant Secretary of Manpower and Reserve Affairs, in coordination with the Army’s Office of the Surgeon General, the Installation Management Command, and the Defense Finance and Accounting Service, to take the following six actions:

- Develop and apply consistent Army-wide standards for installation-level training of new MRPU staff, including the use of desk procedures, to help ensure that they are adequately trained before they assume their new job responsibilities.
- Develop and apply consistent standards for training of reserve component soldiers in the MRP program to ensure that they understand the requirements, benefits, and processes associated with the program.
- Develop and disseminate points of contact, including the names, telephone numbers, and e-mail addresses, for the Army officials responsible for assisting injured or ill reserve component soldiers with resolving discrepancies in pay or benefits. Also include in this information the name, telephone number, and e-mail address of the DFAS ombudsman responsible for assisting injured or ill reserve component soldiers with pay-related issues.
- Require that the local finance offices at Army installations reconcile all discrepancies between the stop pay date recorded in the Army’s payroll system and the separation date recorded in the Army’s personnel system and adjust the Army’s payroll and personnel systems accordingly.
- Evaluate the efficacy of allowing reserve component soldiers to take unused leave before they are released from active duty.
- Develop metrics that will allow comparison between the length of stay for soldiers treated through CBHCI and those treated at MTFs to determine whether the Army is effectively managing the additional risk associated with CBHCI.

In its written comments on a draft of this report, which are reprinted in appendix III, DOD concurred with five of our six recommendations and partially concurred with the remaining recommendation. DOD partially concurred with our recommendation to develop metrics that will allow a comparison between the length of stay for soldiers treated through CBHCOs and those treated at MTFs. According to DOD, timely access to care for soldiers treated through CBHCOs depends on the willingness of local civilian health care providers to accept TRICARE patients and the variation in the number and types of health care providers available by geographic region; therefore, a soldier’s length of stay at a CBHCO cannot be directly compared to that of soldiers in MRPUs. We agree that the access to care timeline for soldiers treated by civilian TRICARE providers may be longer than for soldiers treated at MTFs, which is why we have recommended that the Army develop metrics to determine how well it is managing this risk. In its written response, DOD has proposed developing metrics to compare administrative process timelines for CBHCOs and MRPUs. Although DOD did not provide more specific information on the proposed metrics, the intent of our recommendation could be satisfied with metrics that allow a comparison of the operating efficiency of these programs if the Army appropriately excluded soldiers whose injuries are expected to be treated within 60 days and who thus would not be eligible to participate in CBHCI, allowing a more meaningful comparison of the two populations.
Although DOD concurred with our recommendation to reconcile all discrepancies between its payroll and personnel records, in commenting on this recommendation, DOD asserted that the findings in our report reflect one-half of 1 percent of the sample population. However, DOD’s assertion is incorrect. As discussed in appendix I, we selected a stratified random sample of 380 soldiers from the population of 11,575 soldiers released from active duty, from May 6, 2004, through November 1, 2006, and before their MRP orders expired. Our use of statistical sampling allowed us to project our sample results to the population of 11,575 soldiers released early from MRP orders. Based on our sampling results, we estimated that the Army overpaid 12 percent of these soldiers a total of at least $2.2 million. We will send copies of this report to interested congressional committees, the Secretary of the Army, and the Director of the Office of Management and Budget. We will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-9095 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Staff members who made key contributions to this report were Diane Handley, Assistant Director; Francine DelVecchio; Jamie Haynes; and Christopher Spain. To determine whether the Army’s Medical Retention Processing (MRP) program has resolved the issues we identified previously with the Active Duty Medical Extension (ADME) program, we reviewed applicable policies, procedures, and program guidance; observed MRP operations; and interviewed appropriate agency officials. 
Specifically, we obtained and reviewed procedural guidance for reserve component soldiers on medical retention processing orders, including the Department of the Army Medical Holdover (MHO) Consolidated Guidance, Medical Holdover (MHO) Soldier’s Handbook, and Department of Defense (DOD) and Army regulations. We also relied on the Standards for Internal Control in the Federal Government to provide a framework for assessing the Army’s MRP program and its Community-Based Health Care Initiative (CBHCI). We applied the policies and procedures prescribed in these documents to the observed and documented procedures and practices followed by the key Army and DOD components involved in providing active duty pays and medical benefits to reserve component soldiers. We selected installations for review based on the reported populations of medical retention processing and medical holdover (MHO) soldiers, as well as other specialized traits, including presence of regional medical commands. The installations we selected for review were four of the top five installations based on the size of the MRP and MHO populations. The installations we visited are listed in table 1. At each installation, we interviewed officials who were responsible for counseling soldiers on the MRP program, officials who prepared and submitted the MRP application packets, case managers, primary care managers, MHO unit commanders, and installation payroll personnel. We obtained documentation on and performed walk-throughs of the process to request an MRP order for a reserve component soldier, the command and control structure of MHO units, the case management function, installation MRP tracking systems, as well as the Medical Operational Data System (MODS) and the medical-extension-to-pay system interface. 
We also randomly selected and interviewed 25 injured or ill reserve component soldiers from the installations we visited to ensure that the Army’s MRP program was operating as effectively as Army officials had asserted. Specifically, we asked these soldiers questions related to their experiences filing for and receiving MRP orders, the accessibility of Army staff administering the program, and whether they had any problems related to their military pay and medical benefits while in the MRP program. In addition to the 4 Army installations we visited, we contacted Army officials at 13 other Army installations to obtain information on training provided to those responsible for managing and treating injured or ill reserve component soldiers. Specifically, we asked whether the medical retention processing units (MRPU) provided formalized training for new staff when they arrive at the MRPUs for duty and, if so, whether training officers were assigned to coordinate the training. We also interviewed officials from the following offices or commands and obtained documentation on various aspects of MRP:

- National Guard Bureau, Arlington, Virginia
- Army Human Resource Command, Alexandria, Virginia
- U.S. Army Reserve Command, Fort McPherson, Georgia
- Army’s Office of the Surgeon General, Falls Church, Virginia
- Army G-1, Army Pentagon, Washington, D.C.
- Army Task Force CBHCO-West, Fort Sam Houston, Texas
- Army Task Force CBHCO-East, Fort Jackson, South Carolina
- Defense Finance and Accounting Service (DFAS), Indianapolis, Indiana

As part of our work with the Army’s Office of the Surgeon General, we requested and analyzed all available data and metrics related to MRP and CBHCI—including metrics related to (1) soldiers’ satisfaction with these programs, (2) the amount of time injured or ill reserve component soldiers had spent on MRP orders (by treatment location), and (3) the timeliness of processing MRP requests.
With respect to the Army’s automated systems, we assessed whether they provided reasonable assurance that once an MRP order was issued, the appropriate pay, personnel, and medical eligibility systems are updated in an accurate and timely manner. To accomplish this, we interviewed and obtained available documentation from individuals responsible for entering MRP order transactions into the Army’s order-writing, pay, personnel, and medical eligibility systems. We did not test computer security or access controls or test individual transactions. To assess the reliability of the Army’s MODS, which houses, among other things, information on soldiers in the MRP program, we (1) reviewed existing documentation related to the data sources, such as patient rosters and MRP application packages; (2) interviewed knowledgeable agency officials about the data, including officials at the Office of the Surgeon General, case managers, and MRPU commanders; (3) manually tested the data for missing data items, outliers, and obvious errors; and (4) traced the data from source documents to MODS for 564 soldiers and noted only 5 cases in which the data were lacking. We determined that the data were sufficiently reliable for the purposes of this report. To determine whether the Army had overpaid reserve component soldiers who were released early from MRP, using MODS data we selected a stratified random sample of 380 soldiers from the population of 11,575 soldiers released from active duty, from May 6, 2004, through November 1, 2006, and before their MRP orders expired. We stratified the population into two groups based on whether the soldier had been released early from the initial MRP order or an extended MRP order. With this probability sample, each soldier in the population had a known, nonzero probability of being selected. Each selected soldier was subsequently weighted in the analysis to account statistically for all soldiers in the population, including those who were not selected. 
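The weighting step described above can be illustrated with a short sketch. The sample records, weights, and pay rates below are invented for illustration (the actual analysis used 380 sampled soldiers in two strata); the overpayment amount, days paid past the separation date times the soldier’s base pay per day, follows the calculation described in this report.

```python
from datetime import date

# Invented sample records; field names and values are illustrative
# assumptions, not actual DFAS payment file data. `weight` is the
# stratum sampling weight: the number of soldiers in the population
# that each sampled soldier statistically represents.
sample = [
    {"separation": date(2006, 6, 1), "stop_pay": date(2006, 6, 20),
     "base_pay_per_day": 90.0, "weight": 30.0},
    {"separation": date(2006, 8, 15), "stop_pay": date(2006, 8, 15),
     "base_pay_per_day": 110.0, "weight": 30.0},  # paid correctly
    {"separation": date(2006, 3, 10), "stop_pay": date(2006, 5, 10),
     "base_pay_per_day": 100.0, "weight": 31.0},
]

def overpayment(record):
    """Days paid past the separation date times daily base pay;
    zero when pay stopped on or before separation."""
    extra_days = (record["stop_pay"] - record["separation"]).days
    return max(extra_days, 0) * record["base_pay_per_day"]

# Weighted estimates projecting the sample to the full population.
est_overpaid_count = sum(r["weight"] for r in sample if overpayment(r) > 0)
est_overpaid_total = sum(r["weight"] * overpayment(r) for r in sample)
print(est_overpaid_count, round(est_overpaid_total, 2))
```

In this toy example, the two overpaid records project to an estimated 61 overpaid soldiers in the population; the report’s actual estimates, with their 95 percent confidence intervals, come from the same kind of weighted sum over the stratified sample.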
Because we selected a sample of soldiers, our results are estimates of the population and thus are subject to sample errors that are associated with samples of this size and type. Our confidence in the precision of the results from this sample is expressed in 95 percent confidence intervals, which are expected to include the actual results in 95 percent of the samples of this type. All percentage estimates in this report have a margin of error of plus or minus 5 percent or less. For the 380 soldiers we selected, we obtained a copy of each soldier’s Certification of Release or Discharge from Active Duty—DD Form 214— and compared the soldier’s separation date with the stop pay date recorded in the DFAS monthly Global War on Terrorism Army National Guard/Reserve payment file from October 2001 through December 2006 containing 80,972,329 component of pay level records. In cases where the Army’s pay system showed a pay stop date that occurred after the soldier’s separation date, we calculated the amount of the overpayment based on the soldier’s base pay per day while on active duty during the period in question. In cases where the pay system did not show a pay stop date and a soldier was still receiving active duty pay, we calculated the amount of the overpayment based on the soldier’s base pay per day while on active duty during the period in question up until the date of our test. To determine whether the Army has effectively managed the risk that soldiers treated through CBHCI may be retained on active duty longer than medically necessary, we reviewed applicable policies, procedures, and program guidance; observed CBHCI operations; interviewed appropriate agency officials; and obtained and analyzed all data and performance metrics related to CBHCI operations. The community-based health care organizations (CBHCO) we selected for review (see table 2) were four of the top six CBHCOs based on the number of soldiers. 
At each CBHCO, we interviewed case managers, platoon sergeants, CBHCO commanders, and the Army physicians responsible for determining whether injured or ill soldiers have attained an optimal level of physical or mental condition. We obtained documentation and observed the command and control structure, the case management function, and the systems and procedures used to track soldiers’ administrative and medical status. Using Army data, we also analyzed the amount of time injured or ill soldiers were on MRP orders—comparing the length of stay data for soldiers participating in CBHCI with the same data for soldiers treated solely at military treatment facilities (MTF). We briefed DOD, Department of the Army, Army Reserve, and National Guard Bureau officials from the selected sites on the details of our audit, including our findings and their implications. We conducted our fieldwork from July 2006 through March 2007 in accordance with generally accepted government auditing standards. On March 30, 2007, we requested comments on a draft of this report from the Secretary of Defense or his designee. Written comments from the Deputy Under Secretary of Defense (Program Integration) received on May 1, 2007, are summarized and evaluated in the Agency Comments and Our Evaluation section of this report and are reprinted in appendix III. Table 3 summarizes the status of the Army’s effort to implement the 22 recommendations we made in our February 2005 report entitled Military Pay: Gaps in Pay and Benefits Create Financial Hardships for Injured Army National Guard and Reserve Soldiers (GAO-05-125).

In February 2005, GAO reported that weaknesses in the Army's Active Duty Medical Extension (ADME) process caused injured and ill Army National Guard and Reserve (reserve component) soldiers to experience gaps in pay and benefits.
During the course of GAO's previous work, the Army implemented the Medical Retention Processing (MRP) program in May 2004 and the Community-Based Health Care Initiative (CBHCI) in March 2004. CBHCI allows reserve component soldiers on MRP orders to return home and receive medical care through a civilian health care provider. As directed by congressional mandate, GAO determined whether (1) MRP has resolved the pay issues previously identified with ADME and (2) the Army has the metrics it needs to determine whether it is effectively managing CBHCI program risks. GAO's scope did not include the medical, facilities, or disability ratings issues recently reported by the media at Walter Reed Army Medical Center. The Army's MRP program has largely resolved the widespread delays in order processing that were associated with ADME. As a result, injured and ill reserve component soldiers retained on active duty through MRP have not experienced significant gaps in pay and benefits. The Army has addressed 17 of the 22 recommendations GAO made previously, which include developing comprehensive guidance for retaining injured and ill reserve component soldiers on active duty, providing a more effective means of tracking the location of soldiers in the MRP program, addressing problems related to inadequate administrative support for processing active duty retention orders, and developing performance measures to evaluate MRP. Of the five recommendations the Army has not fully implemented, two are related to providing adequate training to reserve component soldiers in the MRP program and Army personnel responsible for managing the program and three deal with improving the Army's order-writing, pay, personnel, and medical eligibility systems.
Although the Army has issued a soldiers' handbook for soldiers in the MRP program and developed a biannual training conference for Army personnel responsible for managing these soldiers, the Army lacks consistent, Army-wide training standards for injured reserve component soldiers in the MRP program and Army personnel responsible for managing the program. Because of an Army-wide system integration challenge that affects all soldiers, not just those in the MRP program, information is not always updated in the order-writing, pay, personnel, and medical eligibility systems as it should be. As a result, 7 of the 25 randomly selected soldiers GAO interviewed reported that their families' medical benefits were temporarily disrupted when they transitioned to MRP orders. The lack of integrated systems also caused overpayment problems when soldiers were released from active duty but still had time left on their MRP orders. Over a nearly 3-year period, GAO estimates that the Army overpaid these soldiers by at least $2.2 million. Although, according to the Army, soldiers participating in CBHCI are at greater risk of being retained on active duty longer than medically necessary, the Army currently lacks the data needed to determine whether it is effectively managing this risk. According to the Army's metrics, soldiers treated by civilian providers through CBHCI are, on average, retained on active duty 117 days longer than soldiers treated at military treatment facilities (MTF). According to the Army, the metrics for soldiers treated at MTFs are skewed lower because of the Army's CBHCI selection criteria-- which exclude soldiers whose injuries or illnesses are expected to be treated within 60 days. However, until the Army obtains more comparable information for the patient populations treated through CBHCI and MTFs, the Army cannot reliably determine whether it is effectively managing the program's risk.
Interstate compacts are legal agreements between states that allow them to act collectively to address issues that transcend state borders. Interstate compacts that may affect the balance of power between states and encroach upon or impair the supremacy of the United States must have congressional consent. Since the late 1940s, states have entered into interstate compacts to facilitate the sharing of resources across state lines in response to disasters. In passing the Federal Civil Defense Act of 1950, Congress encouraged states to enter into interstate agreements that provided a legal framework for mutual defense aid and disaster assistance. By the early 1950s, virtually all states and other jurisdictions had entered into defense aid and disaster compacts. However, after years of minimal financing and public support, the Federal Civil Defense Act did not play a significant role in facilitating disaster response. After Hurricane Andrew devastated southern Florida in 1992, Congress enacted many of the repealed provisions of the Federal Civil Defense Act into the Robert T. Stafford Disaster Relief and Emergency Assistance Act in 1994. Responding to similar concerns raised following Hurricane Andrew, the Southern Governors’ Association created the Southern Regional Emergency Management Assistance Compact to enable member states to provide mutual aid in managing any emergency or disaster that had been designated as such by the governor of the impacted state. It also provided for mutual emergency-related activities, testing, and training. In 1995, the Southern Governors’ Association opened membership to all U.S. states and territories, revising the terms of the agreement and adopting the new name, the Emergency Management Assistance Compact (EMAC). Congress consented to the compact in 1996. EMAC is a mutual aid agreement among member states and is not a government agency.
Overall governance is provided by the EMAC Committee, whose chair is selected annually by the President of the National Emergency Management Association (NEMA). Day-to-day work of the EMAC Committee is carried out by an EMAC Executive Task Force whose members are elected by the EMAC membership. The Chair of the EMAC Committee works with the Executive Task Force to develop policies and issue guidance. NEMA provides administrative oversight for the EMAC network. Since 2003, NEMA has assigned one person to serve as the EMAC Coordinator—the only paid employee dedicated full time to EMAC—as well as a part-time consultant who serves in the position of EMAC Senior Advisor. Both of these positions have been funded through a cooperative agreement between FEMA and NEMA to provide administrative and management support for EMAC. EMAC operating protocols outline one process for member states to request and provide assistance, whether these resources are civilian or National Guard. The process describes how to request, provide, receive, and reimburse assistance from other member states in response to a disaster. Before resources can be deployed under EMAC, the governor of an impacted state must first declare an emergency. Representatives from the impacted state then contact EMAC leadership to inform them that interstate assistance may be needed. If desired, the impacted—or requesting—state can ask the EMAC leadership to send a team of emergency management personnel to the state’s emergency operations center to assist with subsequent resource requests under EMAC. The requesting state can then request additional resources through the EMAC network from other member states. These states—often referred to as assisting states—work with the requesting state to identify resources required and other details. Once both the requesting and assisting states approve the final details, resources are deployed to the area of need. 
Once the missions have been completed and resources have returned home, the assisting states prepare formal requests for reimbursement, which are then sent to, and processed by, the requesting state. Figure 1 provides a summary of this process. In cases when a disaster strikes multiple states, FEMA has a standing agreement with NEMA to request a team of emergency managers to deploy to its national or regional coordination centers to help coordinate EMAC network and federal activities, as appropriate. Although EMAC is an agreement between states, catastrophic disasters can overwhelm the resources of an impacted state, requiring it to seek assistance from the federal government. In the case of a presidentially declared disaster, impacted states can work with FEMA to seek federal financial assistance to cover costs associated with emergency response efforts that may include eligible missions conducted under EMAC. In such cases, the impacted state prepares project worksheets—a form used to collect and document information on the scope and estimated cost for public assistance projects—and submits them to FEMA for review. Once approved, FEMA will obligate funds for the project to the impacted state, which in turn reimburses the assisting state directly. As of June 2007, Mississippi and Louisiana are in the process of seeking financial assistance from FEMA to cover approximately $200 million for missions conducted under EMAC. The National Guard Bureau’s (NGB) mission is to participate with the Army and Air Force staffs in the formulation, development, and coordination of all programs, policies, concepts, and plans for the National Guard. NGB has visibility of all National Guard assets and advises the states on force availability to support all requirements. NGB serves as a coordinator between the Secretaries of the Army and Air Force and state National Guard assets. This is achieved through coordinating with state governors and adjutant generals. 
NGB also monitors and assists the states in the organization, maintenance, and operation of their National Guard units. Another aspect of NGB’s coordination is working with other DOD agencies as it carries out responsibilities to address domestic emergencies assigned in accordance with the National Response Plan (NRP). The purpose of the NRP is to establish a comprehensive, national, all-hazards approach to domestic incident management across a spectrum of activities, including prevention, preparedness, response, and recovery. In addition, it contains a catastrophic incident annex that establishes the strategy for implementing and coordinating an accelerated proactive national response to a catastrophic incident, including strategies to rapidly provide key resources to augment state, local, and tribal response efforts during a catastrophic event. The NRP also contains a catastrophic incident supplement with a detailed execution schedule that lists steps that agencies should take at specific times, ranging from within 10 minutes of the start of an incident to within 96 hours after the incident occurs. The purpose of this supplement is to accelerate the delivery of federal and federally accessible resources and capabilities in support of a response to a no-notice or short-notice catastrophic incident. These are incidents in which the response capabilities and resources of the local jurisdiction (including mutual aid from surrounding jurisdictions) will be profoundly insufficient and quickly overwhelmed. Since the inception of the EMAC in 1995, both the number of members and the volume and types of resources requested have grown considerably. States activated EMAC in response to a variety of emergencies, including hurricanes; floods; wildfires; and the September 11, 2001 terrorist attacks. In recent years, the volume and types of resources deployed under EMAC have also increased.
Resources deployed under EMAC represented a substantial portion of overall out-of-state assistance deployed in response to the 2005 Gulf Coast hurricanes.

EMAC membership has grown from a handful of members in 1995 to 52 today. EMAC grew out of the Southern Regional Emergency Management Assistance Compact, which was created in August 1993 by the Southern Governors' Association and the Virginia Department of Emergency Services following Hurricane Andrew. When EMAC was formed in 1995, membership consisted of 4 states: Louisiana, Mississippi, Tennessee, and Virginia. Since that time, as figure 2 shows, EMAC membership has grown to 49 states, the U.S. Virgin Islands, Puerto Rico, and the District of Columbia. During this period, states have used EMAC in response to a variety of emergency events, including natural disasters, terrorist attacks, and other disasters and emergencies. For example, states activated the EMAC process in response to disasters such as the 2005 Gulf Coast hurricanes; tornadoes in Kansas and Kentucky; floods in West Virginia and New Hampshire; wildfires in Texas and Nebraska; the September 11, 2001, terrorist attacks; and a variety of other emergencies, such as the 2003 Rhode Island nightclub fire and the Space Shuttle Columbia disaster.

In 2004 and 2005, the number and types of deployments under EMAC exceeded those of previous years. Although deployment data for 1995 through 2004 are incomplete, EMAC leadership reported that deployments were higher in 2004 than in previous years. Data compiled by the EMAC network demonstrate that total civilian and National Guard deployments in response to the 2005 Gulf Coast hurricanes were more than 25 times the number of deployments for the 2004 Florida hurricanes. Figure 3 shows EMAC deployment data for some significant disasters. States have made larger requests for assistance under EMAC, and they have requested a wider range of resources.
According to EMAC leadership, prior to 2004, states primarily requested emergency management personnel to support their state emergency operations centers. For example, of the estimated 40,000 people who responded to the September 11, 2001, terrorist attacks in New York, officials there requested only 26 emergency management personnel under EMAC to supplement state emergency management efforts. In 2004, Florida requested a wider variety of resources from other states under EMAC than had been requested in previous disasters. It requested first response personnel, health professionals, logistics support, and emergency management support for county emergency operations centers. In 2005, Louisiana, Mississippi, Texas, Alabama, and Florida requested an even greater variety of resources under EMAC, including 46,503 National Guard personnel; 6,882 law enforcement responders; 2,825 fire and hazardous materials responders; and 9,719 other responders, many of whom were local government assets deployed directly to the impacted areas. Figure 4 shows the variety of civilian personnel deployed under EMAC for selected significant disasters.

Resources deployed under EMAC in response to the 2005 Gulf Coast hurricanes constituted a substantial portion of overall out-of-state response efforts. Following Hurricane Katrina in 2005, Louisiana and Mississippi both relied heavily on support from other states to supplement their own emergency response efforts. Although the exact number of personnel deployed to Louisiana and Mississippi in response to Hurricane Katrina is not known, available data on the first 2 weeks of the response clearly indicate that personnel deployed under EMAC represented a significantly larger share of out-of-state personnel than any other contributor, including states that are not members of EMAC, the active component of the military, FEMA, the U.S. Coast Guard, and federal law enforcement.
Figure 5 shows the distribution of out-of-state personnel deployed to impacted states following Hurricane Katrina.

EMAC, along with its accompanying policies, procedures, and practices, provides for successful collaboration, enabling its members to request resources and provide timely assistance to states in need. However, opportunities exist to enhance and sustain collaborative efforts within the EMAC network and between the network and federal agencies and nongovernmental organizations. Our previous work identified a number of steps that can improve collaboration, including (1) clearly articulating roles and responsibilities; (2) establishing clear, consistent, and compatible standards; and (3) identifying opportunities to leverage and share resources. While the compact itself and the policies and procedures adopted by the EMAC network have clarified roles and responsibilities for some key operations, coordination can be improved among EMAC members to reduce confusion and delays when deploying resources. EMAC members have also adopted protocols, standards, and systems that work well for smaller-scale deployments, though gaps still exist with regard to communicating resource needs, tracking resource requests, and facilitating reimbursement following catastrophic disasters. Finally, some members have developed practices that may provide models or insights to other members to enhance their ability to leverage resources under EMAC.

As we have previously found, to overcome differences in organizational cultures and established ways of doing business, collaborating organizations must have a clear and compelling rationale to work together. This compelling rationale can be imposed through legislation or other directives or can come from the organizations' own perceptions of the benefits they can obtain from working together.
Collaborating organizations must also work across organizational lines to define and articulate a common outcome consistent with their respective goals. EMAC provides a framework that helps its members overcome differences in missions, organizational cultures, and established ways of doing business in order to achieve a common outcome—streamlining and expediting the delivery of resources among members during emergencies. Each member must enact legislation identical to the EMAC legislation passed by Congress in 1996, ensuring that member states' goals are aligned with the goals outlined in the compact. The EMAC language sets the foundation for members to provide mutual assistance in a disaster or emergency, regardless of whether it is a natural disaster or a man-made disaster, such as a technological hazard, civil emergency, community disorder, or enemy attack. In addition, the compact language outlines responsibilities for the members to formulate procedural plans and programs for interstate cooperation through EMAC; affords personnel from assisting states the same duties, rights, and privileges afforded to similar personnel within the requesting state (except for the power of arrest); accepts licenses, certificates, or other permits for the skills requested; provides liability protection to responders from assisting states as agents of the requesting state for tort liability and immunity purposes; requires that assisting states provide workers' compensation for resources deployed from their states; and calls for the reimbursement of services rendered through EMAC. By streamlining the legal and other administrative requirements associated with sharing resources across state lines, EMAC enables states to provide emergency assistance in times of disaster more quickly than if they worked outside of EMAC to seek and provide assistance.
For example, although New York was not a member of EMAC prior to the September 11, 2001, terrorist attacks, it joined shortly thereafter. New York officials stated that requesting assistance from EMAC members expedited the arrival of supplemental assistance.

While the compact and its accompanying protocols establish roles and responsibilities that have worked well for smaller-scale deployments, they have not kept pace with the growing use of EMAC, sometimes resulting in delays and limiting EMAC's overall effectiveness. Our previous work has shown that defining roles and responsibilities among collaborating organizations both enhances and sustains collaboration. In doing so, organizations clarify who will do what, thereby better organizing both joint and individual efforts and facilitating decision making. In 2004 and 2005, the lack of clearly defined roles and responsibilities with regard to receiving and integrating resources deployed under EMAC resulted in delays and confusion. During this same period, the EMAC network and NGB experienced challenges in coordinating effectively, though they have since made improvements.

Through its protocols, the EMAC network delineates roles and responsibilities for requesting states to receive and integrate emergency management personnel deployed under EMAC into states' emergency operations centers. For example, the EMAC Operations Manual recommends that requesting states provide workstations, equipment, and technology for emergency managers deployed to their states' emergency operations centers and that these personnel be integrated into the states' emergency operations centers' organizational charts. However, the roles and responsibilities of member states have not kept pace with the changing use of EMAC.
While roles and responsibilities do exist for member states to receive and integrate emergency management personnel into state emergency operations centers, similar guidelines do not define how requesting states are to receive and integrate first responders deployed under EMAC into impacted areas. This gap has led to confusion and delays, and it is especially significant because most of the resources deployed under EMAC in 2004 and 2005 went to areas outside state emergency operations centers. It also affected the overall ability of resources deployed under EMAC to provide the necessary assistance in response to the 2004 Florida hurricanes and the 2005 Gulf Coast hurricanes. During the response to the 2005 Gulf Coast hurricanes, state officials managing response efforts on the ground were sometimes unaware of general EMAC policies and unprepared to receive or integrate resources deployed under EMAC into impacted areas. For example, although resources deployed under EMAC do not require additional certification to practice their respective professions in the impacted state, confusion arose when an emergency medical response team deployed to Mississippi because state health officials required the medical team to complete supplemental medical licensure applications. In addition, Florida health officials told us that they were initially not prepared to receive or integrate resources deployed under EMAC in response to the 2004 Florida hurricanes, causing some confusion and delaying deployments. Learning from their experiences in 2004, Florida officials stated that they resolved these shortcomings and had policies and procedures in place to receive and integrate out-of-state resources when Hurricane Katrina was approaching Florida in 2005.
Local officials we spoke with who were responsible for receiving and integrating resources deployed under EMAC—and many state and local responders who interacted with these officials—stated that they had limited or no knowledge of what EMAC was or how it functioned, were not aware that resources had been requested or deployed to assist them, and did not have plans for how to employ these resources once they arrived. For example, local officials from counties in southern Mississippi told us they were unaware that emergency response teams from Florida or New York had been deployed and were not sure how to employ their assistance. As a result, rather than providing immediate assistance at full capacity, the emergency response teams spent critical time briefing local officials on basic EMAC processes and emergency procedures. In other circumstances, resources deployed to impacted areas experienced challenges in locating points of contact and integrating into local command structures. For example, a South Carolina National Guard unit deployed under EMAC told us that it "wasted valuable time" waiting for mission assignments from local authorities following Hurricane Katrina.

EMAC leadership has taken steps in the past year to address the lack of clarity regarding the roles and responsibilities of states receiving and integrating assistance. These include updating the EMAC Operations Manual to include specific language on the need for members to establish procedures for requesting and receiving assistance. EMAC leadership has also taken steps to address EMAC knowledge gaps among state and local officials by creating an ad hoc task force to evaluate and improve training materials available to member states, such as a brochure to help personnel deployed under EMAC understand basic EMAC protocols.
However, the EMAC network has not developed guidance for receiving and integrating resources into impacted areas that is as clear as its guidance for receiving and integrating emergency managers into state emergency operations centers.

In 2005, the EMAC network and NGB experienced coordination challenges. Although both the EMAC network and NGB facilitate the sharing of resources across state lines, they had limited visibility into each other's systems for initiating and fulfilling requests. For example, emergency management officials responsible for coordinating requests for assistance under EMAC in the first 3 weeks after Hurricane Katrina made landfall stated that they were frequently unaware of National Guard deployments under EMAC until after the resources had already returned to their home states. In addition, NGB officials responsible for coordinating deployments of National Guard resources stated that they were unaware of requests for assistance made through EMAC. Learning from these challenges, the EMAC network and NGB have begun to work together to develop a better understanding of their mutual roles and responsibilities, as well as how they can collaborate to achieve an outcome that benefits their respective missions. For example, to improve coordination between the EMAC network and key partners such as NGB, EMAC leadership created the EMAC Advisory Group in 2006. NGB, along with other advisory group members, has recently been granted access to view reports on requests and deployments under EMAC during a disaster.

We previously reported that collaborating organizations need to address the compatibility of standards, policies, procedures, and data systems in their efforts to facilitate working across boundaries and prevent misunderstanding. While the EMAC network has developed protocols, standards, and systems that have generally worked well for smaller-scale deployments, gaps emerged with the rapid growth in the number and types of resources deployed under EMAC.
In addition, gaps in federal guidance and protocols resulted in administrative burdens and reimbursement delays. We identified challenges in five areas: (1) gaps in EMAC protocols with regard to communicating resource needs sometimes yielded deployment delays and confusion among requesting state officials and resource providers; (2) the lack of a comprehensive system to support the tracking of resource requests from initial offers of assistance through mission completion caused delays, duplication of effort, and frustration in 2005; (3) existing reimbursement standards are not designed to facilitate timely reimbursement following catastrophic disasters; (4) the lack of federal guidance on obtaining advance funding delayed some state-to-state reimbursements under EMAC; and (5) deployment of National Guard troops under two different authorities resulted in reimbursement delays and additional administrative burdens.

To facilitate collaboration in times of disaster, the EMAC network has established standard processes and systems for how its members request resources through EMAC. These processes enable members to solicit assistance through standardized e-mail requests that are broadcast to everyone in the network, or directly from a specific member either in writing or verbally. When an assisting state responds to a request for assistance, the requesting and assisting states communicate back and forth to negotiate mission details: (1) officials from the requesting state approve, sign, and fax the request to an assisting state; (2) officials from the assisting state provide details on the assistance they intend to provide, sign the request, and fax it back to the requesting state; and (3) once the agreement is finalized, requesting state officials approve, sign, and fax the finalized request for assistance back to the assisting state.
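The three-step negotiation described above can be sketched as a simple status progression. This is a hypothetical illustration only: the step names and the `ResourceRequest` structure are ours and do not come from any actual EMAC system; the point is that a request is not actionable until all three sign-offs are complete.

```python
# Hypothetical sketch of the EMAC request negotiation described in the text.
# The step names mirror the three steps above; none of these identifiers
# come from an actual EMAC system.

from dataclasses import dataclass, field

STEPS = [
    "requesting_state_signs",      # (1) requesting state approves, signs, faxes
    "assisting_state_signs",       # (2) assisting state adds details, signs, faxes back
    "requesting_state_finalizes",  # (3) requesting state approves the final agreement
]

@dataclass
class ResourceRequest:
    resource: str
    completed_steps: list = field(default_factory=list)

    def advance(self) -> str:
        """Complete the next negotiation step and return its name."""
        step = STEPS[len(self.completed_steps)]
        self.completed_steps.append(step)
        return step

    @property
    def finalized(self) -> bool:
        # The mission is agreed only after all three sign-offs
        return len(self.completed_steps) == len(STEPS)

req = ResourceRequest("canine search and rescue team")
while not req.finalized:
    req.advance()
print(req.completed_steps)
```

A model like this also makes visible why omitted mission details are costly: each clarification restarts part of the back-and-forth, adding days before a request can be finalized.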
Although the EMAC network has developed these basic processes, gaps in some areas led to confusion and delays among member states in effectively communicating resource needs when responding to the 2005 Gulf Coast hurricanes. For example, emergency managers deployed under EMAC to Louisiana told us that they received repeated requests simply for "search and rescue" teams and that these requests did not initially contain sufficient detail regarding the type of skills and equipment needed to carry out the particular operation. Search and rescue missions can vary significantly—one type of mission might require an aerial search and rescue team, while another might require a canine search and rescue team. Therefore, identifying and then clearly communicating the specific skills and equipment required is critical. According to these officials, requests that initially omitted critical mission details had to be clarified, causing resource deployment delays of 3 or more days as requesting and assisting state officials went back and forth to clarify these details. A second shortcoming in how requests were communicated during the 2005 Gulf Coast hurricanes was that requesting states did not provide sufficient details regarding conditions at the locations to which resources were deployed. This led to teams arriving in the area of operations without necessary support for responders. For example, the first firefighters deployed to New Orleans under EMAC were given incorrect information regarding the availability of food supplies and housing. These firefighters were told they would receive transportation, food, and lodging when they arrived. However, once they arrived at the initial staging area, they quickly realized that they would not receive any of these resources. As a result, they were delayed at the initial staging area until they located the necessary supplies on their own.
Responding to concerns raised regarding the clarity of resource requests, the EMAC network has taken several steps to improve its processes and systems. For example, the EMAC network has adopted changes to the EMAC Operations Manual that require requesting states to include additional details on the type of resources requested, specifying the particular skills, abilities, or equipment needed. EMAC leadership updated the basic form used to request assistance so that it now includes additional mission details, such as the severity of conditions within the area of operations. EMAC leadership is currently transitioning part of the process to an online format with templates, pull-down menus, and other tools to help further specify mission details and improve the consistency of language used in the request process. The new version of the form to request assistance more effectively captures personnel deployment considerations (e.g., recommended immunizations), but it does not capture equipment considerations (e.g., fuel supplies, maintenance provisions, and ownership of equipment purchased for the activation).

The EMAC network does not have a comprehensive system in place to support the tracking of resources from initial offers of assistance through mission completion. The data systems in place to track resource requests and deployments when Hurricane Katrina made landfall in 2005 did not provide efficient tracking of resources deployed under EMAC. In addition, requesting states maintained duplicate and ad hoc systems for tracking resource requests and deployments. For example, when responding to the 2005 Gulf Coast hurricanes, emergency management support personnel responsible for facilitating requests for assistance recorded the same mission-related information in two separate systems: an EMAC system that cataloged all resource requests and a state-specific spreadsheet that tracked resource requests solely for that individual state.
In 2005, the EMAC network itself found that these separate systems were often not aligned with each other and required emergency managers to manually reenter data into the EMAC system. Personnel deployed to state emergency operations centers to facilitate requests under EMAC were not given immediate access to these data systems, causing some to create ad hoc systems for tracking requests. In addition, emergency managers deployed to state emergency operations centers to facilitate requests under EMAC in the first weeks of the Hurricane Katrina response told us that they maintained duplicative systems to track these requests, including Post-it notes and notepads. Emergency management officials responsible for coordinating assistance provided under EMAC with other efforts at the federal level thus did not have accurate information. In addition, there are no mechanisms in place to ensure that data electronically cataloged by the EMAC network are complete or accurate; of the 57 events for which the EMAC process has been activated since 1995, the EMAC network has incomplete information for 72 percent. As a result, aggregate data used to report on activities conducted by the EMAC network may not accurately reflect the number of deployments, personnel deployed, or estimated costs of resources deployed under EMAC.

Officials from assisting states also expressed frustration at not knowing whether their offers of assistance had been accepted or rejected. For example, after responding to a broadcast message to EMAC members for assistance in responding to Hurricane Katrina, emergency management officials from two states said that they sometimes had to wait several days before finding out whether their offers to assist were ultimately accepted. During this period, both states continued to ready their resources for deployment even though their offers ultimately were not selected by the requesting state.
Because these officials were not informed in a timely manner that they had not been selected to provide assistance on these missions, they incurred additional, nonreimbursable costs. As a result, these officials stated that they were less likely to mobilize resources in advance of a finalized agreement—resulting in additional time to deploy once an agreement was reached. In addition, some state officials stated they were less likely to deploy resources under EMAC in the future as a result of this lack of communication. Recognizing the need for a more coordinated data system, EMAC leadership has taken steps to link requests for assistance with its existing resource tracking system. EMAC leadership stated that by migrating part of the request process online, they hope to reduce steps and simplify the EMAC network's ability to capture initial requests electronically. However, progress remains to be made in developing an integrated system that incorporates EMAC mission details into the existing resource tracking system.

The EMAC network developed a process for establishing basic standards and procedures for how states request and make reimbursements. While these standards and procedures worked sufficiently for smaller-scale deployments, shortcomings emerged when they were applied to the larger-scale deployments that follow catastrophic disasters. The resulting reimbursement delays caused some assisting states and localities to forgo or delay expenditures for equipment and other critical purchases. In some cases, these delays have caused states and localities to reconsider whether they would provide assistance through EMAC in the future. Following the 2005 Gulf Coast hurricanes, the EMAC network has taken steps to address some of the concerns associated with the reimbursement process and standards.
To facilitate reimbursement between states following a disaster, the EMAC network developed a process for establishing basic standards and procedures for how states request and award reimbursements. While EMAC leadership and state emergency managers stated that this process has worked reasonably well for smaller-scale deployments, EMAC members encountered significant challenges with it during the large-scale deployments in response to the Florida hurricanes of 2004 and the Gulf Coast hurricanes of 2005. For example, although EMAC standards in effect during these events required that disbursement of funds be made within 30 days after a mission ended, it took considerably longer to actually do so. Specifically, assisting states were not completely reimbursed until 10 months after the conclusion of their missions following the 2004 hurricanes, and according to the latest data provided to us by Louisiana and Mississippi, 57 percent, or about $119 million, remains outstanding for missions completed in those states following the 2005 Gulf Coast hurricanes.

One cause of these delays is the lack of awareness among EMAC members regarding recordkeeping requirements and how to process reimbursement packages. For example, while EMAC protocols state that the requesting state is obligated to reimburse assisting states for approved missions deployed under EMAC, assisting states must first file reimbursement packages with the requesting state documenting their expenses and providing supporting documentation. After the 2005 Gulf Coast hurricanes, the lack of awareness of this requirement on the part of several assisting states resulted in additional burdens for requesting states. In July 2006—11 months after Hurricane Katrina—Louisiana officials sent letters to 37 assisting states that had not yet filed reimbursement packages with the state.
In addition, assisting states were not always fully aware of the documentation required to support deployment activities. For example, officials from one state told us that they were not aware that under EMAC protocols they were expected to complete a predeployment inventory of all equipment and personnel taken into the impacted area. As a result, these officials encountered reimbursement challenges because the state could not document equipment lost during its response to the 2005 Gulf Coast hurricanes. Reimbursement was further complicated by the lack of a consistent understanding of what is reasonably reimbursable according to the criteria outlined in the EMAC Operations Manual. While EMAC protocols detailing reimbursement guidelines did identify a number of broad eligible cost categories—personnel costs, travel costs, equipment costs, contractual costs, commodities, and other expenses—they did not provide any standards for how states were to determine what types of costs under these broad categories were considered reasonable.

The delays in reimbursing assisting state and local agencies in turn forced those agencies to delay or cancel planned expenditures as they covered budgetary shortfalls. For example, officials with the Virginia State Police told us that delays in receiving reimbursement for $1.8 million in assistance they provided in response to the 2005 Gulf Coast hurricanes forced them to delay or cancel the maintenance and purchase of critical equipment and supplies, such as ammunition, uniforms, and office supplies. Additionally, state and local officials told us that these reimbursement delays have caused them to reconsider the level of assistance they would be willing to provide through EMAC in the future. Following the 2005 hurricane season, the EMAC network has taken steps to address some of these reimbursement concerns.
For example, the EMAC network recently updated the EMAC Operations Manual to incorporate additional specificity on the types of costs eligible and not eligible for reimbursement. The manual also contains new flexibilities, including the elimination of the 30-day reimbursement requirement and the option for an assisting state to delay paying actual service providers, such as state agencies and local governments, until it first receives funds to cover these expenses from the requesting state. Although EMAC is an agreement between states, the involvement of the federal government following presidentially declared disasters can affect state-to-state reimbursements. Under EMAC, requesting states are obligated to reimburse assisting states for missions performed under the compact. However, catastrophic disasters can overwhelm the resources of an impacted state, requiring it to seek financial assistance. While the EMAC reimbursement process is intended to be independent of any efforts by a requesting state to seek federal assistance, the federal government, through FEMA, can offer funding for eligible response efforts following a presidentially declared disaster. In such circumstances, a requesting state works with FEMA to obtain financial assistance for eligible missions. Once it receives this assistance, a requesting state can then reimburse assisting states for missions performed under EMAC. Shortly after a presidentially declared disaster occurs, impacted states can work with FEMA to seek financial assistance while response and recovery efforts are under way to help cover anticipated costs. In 2004, in an effort to expedite the reimbursement of localities that responded to the 2004 Florida hurricanes, FEMA developed a process for impacted states to request and receive advance funding based on disaster estimates included in an expedited project worksheet. 
Unlike standard project worksheets, expedited project worksheets require less specificity as to how funding should be spent, so long as they are reconciled against actual, authorized spending at a later point. These funds can be used to reimburse assisting states for assistance provided under EMAC or to cover other anticipated costs. However, according to a senior FEMA official for the Public Assistance Program, guidance on how to seek expedited project worksheets does not exist, and in 2005, neither Louisiana nor Mississippi officials were aware that such advance payments existed. According to Louisiana officials, FEMA officials suggested that they obtain advance funding of $70 million to alleviate response and recovery costs, including assistance provided under EMAC. These officials added that this advance funding enabled them to reimburse assisting states almost $25 million, or slightly more than half of all reimbursements Louisiana provided to assisting states for missions conducted under EMAC in response to Hurricane Katrina. In contrast, Mississippi officials stated that they were not aware that expedited project worksheets could be used to cover eligible EMAC-related costs. Accordingly, they did not pursue the same opportunity, and as a result, Mississippi has been able to pay only 38 percent of the $113 million owed for missions provided under EMAC.

During Hurricane Katrina, National Guard troops provided assistance in State Active Duty status as well as in Title 32 status, and the EMAC process was used for the deployment of National Guard resources in both cases. When units operate in State Active Duty status, they are under the command and control of the assisting state's governor, and missions are funded by the state. When units are in Title 32 status, they remain under the command and control of the governor and continue to deploy under EMAC, but their missions are federally funded.
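The dollar figures cited here and in the earlier reimbursement discussion can be cross-checked with simple arithmetic. The sketch below uses only the percentages and amounts stated in the text; the "implied" values are rough back-of-the-envelope derivations, not source data, and rounding accounts for small discrepancies against the approximately $200 million figure cited earlier.

```python
# Cross-check of the reimbursement figures reported in the text.
# All dollar amounts are in millions; the "implied" total is a derivation,
# not a figure taken from the source.

ms_total = 113.0          # Mississippi's reported EMAC mission costs
ms_paid_share = 0.38      # share Mississippi reported having paid

ms_paid = ms_total * ms_paid_share          # ~ $43 million paid
ms_outstanding = ms_total - ms_paid         # ~ $70 million outstanding

combined_outstanding = 119.0  # reported outstanding for LA and MS together
combined_share = 0.57         # reported share still outstanding

# The implied combined mission cost should roughly match the
# "approximately $200 million" cited for both states.
implied_combined_total = combined_outstanding / combined_share

print(f"Mississippi paid: ~${ms_paid:.0f}M, outstanding: ~${ms_outstanding:.0f}M")
print(f"Implied combined LA+MS mission costs: ~${implied_combined_total:.0f}M")
```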
Under EMAC, the governor of the assisting state delegates operational control to the emergency services authorities of the state receiving assistance. If deemed appropriate, the Secretary of Defense can approve federal funding of National Guard troops under Title 32. The first National Guard units that responded after Hurricane Katrina deployed under State Active Duty status. Then, on September 7, 2005—9 days after Hurricane Katrina made landfall in Louisiana—the Deputy Secretary of Defense authorized the use of DOD funding for National Guard troops through Title 32, retroactive to August 29, 2005; all but two states elected to use this funding. While both requesting and assisting states were faced with administrative burdens and costs as they transitioned from State Active Duty status to Title 32 status, National Guard units deployed in State Active Duty status had more administrative requirements than those deployed in Title 32 status. Units that remained in State Active Duty status were required by EMAC procedures and their state emergency operations plans or other guidance to maintain cost-supporting documentation throughout their deployment, which was later used for reimbursement purposes. Following the disaster, states that deployed National Guard units in State Active Duty status submitted this documentation to the requesting state to obtain reimbursement, negotiating the final amount of the reimbursement with the requesting state. The requesting state, in turn, sought federal reimbursement through the Public Assistance Program at FEMA. In contrast, states that deployed their units under EMAC in Title 32 status were not required to seek reimbursement from the requesting state directly, but were reimbursed by DOD. In Title 32 status, expenses are directly tracked against a funding-site code assigned by DOD, which enables direct payroll payment. Also, a record of equipment and maintenance costs is kept for reimbursement through charges against the funding-site code. 
Use of Title 32 status in response to Hurricane Katrina reduced the administrative burdens on both the requesting and the assisting states, eliminated the need for requesting states to fund National Guard assistance from outside their states, and reduced the time assisting states had to wait to be reimbursed. Iowa’s and South Carolina’s experiences during the 2005 Gulf Coast hurricanes illustrate the difference between keeping a responding state’s National Guard units in State Active Duty status and switching to Title 32 status. For those units deployed in State Active Duty status, Iowa was required to follow standard EMAC processes for seeking reimbursement as opposed to being directly reimbursed for missions performed in Title 32 status. It took until October 2006 for Iowa to be reimbursed for a water purification unit that Iowa’s National Guard sent to Mississippi while in State Active Duty status in September 2005—9 months from the time the mission was completed. South Carolina National Guard troops performed a similar mission in Title 32 status, and the state was reimbursed within a month. In addition, switching from State Active Duty status to Title 32 status has associated administrative costs. For example, one state recorded an estimated $87,000 in administrative costs for National Guard personnel and material expenses incurred in making such a switch. Some of these costs were derived from rescinding State Active Duty orders; backing out of state payroll systems; performing audits to ensure that all data were adjusted appropriately; correcting faults discovered; compiling, reviewing, and transmitting troop personnel information for state processing; publishing Title 32 status orders; and estimating payroll expenses and equipment use costs. 
Following Hurricane Katrina, many reviews of lessons learned focused on the failure of the federal government to implement the Catastrophic Incident Annex and Supplement of the NRP, which could have rapidly provided critical resources to assist and augment state and local response efforts. However, even if the Catastrophic Incident Supplement had been implemented, the decision to authorize the use of Title 32 might not have come any sooner, because the supplement’s execution schedule does not specify a time at which DOD should consider whether it is appropriate to authorize the use of Title 32 funding for National Guard response efforts during an incident. Some states have developed practices—including legislation and planning efforts—that may provide models or insights to other members, enhancing their ability to leverage resources under EMAC and providing benefits that would not otherwise be available. We have previously reported that organizations that effectively collaborate look for opportunities to address resource needs by leveraging each other’s resources, obtaining benefits that would not be available if they were working separately. To this end, states have found ways to leverage resources, including: (1) substantially broadening the resource pool from which they can draw through intrastate mutual aid and other similar agreements and (2) proactively considering how resources deployed under EMAC might be able to fill in-state resource gaps. At the same time, states have identified other scenarios where they will not likely be able to turn to the EMAC network for assistance, such as an influenza pandemic. In addition to seeking and providing state-level resources deployed under EMAC, such as the National Guard, states are able to supplement these state-level resources with local and county resources through intrastate mutual aid and similar agreements. 
Intrastate mutual aid agreements create a system for mutual aid among a state’s participating counties, parishes, or other political subdivisions in the prevention of, response to, and recovery from any disaster that results in a formal state of emergency. Firefighting, police, and medical personnel and equipment are examples of emergency response assets that can be leveraged within a state using such agreements. Through intrastate mutual aid, the types and volume of resources available under EMAC are substantially greater than those resources available solely at the state level. For example, in response to Hurricane Katrina in 2005, Illinois, New York, and Texas were able to deploy 1,663 local fire and hazardous materials response personnel and supporting equipment to Louisiana under EMAC—something that would likely not have been possible without these types of mutual aid. Thirty-eight states have intrastate mutual aid or similar agreements in place that enable them to leverage local resources under EMAC. However, only 16 EMAC members have instituted intrastate mutual aid agreements that can also leverage private sector resources, and 22 can deploy volunteer resources. For example, Indiana’s intrastate mutual aid agreement includes a provision to call on state and private sector health professionals throughout the state. When this provision is applied through the Indiana Governor’s Executive Order, as it was for the Hurricane Katrina deployment to Mississippi, the private sector personnel become temporary employees of the state’s Department of Homeland Security. In this status, they are eligible to be deployed as a state asset under EMAC with all rights and licensing recognition afforded permanent state employees under that compact. Figure 6 shows which states are able to deploy private sector resources, volunteer resources, or both. 
Some states have begun to plan for how interstate resources deployed under EMAC can supplement in-state resources, thereby improving their ability to respond to a disaster more quickly and effectively. For example, the Florida National Guard has a standing Memorandum of Understanding with North Carolina for the use of C-130 aircraft for medical evacuation of patients from the Florida Keys if required during a disaster. By having this agreement in place, Florida is able to bypass the need to solicit assistance across the EMAC network and reduce the time it would otherwise take to negotiate mission details. Other states have also developed prescripted EMAC missions to fill in-state resource gaps. Louisiana, learning from its experiences during the 2005 Gulf Coast hurricanes, has been working with neighboring states to identify resources that can fill gaps identified through in-state planning efforts. For example, according to Louisiana National Guard officials, they have developed agreements to request security personnel from Arkansas and commodity distribution support from Oklahoma. These agreements include such details as: (1) mission description, (2) number of personnel required, (3) approximate length of deployment, (4) arrival location, (5) support/equipment requirements, (6) self-sustaining period, (7) lodging arrangements, and (8) on-site point of contact information. In addition, as states are more likely to turn to EMAC to fill in-state resource gaps caused by competing deployments related to national missions, such as missions in Iraq and Afghanistan, NGB is beginning to encourage the prescripting of National Guard assets for emergency response missions across several states. 
For example, officials from the Florida and South Carolina National Guards told us that deployments in support of Operation Enduring Freedom, Operation Iraqi Freedom, and Operation Jump Start have reduced the availability of the in-state emergency assets required for responding to disasters. These officials, citing similar and pending deployments that may diminish their emergency response capacity, stated that they expect an increased reliance on interstate assistance provided under EMAC as a result of such deployments. While some states have identified situations where they will use EMAC to supplement in-state resources, others have identified scenarios where they were unlikely to do so. For example, EMAC leadership and emergency managers from several states we spoke with cited three reasons why they believe EMAC would not work well for an influenza pandemic. First, the officials stated that they would be reluctant to send personnel into a contaminated area. Second, the officials expressed their concern that resources would not be available should the pandemic spread to their respective states. Third, since EMAC member states are not required to provide assistance under EMAC and states cannot compel emergency response personnel to participate in any disaster response, these officials believe that emergency personnel would be reluctant to volunteer to respond to a pandemic event in another state. The EMAC network has begun to develop a basic administrative capacity to support its operations; however, improvements in how it plans, tracks, and reports on its performance, along with a consistent source of funding, would help the network achieve its mission. 
Although the EMAC network has adopted several good management practices, such as using a structured approach to learn from past deployments and developing a 5-year strategic plan, opportunities exist to further enhance these efforts by considering the experience of leading organizations in results-oriented performance measurement. In addition, the EMAC network and FEMA entered into a cooperative agreement that provided some federal funding to help build the EMAC network’s administrative capacity, but this agreement has recently expired. The EMAC network’s ability to provide the human capital, information technology, and other infrastructure required to support its collaborative efforts is likely to be affected by this loss of funds. The EMAC network has recently taken steps to develop a basic administrative capacity to support the sharing of resources between member states. Prior to 2003, the EMAC network’s administrative capacity—that is, its ability to provide adequate human capital, financial resources, and information technology to support its operations—was very limited and was confined to situations when the EMAC process was activated in response to a disaster. Under such conditions, emergency managers from states whose members were serving in EMAC senior leadership posts would temporarily take on the responsibility of facilitating requests for assistance between member states, processing paperwork, and answering questions. There was no dedicated administrative support available for routine activities, such as training, or to maintain regular coordination between the EMAC network and key federal players. In 2003, the EMAC network, working through NEMA, entered into a cooperative agreement with FEMA that enabled it to hire a full-time staff member to serve as EMAC Coordinator. 
Among other things, this individual was tasked with supporting the development of training for responders deploying under EMAC and creating an information technology system that would capture mission-level information for each disaster for which EMAC was activated. In addition, these funds were used to support other capacity-building activities, including holding after-action reviews to capture lessons learned as well as developing the EMAC network’s first strategic plan and operations manual. Over the last several years, EMAC leadership has taken steps to adopt a more systematic and rigorous approach to learning from its past experiences and planning for the future. These include using after-action reports following major events to identify ways in which the operation of the network might be improved and developing a strategic plan to help ensure that the activities and limited resources of the EMAC network are contributing to the achievement of its mission. We have previously reported that a structured, deliberate approach toward planning that includes long-term goals clearly linked to specific objectives and appropriate performance measures can provide a useful tool in helping organizations achieve their missions. In 2004 and 2005, the EMAC network conducted the first two of what it expects to be a series of after-action reviews to analyze its performance and identify areas where it performed well and issues needing improvement. As part of this process, the EMAC network contracted with an outside firm to conduct focus groups of operations and management personnel who facilitated requests for assistance on behalf of EMAC member states, as well as first responders who responded to those requests. Federal officials from FEMA and NGB also participated in these sessions. In addition, the outside firm analyzed data from EMAC databases that cataloged requests for assistance and validated its research with EMAC leadership. 
Information from these reports was widely disseminated among EMAC members and also provided the foundation for several objectives and tasks contained in the EMAC Strategic Plan. In 2005, EMAC developed its first 5-year strategic plan to more clearly identify goals and objectives that would assist it in achieving its mission of “facilitating the efficient and effective sharing of resources between member states during times of disaster or emergency.” The plan, which was updated in 2006, identifies four broad goals: (1) provide leadership on mutual aid issues, (2) sustain and enhance mutual aid capabilities, (3) promote mutual aid and strengthen relationships, and (4) align EMAC capabilities with nationwide preparedness and response priorities. Under each of these goals is a series of supporting objectives and still more specific tasks. This plan represents a significant and positive first step; however, there are several areas where future efforts could be improved, particularly in the way the plan measures and reports on performance. We have previously reported on several key characteristics of effective plans, including performance measures. Performance plans that include precise and measurable objectives for resolving mission-critical management problems are important to ensuring that organizations have the capacity to achieve results-oriented programmatic goals. Appropriate performance measures, along with accompanying targets, are important tools to enable internal and external stakeholders to effectively track the progress the organization is making toward achieving its goals and objectives. To this end, organizations may use a variety of performance measures—output, efficiency, customer service, quality, and outcome—each of which focuses on a different aspect of performance. The EMAC leadership stated that they have informal mechanisms that assess targets for achieving objectives, such as regular status meetings. 
However, they do not have a formal implementation or action plan that operationalizes the goals and objectives outlined in the strategic plan. In the absence of such a plan, EMAC’s current strategic plan contains no quantifiable measures or targets for its many goals and objectives. For example, EMAC’s strategic plan calls for the development of a comprehensive training program, listing seven key tasks, including evaluating training needs and developing training modules. However, the plan does not provide milestones for these activities or any performance measures for assessing whether these activities are in fact having their intended impact. The lack of clear and formal performance measures is compounded by the regular rotation of senior leadership within the EMAC network. As we have previously reported, sustained focus and direction from top management is a key component of effective management. Management control requires that organizations consider the effect upon their operations if key leadership is expected to leave and then establish criteria for a retention or mitigation strategy. Each year, the Chair of the Executive Task Force, responsible for the day-to-day management of EMAC, changes. EMAC has reduced some of the challenges that may be associated with such regular transitions by requiring that each new chair of the Executive Task Force first serve in an observational role for 1 year before becoming the chair and then serve as a mentor to the incoming chair following a 1-year term. However, because the leadership changes annually and there are no formal performance measures to determine whether goals and objectives are being achieved, it may be difficult to clearly assess whether the EMAC network is operating effectively and efficiently. To alleviate potential challenges that may arise from the annual rotation of its leadership, the EMAC network has recently begun transitioning more management responsibilities to NEMA. 
Since its inception, the EMAC network has received disparate funding to sustain its administrative capacity. From 2000 through 2002, the EMAC network received minimal financial support from its members through voluntary annual contributions of approximately $1,000 per member. In 2003, FEMA and the EMAC network entered into a 3-year, $2 million cooperative agreement to fund EMAC operations through May 31, 2007. This cooperative agreement enabled the EMAC network to develop an electronic system to collect, manage, and analyze data on the EMAC process; coordinate with FEMA on efforts to develop standard resource deployment packages; improve EMAC training initiatives; and hire one staff member to coordinate EMAC network operations. In October 2006, Congress for the first time specifically authorized FEMA to obligate up to $4 million in grants in fiscal year 2008 to support EMAC operations and coordination activities. In May 2007, Congress appropriated $2.5 million to FEMA for interstate mutual aid agreements, and according to FEMA officials, FEMA and EMAC leadership are in the process of finalizing a 3-year cooperative agreement to improve the use and awareness of resource typing among its members and to develop training programs to improve awareness of EMAC at the federal, state, and local levels. Present and past EMAC leadership stated that if the EMAC network does not receive additional funding to support operations, efforts to build and sustain the administrative capacity will have to be scaled back. Specifically, they stated that the EMAC network will lose day-to-day administrative support, there will be no resources to maintain the electronic systems that facilitate requests under EMAC or the EMAC Web site, training initiatives organized and led by EMAC leadership will be suspended, and coordination between the EMAC network and key federal players will be curtailed. EMAC’s success relies on effective collaboration among its members. 
The compact provides a broad and flexible framework that enables its members to overcome differences in missions, organizational cultures, and established ways of doing business in order to achieve a common mission. The EMAC network has built upon this framework, establishing roles and responsibilities and developing standards and systems in some key areas. At the same time, we found that opportunities exist for the EMAC network—as well as individual members—to make improvements in several areas, such as (1) developing member roles and responsibilities regarding how first responders are received and integrated into impacted areas; (2) continuing to develop electronic systems that enable the EMAC network to track resources, from request through mission completion; (3) continuing to improve understanding of reimbursement guidelines and standards among member states, especially following large-scale deployments; (4) promoting good practices across the EMAC network that improve members’ abilities to leverage resources; and (5) enhancing the EMAC network’s strategic and management planning efforts by considering more robust performance measures. In addition to helping states assist one another, EMAC has shown that it plays a critical role in our nation’s disaster response. However, there will be times when the EMAC network will be strained, and our nation’s next large-scale disaster will likely produce similar challenges to those encountered following the 2005 Gulf Coast hurricanes. With this in mind, opportunities exist at the federal level to help alleviate these challenges. One way to improve the nation’s overall capacity to respond to disasters is to build the EMAC network’s administrative capacity through mechanisms such as cooperative agreements, grants, or training initiatives. In doing so, planning and coordination within the EMAC network can be enhanced—key elements required for developing the capacities needed to respond to disasters. 
Valuable opportunities also exist to reflect on lessons learned to alleviate financial and administrative burdens placed on both the assisting and requesting states in response to catastrophes. Opportunities exist to reduce confusion among states with regard to seeking and obtaining advance funding through expedited project worksheets to facilitate timely reimbursements under EMAC. Additionally, early consideration of whether it would be appropriate to authorize the use of Title 32 status for National Guard units responding to catastrophic incidents could decrease the administrative and financial burdens states endure when switching between State Active Duty status and Title 32 status. We are making the following three recommendations: To further enhance the administrative capacity required to support the EMAC network, we recommend that the Secretary of Homeland Security direct the Administrator of FEMA to look for ways, such as cooperative agreements, grants, and training initiatives, to build this capacity. In situations involving catastrophic disasters that require significant assistance from several states and in turn increase the financial and administrative burdens on EMAC members: We recommend that the Secretary of Homeland Security develop guidance for impacted states to efficiently seek and obtain advance funding through expedited project worksheets to facilitate more timely reimbursement for those states providing assistance through EMAC to impacted areas. We recommend that the Secretaries of Defense and Homeland Security work together to amend the NRP’s Catastrophic Incident Supplement Execution Schedule to include early consideration of the use of Title 32 in situations where the Secretary of Defense deems it appropriate. We provided a draft of this report to the Secretary of Homeland Security and the Secretary of Defense for comment. 
The Director of FEMA’s Office of Policy and Program Analysis provided oral comments, concurring with all of our recommendations. FEMA also provided technical comments that were incorporated as appropriate. The Department of Defense did not concur with the recommendation that calls for an expedited consideration of whether to offer Title 32 following catastrophic disasters requiring significant assistance from several states. DOD’s response is reprinted in appendix II. In written comments on a draft of this report, the Assistant Secretary of Defense for Reserve Affairs did not concur with our recommendation that the Secretary of Defense work with the Secretary of Homeland Security to amend the National Response Plan’s Catastrophic Incident Supplement Execution Schedule to include early consideration of the use of Title 32 in situations where the Secretary of Defense deems it appropriate. The Department stated that use of National Guard forces in Title 32 status is an inherent DOD function and, in accordance with Homeland Security Presidential Directive-5, outside the purview of the Secretary of Homeland Security. We agree that the use of National Guard forces in Title 32 status is an inherent DOD function, and our recommendation recognizes the authority of the Secretary of Defense to determine when use of that authority is appropriate. While making clear that the directive in no way impairs or affects the authority of the Secretary of Defense over DOD, Homeland Security Presidential Directive-5 also states that the Secretary of Defense and the Secretary of Homeland Security shall establish appropriate relationships and mechanisms for cooperation and coordination between their two departments. The Secretary of the Department of Homeland Security has responsibility for the National Response Plan, which already assigns responsibilities to DOD, as a cooperating agency, and changes to the plan must be coordinated through his department. 
Our reference to the Secretary of Homeland Security was simply to acknowledge DHS’s coordinating role. DOD also stated that amending the Catastrophic Incident Supplement Execution Schedule of the National Response Plan as we suggested “could be interpreted to imply that it is DOD policy to place National Guard forces into Title 32 status when in fact, the response to the event only requires National Guard in State Active Duty status.” Our recommendation does not state that DOD should place National Guard forces into any particular status. The intent behind our recommendation is to create a mechanism that would trigger DOD’s consideration of whether authorization of Title 32 status is appropriate in the earlier stages of an event, when the event has been designated as “catastrophic” under the National Response Plan. In our view, a decision point for consideration of Title 32 status does not imply that the decision should be made in favor of or in opposition to authorizing Title 32. The Secretary of Defense may decide that it would not be appropriate to offer Title 32 status, and even if the Secretary did decide to offer Title 32, states would still be free to deploy their forces under State Active Duty status if they preferred. In addition, the Department of Defense would not be precluded from considering the issue again at a later time. However, a quicker decision from DOD concerning the appropriateness of Title 32 would, in circumstances where the authorization of Title 32 was deemed to be appropriate, allow states to deploy their National Guard forces under a single status rather than switching statuses in the midst of a catastrophe. This could enhance state responses because, as our report highlights, states face additional administrative burdens when they switch their National Guard forces from State Active Duty status to Title 32 status. We also provided a draft of this report to the Chair of the EMAC Executive Task Force and to the Executive Director of NEMA. 
Relevant sections of the draft report were provided to state and local emergency offices whose experiences we reference. Technical suggestions from these groups have been incorporated as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter. We will then send copies of this report to interested congressional committees as well as the Secretaries of Defense and Homeland Security, members of the EMAC Executive Task Force, the Executive Director of the National Emergency Management Association, and state and local officials contacted for this report. We will also make copies of this report available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact Stanley J. Czerwinski at (202) 512-6806 or [email protected] or Sharon L. Pickup at (202) 512-9619 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. To determine the extent to which the Emergency Management Assistance Compact’s (EMAC) membership and its use have grown since its inception in 1995, we reviewed a number of disaster responses for which the EMAC process was activated, selected based on the type, scale, and time frame of the event, using information provided by EMAC officials. We also interviewed emergency management officials and analyzed sources that provided additional details for events for which the EMAC process was activated, including after-action reports. 
Our work was constrained by data limitations, since EMAC leadership maintained data only sporadically prior to 2005, and data capturing deployments under EMAC for disasters since 2005 were incomplete or inconsistent. To assess the reliability of the deployment data, we reviewed additional documents and conducted additional interviews with local, state, and federal emergency management officials for selected events captured by the database. In cases where the data were inaccurate, we supplemented them with data from more reliable sources. For example, in determining the number of civilian and military personnel deployed through EMAC for the September 11, 2001, terrorist attack on New York and the 2004 Florida hurricanes, we obtained additional data from New York and Florida officials. In addition, in determining the number of out-of-state personnel deployed on September 10, 2005, in response to Hurricane Katrina, we worked with the Department of Defense (DOD) to obtain more accurate data regarding National Guard and active component military deployment figures. We also attended conferences that addressed interstate compacts and EMAC, and we conducted literature and legal reviews of mutual assistance compact structures and governance. To determine the degree to which existing policies, procedures, and practices facilitate successful collaboration among EMAC members and between the EMAC network and federal agencies, we interviewed various local, state, and federal emergency management officials and analyzed the procedures and practices they used during their response. We focused on the 2005 Gulf Coast hurricanes emergency response since it presented the largest use of the EMAC process to date, with approximately 66,000 civilian and National Guard responders deployed across several disciplines. In addition, we selected a cross section of disasters for further analysis based on the type, scale, and timing of the disaster. 
To gain firsthand knowledge of EMAC procedures, we held a combination of in-person and telephone interviews with some of the actual civilian and National Guard emergency responders to the 2004 Florida hurricanes and the 2005 Gulf Coast hurricanes. In addition, we applied criteria for practices GAO previously developed to assess collaboration among EMAC members and between the EMAC network and key federal officials. We used the first six of these eight practices for this report: defining and articulating a common outcome; establishing mutually reinforcing or joint strategies; identifying and addressing needs by leveraging resources; agreeing on roles and responsibilities; establishing compatible policies, procedures, and other means to operate across agency boundaries; developing mechanisms to monitor, evaluate, and report on results; reinforcing agency accountability for collaboration efforts through agency plans and reports; and reinforcing individual accountability for collaborative efforts through performance management systems. We did not use the last two practices because they were beyond the scope of this review, and the sixth practice is discussed in our assessment of the EMAC network's administrative capacity. We then selected examples that illustrated and supported the need for improvement in specific areas where the key practices could be used. We also spoke with individuals who were responsible for various roles during these disasters such as resource identification and requests, coordination, and reimbursement. These discussions were held with officials from the following offices and commands.
California Department of Emergency Management, Sacramento, California
California Highway Patrol, Sacramento, California
California Incident Management Team, Sacramento, California
Colorado Department of Local Affairs – Division of Emergency Management, Denver, Colorado
Council of State Governments, Midwestern Region, Lombard, Illinois
Delaware National Guard, Wilmington, Delaware
Florida Department of Community Affairs/Division of Emergency Management, Tallahassee, Florida
Florida National Guard, St. Augustine, Florida
Georgia Homeland Security – Emergency Management Agency, Atlanta, Georgia
Indiana State Department of Health, Indianapolis, Indiana
Iowa Homeland Security and Emergency Management Division, Johnston, Iowa
Iowa National Guard, Johnston, Iowa
Louisiana Governor's Office of Homeland Security and Emergency Preparedness, Baton Rouge, Louisiana
Louisiana National Guard, Pineville, Louisiana
Mississippi Emergency Management Agency, Pearl, Mississippi
Mississippi National Guard, Jackson, Mississippi
Montana Department of Emergency Affairs/Disaster and Emergency Services Division, Helena, Montana
National Emergency Management Association, Lexington, Kentucky
New Mexico Department of Public Safety/New Mexico State Police, Santa Fe, New Mexico
New York State Emergency Management Office, Albany, New York
North Carolina Department of Crime Control and Public Safety, Raleigh, North Carolina
Regional Coordinating Team, Raleigh, North Carolina
North Dakota Department of Emergency Services-Homeland Security Division, Bismarck, North Dakota
Oregon National Guard, Salem, Oregon
South Carolina National Guard, Columbia, South Carolina
South Carolina Department of Emergency Management, West Columbia, South Carolina
Texas Governor's Division of Emergency Management, Austin, Texas
Virginia Division of Emergency Management, Richmond, Virginia
Washington D.C. Emergency Management Agency, Washington, D.C.
Centers for Disease Control and Prevention, Atlanta, Georgia
Department of Defense – Office of General Counsel, Arlington, Virginia
Department of Defense – Inspector General, Arlington, Virginia
Department of Homeland Security, Washington, D.C.
Federal Emergency Management Agency – Public Assistance, Washington, D.C.
National Guard Bureau, Arlington, Virginia
National Guard Crisis Action Team (Army), Falls Church, Virginia
National Guard Crisis Action Team (Air Force), Camp Springs, Maryland
Furthermore, we reviewed the EMAC process through which state and local assets are requested and activated. In addition, we looked at how the deployment status of National Guard support affected the timeliness of reimbursement. To determine the extent to which the EMAC network has the administrative capacity to build and sustain the collaborative effort to achieve its mission, we interviewed a select number of former and current EMAC leaders as well as emergency management officials from EMAC member states. We also reviewed and analyzed the EMAC strategic planning documents and selected after-action reports. We performed similar reviews of state and federal after-action reports for 2004 through 2006. These discussions and reviews helped us gain an understanding of EMAC organizational structure and developmental and funding plans. We conducted our review from June 2006 through June 2007 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. We provided drafts of relevant sections of this report to state and local emergency management officials whose experiences we reference, and we incorporated their technical corrections as appropriate.
In addition, we requested comments on a draft of this report from DOD and DHS, as well as the Chair of the EMAC Executive Task Force and the Executive Director of NEMA. Comments from DOD are reprinted in appendix II. Their comments are addressed in the Agency Comments section of this report. The Department of Homeland Security provided oral comments, concurring with all of our recommendations. In addition to the contacts named above, Peter Del Toro, Assistant Director; Michael J. Ferren, Assistant Director; Andrew C. Edelson; Gwyneth M. Blevins; James A. Driggins; K. Nicole Haeberle; K. Nicole Harms; Molly E. McNamara; Justin L. Monroe; Sheila D. Rajabiun; and Nathaniel J. Taylor made key contributions to this report.

The Emergency Management Assistance Compact (EMAC) is a collaborative arrangement among member states that provides a legal framework for requesting resources. Working alongside federal players, including the Federal Emergency Management Agency (FEMA) and the National Guard Bureau, EMAC members deployed an unprecedented level of assistance in response to hurricanes Katrina and Rita. Although EMAC played a critical role in our nation's response to these hurricanes, the magnitude of these events revealed limitations. GAO was asked to (1) examine how the use of EMAC has changed since its inception; (2) assess how well existing policies, procedures, and practices facilitate collaboration; and (3) evaluate the adequacy of the EMAC network's administrative capacity to achieve its mission. GAO examined documents and interviewed officials from 45 federal, state, and local agencies and offices. Since its inception in 1995, the EMAC network has grown significantly in size, volume, and the type of resources it provides.
EMAC's membership has increased from a handful of states in 1995 to 52 states and territories today, and EMAC members have used the compact to obtain support for several types of disasters including hurricanes, floods, and the September 11, 2001 terrorist attacks. The volume and variety of resources states have requested under EMAC have also grown significantly. For example, after the September 11, 2001 terrorist attacks, New York requested 26 support staff under EMAC to assist in emergency management operations; whereas, in response to the 2005 Gulf Coast hurricanes, approximately 66,000 personnel--about 46,500 National Guard and 19,500 civilian responders-- were deployed under EMAC from a wide variety of specialties, most of whom went to areas directly impacted by the storms. EMAC, along with its accompanying policies, procedures, and practices, enables its members to overcome differences to achieve a common mission--streamlining and expediting the delivery of resources among members during disasters. While these policies, procedures, and practices have worked well for smaller-scale deployments, they have not kept pace with the changing use of EMAC, sometimes resulting in confusion and deployment delays. The EMAC network has taken steps to address several of these challenges, but additional improvements can be made in a number of areas including clarifying roles and responsibilities of EMAC members and improving existing systems that track resources deployed under EMAC. In addition, a lack of sufficiently detailed federal standards and policies has led to some reimbursement delays and additional administrative burdens. While the EMAC network has developed a basic administrative capacity, opportunities exist for it to further build on and sustain these efforts. The EMAC network has adopted several good management practices, such as using after-action reports to learn from experiences and developing a 5-year strategic plan. 
However, the EMAC network can enhance its administrative capacity by improving how it plans, measures, and reports on its performance. FEMA provided $2 million to help build this capacity in 2003, but the agreement has recently expired. FEMA and EMAC leadership are in the process of finalizing a new 3-year cooperative agreement. Such an agreement would enhance the EMAC network's ability to support its collaborative efforts.
Federal law enforcement agencies pursue fugitives wanted for crimes that fall within their jurisdictions. Generally, federal fugitives are persons whose whereabouts are unknown and who (1) are being sought because they have been charged with one or more federal crimes; (2) have failed to appear for a required court action or for deportation; or (3) have escaped from federal custody. The agencies we contacted generally required that information on these persons be entered quickly onto the NCIC wanted person file to facilitate location by others and enhance public and law enforcement personnel safety. NCIC is the nation's most extensive computerized criminal justice information system. It consists of a central computer at FBI headquarters in Washington, D.C.; dedicated telecommunications lines; and a coordinated network of federal and state criminal justice information systems. The system contains millions of records in 14 files, including files on wanted persons, stolen vehicles, and missing persons. Over 19,000 federal, state, and local law enforcement and other criminal justice agencies in the United States and Canada have direct access to NCIC. An additional 51,000 agencies can access NCIC indirectly through agreements with agencies that have direct access. An Advisory Policy Board composed of representatives from criminal justice agencies throughout the United States is responsible for establishing and implementing the system's operational policies. NCIC and the Advisory Policy Board also receive suggestions from a federal working group composed of several representatives from federal law enforcement agencies, which include ATF, the Customs Service, INS, and USMS. The FBI is responsible for the overall management of NCIC.
Agencies entering data onto the NCIC files are expected to comply with the specifications and standards set by NCIC and must perform periodic reviews to ensure that the information they entered on NCIC is still valid (e.g., that a valid arrest warrant still exists). NCIC personnel are also to periodically review the agencies' NCIC records. Despite agencies' policies calling for entry of fugitives onto the NCIC wanted person file as early as possible after issuance of an arrest warrant, data on many fugitives, including those classified as dangerous, were entered long after arrest warrants were issued. NCIC written policy calls for timely entries, which it defines as entry made immediately after a decision is made to (1) arrest or authorize arrest and (2) extradite the located fugitives (extradition generally involves state or local law enforcement agencies). NCIC officials said that when they review an agency's use of NCIC records they consider entries made after 24 hours (48 hours if a weekend intervenes) as untimely. However, participating agencies are not required to adhere to the suggested NCIC criteria for timeliness. Rather, each agency sets its own criteria on when to enter fugitives onto the wanted person file. FBI, USMS, and ATF policies for entering fugitives onto the wanted person file required entry shortly after the arrest warrant, notice of escape, or other document authorizing detention was issued: the FBI and USMS required immediate entry, meaning within 24 hours; ATF allowed up to 10 days for entry if the delay served a law enforcement purpose. Customs Service policy called for entry after reasonable efforts to locate the fugitive had failed and essentially defined "reasonable" as being after all investigative leads on the fugitive's location have been exhausted. INS' policy provided no time frame for making the entry.
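The informal criterion NCIC reviewers described (an entry made more than 24 hours after the warrant, or 48 hours if a weekend intervenes, is untimely) can be sketched as a simple date check. This is an illustrative reading only, not NCIC code or policy; the function name is hypothetical, and treating the 24-hour window as "by the next calendar day" is a simplifying assumption:

```python
from datetime import date, timedelta

def is_timely_entry(warrant_date: date, entry_date: date) -> bool:
    """One literal reading of the NCIC reviewers' rule of thumb:
    entry within 24 hours of the warrant is timely, extended to
    48 hours when a weekend day falls in the interval. Working at
    day granularity, 24 hours is treated as 'by the next day'."""
    allowed = timedelta(days=1)
    # Walk the days after the warrant; if a Saturday or Sunday
    # intervenes before the entry, extend the allowance to 2 days.
    d = warrant_date
    while d < entry_date:
        d += timedelta(days=1)
        if d.weekday() >= 5:  # 5 = Saturday, 6 = Sunday
            allowed = timedelta(days=2)
            break
    return entry_date - warrant_date <= allowed
```

Under this reading, a warrant issued on a Friday and entered the following Sunday would count as timely, while a Monday warrant entered that Thursday would not.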
Figure 1 illustrates the entry times, by time elapsed since arrest warrant issuance, for the 20,968 FBI, USMS, ATF, and Customs Service fugitive records on the wanted person file as of April 6, 1994, and for the 3,794 of those records that were entered after September 30, 1993. As figure 1 shows, only 34 percent of all fugitives were entered onto the file within 2 days and slightly more than half (54 percent) were entered within 1 week. These entry times were better for the records entered after September 30, 1993. For example, 41 percent were entered within 2 days and 61 percent within 1 week. These entry times and others are shown in appendix II (table II.1). Agencies are to enter a caution notation on the wanted person file records of fugitives who are considered dangerous or suicidal or who have a serious medical condition. According to FBI and USMS officials, most fugitives with a caution notation on their file should be considered dangerous. Figure 2 illustrates the entry times for the 7,864 FBI, USMS, ATF, and Customs Service fugitive records with a caution notation on the wanted person file as of April 6, 1994, and the entry times for the 1,838 of these caution-noted records that were entered after September 30, 1993. Despite the caution notation, as figure 2 shows, only 36 percent of all caution fugitives were entered onto the file within 2 days and slightly more than half (52 percent) were entered within 1 week. Entry times were better for the records entered after September 30, 1993. For example, 42 percent were entered within 2 days and 59 percent within 1 week. These entry times and others are shown in appendix II (table II.2). As noted earlier, except for the Customs Service, the agencies whose records we analyzed generally required that their fugitives be entered onto the wanted person file soon after the arrest warrant was issued. However, the agencies did not always comply with their own policies. 
For example, the FBI’s policy is to enter fugitives onto the file as soon as the decision to make an arrest is made or immediately after the arrest warrant is issued. The FBI has defined “immediately” to mean not more than 24 hours after the arrest warrant is issued because it believes that failure to promptly enter fugitive records onto the file places every member of the criminal justice system, as well as the general public, at risk. However, our comparison of NCIC entries with arrest warrant dates revealed that only 31 percent of the FBI’s entries overall and 34 percent of caution fugitive entries were made on the same day of the arrest warrant, and 48 percent and 50 percent, respectively, were made by the end of the next day. Table 1 shows the agencies’ reported policies for entering fugitives onto the NCIC wanted person file and entry times for their fugitives on the file as of April 6, 1994. Except for the Customs Service, the agencies’ entry times for the April 6, 1994, records that were entered onto the wanted person file after September 30, 1993, were somewhat shorter than the times for all of the April 6 fugitive records. For example, 79 percent of all FBI records on the file as of April 6, 1994, were entered within 4 weeks versus 84 percent of the records entered after September 30, 1993. The Customs Service entered 42 percent of its April 6 records within 4 weeks versus 31 percent of those entered after September 30, 1993. More statistics on entry times for records on the April 6 wanted person file are included in appendix II (tables II.3 through II.6). All of the agencies we contacted believed that unwarranted delays in entering fugitives onto the wanted person file could adversely affect timely apprehensions and endanger lives. Many fugitives are apprehended by an agency other than the one responsible for entering them onto the file. 
Therefore, timely entries onto the file allow law enforcement agencies that come into contact with the fugitives for other reasons, such as minor traffic violations, to check the file and detain these fugitives immediately. NCIC officials told us they developed a procedure to compensate, in part, for delayed entries. Under this procedure, NCIC is to compare a new wanted person file entry with all file queries made 72 hours prior to the entry. When there is a match, NCIC officials are to notify the involved agencies. While the fugitive, for example, may not have been detained after a traffic stop because he or she was not on the file when the query was made, the subsequent matching could provide leads to the person’s location. NCIC officials told us that in June 1995, for example, this procedure provided 369 leads from the wanted or missing person files that resulted in 9 persons being arrested or located. However, the officials did not know how many of the 369 leads involved fugitives who were not apprehended because they had fled before the delayed match occurred. Except for the FBI internal inspection program and USMS’ program reviews, the agencies we contacted did not systematically monitor or have information on the time taken to enter fugitives on the wanted person file. Nor did they have information on the reasons for delays in entering fugitives. Moreover, NCIC had done limited reviews of ATF’s, the Customs Service’s, and USMS’ entry times on the wanted person file. The FBI’s internal inspections are to include a review of entry times for a sample of wanted person file records. According to the FBI, 24 (or 65 percent) of the 37 FBI field office inspections completed between October 1993 and July 1995 had findings regarding the failure to make timely entries. For 21 of the 24 field office inspections, officials reported that over 10 percent of the entries they reviewed were not in compliance with entry time requirements. 
Of the 21 inspections, officials reported that 12 showed delays in over 30 percent of the entries reviewed and that 4 showed delays in over 50 percent of the entries reviewed. Furthermore, 7 of the 24 inspections reported a median delay of 1 week or more, 10 were less than a week, and 7 did not identify the number of days the entries had been delayed. The reports generally did not identify reasons for delayed entries, but officials recommended that the office heads strengthen administrative controls to prevent future delays. Also, 3 of the 24 reports noted that entry delays were found during the preceding review of the involved offices. The remaining 21 reports, based on data the FBI provided us, made no mention of prior inspections. Of the 21 reports, 7 were done during fiscal year 1995. On the basis of information provided by the FBI during our previous fugitive work, we determined that at least three of the seven reports involved offices that were found to have entry time problems during their prior inspections. According to a USMS program review official, its internal program reviews involved looking at some fugitive cases, and these reviews generally found that entries were made within 1 or 2 days after the arrest warrant date. NCIC officials told us that they had reviewed wanted person file use by ATF, the Customs Service, and USMS at least once since 1992. The officials do not review FBI use, relying instead on the FBI's inspection program. An NCIC 1995 report covering various federal agencies, including ATF and the Customs Service, reported problems with one of the Customs Service communications centers that entered records onto the wanted person file. The report stated that there was a significant delay in entering records and that the average delay ranged from 1 week to 1 month. It did not identify the number of records with problems or the reasons for delays.
But, it noted that a Customs Service headquarters official contacted the communications center about taking corrective action. Another NCIC 1995 report involving a review of selected USMS offices and other agencies reported that all records reviewed had been entered in a timely manner. FBI, USMS, ATF, and Customs Service officials we briefed on the results of our analyses of the wanted person file generally expressed concern about our findings. None could explain specifically why entries were delayed. They believed that some were the result of employees becoming involved with higher priority matters (e.g., responding to another more immediate case) or delaying entry for a valid law enforcement purpose (e.g., the opportunity to simultaneously arrest several suspects). However, all agreed that some delays were due to the lack of oversight or various other problems that could be addressed. For example, USMS officials said there might have been some delays in their being notified by (1) the courts of persons who failed to make a required court appearance or (2) the Drug Enforcement Administration regarding drug case fugitives that the USMS is responsible for pursuing. As a result of our work, FBI, USMS, ATF, and Customs Service officials committed to examining their more recent entry times and identifying actions they would take, if necessary, to address any problems. Because of our findings regarding these agencies’ entry times, INS officials also said they would take action to help ensure that INS field offices submit their fugitive cases for entry onto the wanted person file in a timely manner. A Supervisory Special Agent representing the FBI Violent Crime and Fugitive unit in headquarters said his unit reviewed the entry times for all entries to the wanted person file from January 1994 through June 1995. He said they found that 58 percent of their fugitives had been entered within 1 day after the date of the arrest warrant and 78 percent within 10 days. 
These times were better than the overall rates (48 percent by the next day and 79 percent within 4 weeks) we found for all FBI fugitives on the April 6, 1994, wanted person file. The FBI official further stated that the entry times for the persons wanted for the federal crime of unlawful flight to avoid prosecution were much better (80 percent entered in 1 day) than the entry times (40 percent entered in 1 day) for those wanted for other federal crimes, such as bank robbery. In commenting on a draft of this report, FBI officials noted that it was imperative that delays be kept to an absolute minimum and that they would continue efforts to minimize entry delays. They said that the FBI inspection program would continue to audit the field offices’ entries to help ensure the timely entry of fugitives without unmitigated delay. USMS officials said they would review their entry times and, if necessary, send out reminders to their field offices about prompt entries. Entry within 24 hours is one of the new performance measures they plan to use for field offices. The officials believe that this, along with their internal reviews and the periodic NCIC audits, should minimize any future problems with entry times. ATF officials said they reviewed some of their recent entries and the fugitive cases from our work involving entry times over 3 months, which we provided at their request. They said that their review validated our findings and that they advised the agents in charge of the involved ATF field offices of the problems and the need for corrective action. Overall, ATF officials said they would enhance their capacity to monitor entry times and identify problems. Specifically, they said their communications center will obtain more information when making entries onto the wanted person file as requested by ATF’s field offices. 
The field offices are to be contacted about entries made after 15 days (ATF's 10-day period when entry may be delayed for a valid reason plus a 5-day grace period). The officials also noted that ATF's communications center staff will review entry times during the periodic validation checks they make of the agency's wanted person file records. They also stated that ATF's internal inspections staff will consider looking at entry times when they conduct inspections of ATF's field offices. ATF officials said they expected a marked improvement in their entry times within a year. The Customs Service's coordinator told us that the agency has issued new entry criteria, which state: "Effective immediately, whenever an arrest warrant is issued pursuant to a Customs investigation and the arrest of the subject is not anticipated within a reasonable amount of time, a Customs Fugitive Report will be faxed to the Communications Center (for entry into NCIC) within 24 hours. A reasonable amount of time should be that operationally necessary to effect the arrest of the subject, but should not exceed 10 days." Furthermore, the Customs Service's coordinator said the criteria will note that there can be exceptions, such as the need to avoid interference with an ongoing investigation. When the delay is no longer needed, the reason for the delay is to be identified on the submitted fugitive report. Customs Service officials also told us that their agency's office that oversees periodic validation checks of Customs Service wanted person file records will now also look at entry times and will use "within 24 hours" as the criterion for timely entry. As a result of our findings involving other law enforcement agencies and their desire to address problems that may exist or occur, INS officials told us they will add a reminder about the need for timely entries on the form that their field offices complete and that INS headquarters officials then use to make entries to the wanted person file.
Noting that INS had only been using the file since 1991, the officials said they expect, as their use of NCIC grows, to develop improved ways for promoting timely use of the wanted person file as well as other NCIC files. USMS, ATF, Customs Service, and INS officials noted that the periodic audits of the wanted person file by NCIC officials would help agencies identify problem areas. However, NCIC officials told us that they are now doing less checking of entry times because of increased workload and staff downsizing. The FBI, USMS, ATF, and the Customs Service entered many fugitives onto the wanted person file long after their arrest had been authorized. This occurred despite policies generally calling for quick entry and the view that use of the wanted person file aids apprehension and public and law enforcement personnel safety. In response to our findings, the FBI, ATF, and the Customs Service did their own reviews and noted similar entry time problems. USMS officials said they would review their entry times. Given the concern about public and law enforcement personnel safety and fugitive apprehension, we believe it is important that NCIC and its participating agencies have clear, written policies calling for and defining immediate entry and setting forth any exceptions. While there seems to be agreement on the need for prompt entry, there is no generally accepted definition of immediate entry. However, a consensus seems to be evolving, at least among the agencies we reviewed. NCIC officials consider entry after 24 hours to be untimely, although NCIC has not made this a part of its written policies. FBI and USMS officials told us that although a definition does not appear in written form, immediate entry meant within 24 hours. The Customs Service plans to adopt and put the 24-hour criterion in writing. Exceptions to immediate entry could be allowed for those cases where an arrest is expected to occur quickly or for other established operational reasons. 
Furthermore, adherence to the policies could be better ensured if the agencies periodically monitored and reviewed entry times and reasons for delays and communicated problems and suggested actions to their field offices. Finally, although we did not examine the entry times for all law enforcement agencies in the Departments of Justice and the Treasury, we believe that the same reasons for timely entry generally would apply to these other agencies. Moreover, it seems reasonable that timely entries would be of concern to law enforcement organizations in other federal agencies. We recommend that the Attorney General require the Directors of the FBI and USMS and the Commissioner of INS and that the Secretary of the Treasury require the Director of ATF and the Commissioner of the Customs Service to ensure that they have written policies that require immediate entry of fugitives onto the NCIC wanted person file, unless imminent arrest is expected or other mitigating reasons exist. In this regard, we also recommend that the Attorney General, as the official ultimately responsible for NCIC and the wanted person file, seek consensus among federal law enforcement agencies on a definition of immediate entry and include this definition as guidance in the NCIC operating policies on the use of the wanted person file. To ensure that timely entries are made, we recommend that the Attorney General and the Secretary of the Treasury require the agency heads to establish and implement measures for ensuring compliance with the policy for immediate entry of fugitives’ data onto the NCIC wanted person file, including periodically reviewing entry times and identifying and evaluating reasons for delays. 
Also, we recommend that the Attorney General and the Secretary of the Treasury require the heads of other agencies within their respective Departments that use the wanted person file to determine whether they have adequate entry time policies and monitoring mechanisms and, if not, to establish such policies and mechanisms. Furthermore, we recommend that the Attorney General require the FBI Director, working with the NCIC Advisory Policy Board, to (1) advise law enforcement organizations in federal departments and agencies outside of the Departments of Justice and the Treasury of the importance of timely entry and (2) encourage them to determine whether they have adequate entry time policies and monitoring mechanisms. We requested comments on a draft of this report from the Attorney General and the Secretary of the Treasury. Responsible Department of Justice officials from the Office of the Assistant Attorney General for Administration, the FBI, INS, and USMS provided Justice’s comments in a meeting on December 11, 1995. Responsible Department of the Treasury officials from the Office of the Under Secretary for Enforcement, ATF, and the Customs Service provided Treasury’s comments in a meeting on December 5, 1995. Justice officials said that the Department generally agreed with our findings and recommendations and that the Department’s component agencies recognize the need for timely wanted person entries to protect law enforcement officers and the general public and to assist in the location of criminal and alien absconders. They said, however, that setting a single policy for timeliness and measuring timeliness is not a simple matter. They specifically noted the following. A myriad of reasons may preclude the entry of a wanted person within 24 hours of the date of the warrant. 
For example, entry might be delayed because of (1) insufficient data (e.g., date of birth) to properly identify the fugitive; (2) circumstances germane to a particular case (e.g., where the subject is given opportunity to surrender in exchange for the subject’s cooperation); or (3) the involvement of sealed indictments, particularly in multiple subject cases where the government does not want to disclose ongoing investigations not ready for indictment.

When modifying records, it is sometimes easier to delete the entry and reenter it. This would result in an entry date on the wanted person file that appears to be late, but does not reflect the earlier data entry.

Compelling the entry of all INS fugitive alien cases within a specific time frame will not meet the criteria required for successful conclusion of the cases in many instances. For example, in INS’ failure to surrender cases, entry is dependent on meeting certain criteria (e.g., has failed to appear for deportation upon demand by INS) rather than a specific time.

When measuring timeliness, it would be more useful to evaluate the reasons for delayed entry rather than reviewing the entry’s date against that of the warrant.

We recognize that not all fugitives can or should be entered onto the wanted person file within a short time frame and that some entry dates on the file may be incorrect. How much of the delay we found is due to valid reasons or incorrect dates is unknown. We noted earlier in this report that the agencies could not explain specifically why entries were delayed but did identify both valid and invalid reasons why delays might occur (see pp. 11-12). Our recommendations recognize that entry policies need to allow for delays for valid reasons and that monitoring mechanisms need to identify and evaluate reasons why specific entries were delayed. 
Furthermore, we believe that the findings of the FBI’s inspection program, the checks made by ATF and Customs Service officials after we brought our findings to their attention, and the agencies’ overall agreement with our findings and recommendations make it clear that substantial delays have occurred for invalid reasons and that the agencies can improve upon the entry times we found.

Also, the Justice officials said that despite the many reasons for delaying entry and INS’ particular situation, actions have been taken or are being taken to better define and ensure timely entry of fugitives onto the wanted person file. Concerning changes to overall NCIC policy guidance, they said that any changes must be made pursuant to established procedures and that the FBI would formally submit our recommendations for review by the NCIC’s Advisory Policy Board during meetings to be held in the spring of 1996. The Justice officials also noted that the FBI, INS, and USMS are initiating or have been using systems for ensuring timeliness of fugitive entries. They cited the FBI inspection program, which they said recently identified a 22-percent unmitigated delay in fugitive entries in one field office and led to the office taking corrective action. Referring to plans for the USMS to take over responsibility for INS’ criminal fugitives, they noted that INS plans to meet the USMS entry criteria (i.e., immediate entry, which USMS officials earlier told us meant within 24 hours). They also noted that USMS plans to evaluate entry times as part of a system to assess the performance of its field offices on fugitive cases.

Treasury officials generally agreed with the recommendations we made to their Department. 
Furthermore, given that Treasury works closely with Justice to address federal law enforcement issues, the Under Secretary’s representative expressed Treasury’s interest in working with Justice to address our recommendations to seek a consensus on a definition of immediate entry and to bring the need for adequate entry time policies and monitoring mechanisms to the attention of other federal law enforcement organizations. Concerning specific agency actions, the ATF officials noted that ATF (1) issued a memorandum to its field offices in 1995 reiterating its entry time policy and outlining steps taken or planned to enforce it, including establishing audit and follow-up procedures, and (2) will further revise its policy guidance to call for entry within 24 hours. The Customs Service official noted that the Customs Service has issued revised policy guidance to require entry within 24 hours and will follow through on measures to ensure compliance as discussed on page 13 of this report.

We are sending copies of this report to interested congressional committees and members. We are also sending copies to the heads of various other federal agencies that had records on the April 6, 1994, wanted person file for their information. These agencies include the Department of Defense, Department of State, and the U.S. Postal Service. We will also make copies available to others upon request. The major contributors to this report are listed in appendix III. If you have any questions concerning this report, please call me on (202) 512-8777.

Our overall objective was to follow up on information from earlier work that seemed to show that federal law enforcement agencies were not entering their fugitives onto the NCIC wanted person file in a timely manner. 
Specifically, we sought to identify (1) how long federal agencies took to enter fugitives onto the wanted person file; (2) what information the agencies had on entry times and the means used to monitor entry times; and (3) what actions agencies took, considered, or could take to reduce any entry delays. We focused on the FBI, INS, USMS, ATF, and the Customs Service. These agencies accounted for 78 percent of the records on the April 1994 wanted person file that we acquired during our earlier work. They were also the principal fugitive-hunting agencies within the Justice and Treasury Departments, the two departments mainly addressed in our earlier fugitive work.

To accomplish our objectives, we principally analyzed the wanted person file data obtained during a prior review of interagency cooperation on federal fugitive activities. We also interviewed officials and reviewed various documents obtained at the headquarters offices of the FBI, INS, USMS, ATF, and the Customs Service. The wanted person data involved federal fugitive records on the wanted person file as of April 6, 1994. We did not update these data by obtaining and analyzing more recent files since the agencies expressed a willingness to look into or otherwise act to address actual or potential problems with entry time. Sufficient data were available on the wanted person file we earlier obtained to identify the elapsed time between the date of the arrest warrant, or other document authorizing apprehension, and the date of record entry for at least 99 percent of the April 6, 1994, individual records of the FBI, USMS, ATF, and the Customs Service. INS’ wanted person file records did not have this information and thus were excluded from our analysis. Table I.1 shows by agency the number of records we analyzed. We briefed FBI, INS, USMS, ATF, and Customs Service officials responsible for fugitive policies on the results of our analyses. 
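The elapsed-time measure described above is simple date arithmetic: the number of days between the warrant (or other authorizing document) and the NCIC entry, summarized per agency. The sketch below illustrates the computation; the records, dates, and per-agency results shown are invented for the example and are not actual wanted person file data.

```python
from datetime import date
from statistics import mean, median

# Hypothetical fugitive records: (agency, warrant date, NCIC entry date).
# These values are illustrative only, not drawn from the April 6, 1994, file.
records = [
    ("FBI",  date(1994, 1, 3),  date(1994, 1, 4)),
    ("FBI",  date(1994, 1, 10), date(1994, 2, 1)),
    ("USMS", date(1994, 2, 5),  date(1994, 2, 5)),
    ("ATF",  date(1994, 1, 20), date(1994, 1, 27)),
]

# Elapsed days between the authorizing document and record entry, per agency.
by_agency = {}
for agency, warrant, entry in records:
    by_agency.setdefault(agency, []).append((entry - warrant).days)

for agency, delays in sorted(by_agency.items()):
    print(f"{agency}: average {mean(delays):.1f} days, median {median(delays):.1f} days")
```

Records lacking either date (as with INS in the analysis above) would simply be excluded before the subtraction step.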
We also interviewed them as to any (1) current information they might have on entry policies and times; (2) means their agencies had for staying abreast of entry times and for ensuring timely entries; (3) known or possible causes and effects of delayed entries; and (4) actions that had been taken to address entry time problems or actions that would be or could be taken as a result of our findings. We also interviewed NCIC officials and FBI and USMS officials responsible for conducting reviews of field office operations (called “inspections” in FBI and “program reviews” in USMS) about their findings regarding entry times. We reviewed sections of inspection reports that represented, according to FBI officials, all findings on entry time problems from inspections conducted from October 1993 to July 1995. We did not review any USMS reports since officials told us they generally did not find problems with entry times. Officials at the other agencies we contacted said they did not have such reviews (INS) or that their reviews did not look at entry times (ATF and the Customs Service). We also interviewed a representative of the International Association of Chiefs of Police about the importance of the wanted person file in fugitive apprehension and public and law enforcement personnel safety.

[Table: average and median entry times, in days, by agency.]

Major contributors to this report: Daniel C. Harris, Assistant Director, Administration of Justice Issues; Carl Trisler, Evaluator-in-Charge; Andrew Goldberg, Intern; Pamela V. Williams, Communications Analyst; David Alexander, Senior Social Science Analyst. 
GAO reviewed the Federal Bureau of Investigation's (FBI) National Crime Information Center (NCIC) wanted person file, focusing on the: (1) length of time it takes federal law enforcement agencies to enter fugitives onto the file; (2) information that agencies have on data entry times and the means used to monitor entry times; and (3) agencies' plans to reduce entry delays. GAO found that: (1) FBI and the U.S. 
Marshals Service (USMS) require that fugitives be entered onto the wanted person file within one day after an arrest warrant is issued; (2) the Bureau of Alcohol, Tobacco, and Firearms (ATF) allows up to ten days for data entry if the delay serves a valid law enforcement purpose; (3) the Customs Service requires data entry after reasonable efforts to locate a fugitive have failed; (4) the Immigration and Naturalization Service (INS) has no policy regarding the timeliness of data entry; (5) 28 percent of FBI, USMS, ATF, and Customs' entries are made within one day of issuance of the arrest warrant, 54 percent within one week, and 70 percent within four weeks; (6) data entry times for dangerous fugitives do not differ substantially from overall data entry times; (7) ATF and Customs do not monitor their data entry times or know the reasons for delays in entering fugitives on the wanted person file; (8) FBI found that there are delays in 30 to 50 percent of its data entries, with a median delay of about one week; and (9) all the federal law enforcement agencies plan to take action to minimize data entry delays.
Information sharing is essential to enhance the security of our nation and is a key element in developing comprehensive and practical approaches to defending against potential terrorist attacks. Having information on threats, vulnerabilities, and incidents can help an agency better understand the risks and determine what preventative measures should be implemented. The ability to share such terrorism-related information can also unify the efforts of federal, state, and local government agencies, as well as the private sector in preventing or minimizing terrorist attacks. The national commission appointed by members of Congress and the President after the September 11 terrorist attacks (the 9/11 Commission) recognized the critical role of information sharing to the reinvigorated mission to protect the homeland from future attacks. In its final report, the commission acknowledged the government has vast amounts of information but a weak system for processing and using it. The commission called on the President to provide incentives for sharing, restore a better balance between security and shared knowledge, and lead a governmentwide effort to address shortcomings in this area. Since 2001, the President has called for a number of terrorism-related information-sharing initiatives in response to legislative mandates passed by Congress. Relatedly, over the past several years, we have identified potential information-sharing barriers, critical success factors, and other key management issues, including the processes, procedures, and systems to facilitate information sharing between and among government entities and the private sector. Efforts to promote more effective sharing of terrorism-related information must also balance the need to protect and secure it. The executive branch has established requirements for protecting information that is deemed to be critical to our national security. 
Since the information-sharing weaknesses exposed on September 11, the President and the Administration have called for a number of terrorism-related information-sharing initiatives driven predominantly by two statutory mandates—the Homeland Security Act of 2002 and the Intelligence Reform and Terrorism Prevention Act of 2004 (Intelligence Reform Act). Section 892 of the Homeland Security Act requires that the President, among other things, prescribe and implement procedures under which federal agencies can share relevant homeland security information, as defined in the Homeland Security Act, with other federal agencies, including DHS, and with appropriate state and local personnel, such as law enforcement. Congress subsequently mandated a more extensive information-sharing regimen through section 1016 of the Intelligence Reform Act, requiring that the President take action to facilitate the sharing of terrorism information, as defined in the act, by establishing an Information Sharing Environment (ISE) that will combine policies, procedures, and technologies that link people, systems, and information among all appropriate federal, state, local, and tribal entities, and the private sector. The act also requires the President to, among other things, appoint a program manager to oversee development of the ISE, and it establishes an Information Sharing Council to support the President and the program manager with advice on developing the policies, procedures, guidelines, roles, and environment. Together, the mandates call for initiatives designed to facilitate the sharing of terrorism-related information—which encompasses both homeland security and terrorism information—within and among all appropriate federal, state, local, and tribal entities, and the private sector. These and other actions are explained in more detail in table 1. 
In January 2005, GAO designated information sharing for homeland security as a governmentwide high-risk area because, although it was receiving increased attention, this area still faced significant challenges. Since 1998, we have recommended the development of a comprehensive plan for information sharing to support critical infrastructure protection efforts. Key elements of our recommendation can be applied to broader terrorism-related information sharing, including clearly delineating the roles and responsibilities of federal and nonfederal entities, defining interim objectives and milestones, and establishing performance metrics. Over the past several years, we have also issued several reports on challenges related to information sharing. In June 2005, we reported that as federal agencies work with state and local public health agencies to improve the public health infrastructure’s ability to respond to terrorist threats, including acts of bioterrorism, they faced several challenges. First, the national health information technology (IT) strategy and federal health architecture were still being developed. Second, although federal efforts continue to promote the adoption of data standards, developing such standards and then implementing them were challenges for the health care community. Third, these initiatives involved the need to coordinate among federal, state, and local public health agencies, but establishing effective coordination among the large number of disparate agencies would be a major undertaking. In May 2005, we reported that DHS had undertaken numerous initiatives to foster partnerships and enhance information sharing with other federal agencies, state and local governments, and the private sector concerning cyber attacks, threats, and vulnerabilities, but it still needed to address underlying barriers to information sharing. 
At that time, critical infrastructure sector representatives identified several barriers to sharing information with the government: fear that sensitive information would be released, uncertainty about how the information would be used or protected, lack of trust in DHS, and inconsistency in the usefulness of the information shared by DHS. We made recommendations to the Secretary of Homeland Security to strengthen the department’s ability to implement key cybersecurity responsibilities by completing critical activities and resolving underlying challenges. In September 2004, we reported that nine federal agencies had identified 34 major networks—32 operational and 2 in development—supporting homeland security functions, including information sharing. The total cost of the networks for which cost estimates were available was approximately $1 billion per year for fiscal years 2003 and 2004. Among the networks identified, DHS’s Homeland Secure Data Network appeared to be a significant initiative for future sharing of classified homeland security information among civilian agencies and DOD. In July 2004, we reported on the status of the information sharing and analysis centers that were voluntarily created by the private sector owners of critical infrastructure assets to provide an information-sharing and analysis capability. The information-sharing center community had identified a number of challenges, including increasing participation, building a trusted relationship, and sharing information between the federal government and the private sector. We recommended that DHS proceed with the development of an information-sharing plan that, among other things, defines the roles and responsibilities of the various stakeholders and establishes criteria for providing the appropriate incentives to address the challenges. In October 2001, we identified critical success factors and challenges in building successful information-sharing relationships. 
In addition, we identified practices that could be applied to other entities trying to develop the means of appropriately sharing information. One of the most difficult challenges to effective information sharing we identified was overcoming new entities’ initial reluctance to share. Among the best practices we identified were (1) establishing trusted relationships with a wide variety of federal and nonfederal entities that may be in a position to provide potentially useful information and advice, (2) developing standards and agreements on how shared information will be used and protected, and (3) taking steps to ensure that sensitive information is not inappropriately disseminated. The federal government utilizes a variety of policies and procedures, whether prescribed by statute, executive order, or other authority, to limit dissemination and protect against the inadvertent disclosure of sensitive information. For information the government considers critical to our national security, the government may take steps to protect such information by classifying it—for example, Top Secret, Secret, or Confidential—pursuant to criteria established by executive order. The executive order prescribes uniform standards for making all classification decisions across the federal government. Specifically, it prescribes the categories of information that warrant classification, establishes criteria for persons with classification authority, limits the duration of classification decisions, establishes procedures for declassifying or downgrading classified information, prescribes standards for identifying and safeguarding classified materials, requires that agencies prepare classification guides to facilitate proper and uniform classification decisions, and provides for oversight of agency classification decisions. 
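The classification categories described above form an ordered hierarchy: a reader cleared at one level may generally access material at that level or below. The minimal sketch below illustrates this ordering as a lookup-and-compare check; the numeric ranks and the access rule are simplifying assumptions for the example, not a statement of the executive order's actual requirements (which also involve need-to-know and handling controls).

```python
# Illustrative sketch of level-based access checking. The numeric ranks
# are assumptions for this example, not official policy; real systems
# also enforce need-to-know and compartment restrictions.
LEVELS = {"Unclassified": 0, "Confidential": 1, "Secret": 2, "Top Secret": 3}

def may_access(clearance: str, classification: str) -> bool:
    """Return True if a holder of `clearance` may read material at `classification`."""
    return LEVELS[clearance] >= LEVELS[classification]

print(may_access("Secret", "Confidential"))     # True
print(may_access("Confidential", "Top Secret")) # False
```

The same comparison underlies "no read up" rules in formal confidentiality models, where levels are partially ordered rather than strictly ranked.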
Information that does not meet the standards established by executive order for classified national security information but that an agency nonetheless considers sufficiently sensitive to warrant restricted dissemination is generally referred to as sensitive but unclassified. In designating information this way, agencies determine that the information they use must therefore be safeguarded from public release. Such information could include, for example, information at DOJ that is critical to a criminal prosecution. DOJ would protect this information from inappropriate dissemination by identifying it with a designation, such as Law Enforcement Sensitive, and prescribing restricted handling procedures for information with this designation. Some specific designations—such as Sensitive Security Information (SSI), used for certain transportation-related information, and Protected Critical Infrastructure Information (PCII), used for information that has been voluntarily submitted to DHS by the private sector and is related to the security of the nation’s critical infrastructure—have a specific basis in statute, but many other designations that agencies use do not. For example, some agencies use the provisions of the Freedom of Information Act (FOIA), which establishes the public’s legal right of access to government information but also enables the government to withhold certain information from public release, as their basis for designating information sensitive but unclassified. OMB has primary governmentwide oversight responsibility for information management and information security. No governmentwide policies or processes have been established by the executive branch to date to define how to integrate and manage the sharing of terrorism-related information across all levels of government and the private sector despite legislation and executive orders dating back to September 11. 
This is due, in part, to the difficulty of the challenge, as well as the fact that responsibility for creating these policies has shifted among various executive agencies. Most recently, in December 2005, the President once again sought to clarify the roles and responsibilities of the Office of the Director of National Intelligence (ODNI) program manager, the Information Sharing Council, DHS, and other agencies in support of the Information Sharing Environment (ISE). The program manager is in the early stages of addressing the mandate and issued an interim implementation plan to Congress in January 2006 that lays out a number of steps and deadlines for deliverables. However, until governmentwide policies and processes on sharing are in place, the federal government will lack a comprehensive road map to improve the exchange of critical information needed to protect the homeland.

After September 11, the White House and OMB first began to work on information-sharing policies. Following passage of the Homeland Security Act in November 2002, the presidential responsibility for developing policies and processes for information sharing under section 892 of the act was not immediately assigned. On July 29, 2003, the President issued Executive Order 13311 delegating to the Secretary of DHS the responsibility to create and implement policies for sharing sensitive homeland security information, and to report to Congress by November 2003 on implementation of section 892 of the Homeland Security Act. DHS began its efforts but did not provide the implementation report to Congress until February 2004. The report primarily discussed several small-scale efforts within DHS associated with sensitive but unclassified information. It did not provide recommendations for additional legislative measures to increase the effectiveness of the sharing of information between and among federal, state, and local entities. 
The report concluded that to avoid uncertainty and confusion, federal agencies must have a consistent set of policies and procedures for identifying the information to be shared as well as to be safeguarded, but it did not define those policies and procedures or DHS’s actions to develop them. Subsequently, DHS developed a notice of proposed rule making laying out a proposed policy framework to govern sharing sensitive homeland security information in response to the mandate, but after internal Executive Branch review it was not formally transmitted to OMB and, according to DHS officials, it was never issued. When the new Secretary assumed leadership of DHS in February 2005, a reassessment of the proposed rule making was requested in part to assure harmonization with the related requirements of the more recent Intelligence Reform Act, according to DHS’s Deputy Director for Information Sharing and Collaboration. Then, in response to the December 2004 Intelligence Reform Act, the President issued a series of directives to better clarify responsibilities and time frames for achieving a governmentwide road map for information sharing. On April 15, 2005, the President designated a program manager responsible for information sharing across the federal government, as required by the Intelligence Reform Act. On June 2, 2005, the President issued a memorandum directing that during the initial 2-year term of the program manager, the DNI would exercise authority, direction, and control over the program manager. The memorandum also directed the DNI to provide the program manager all personnel, funds, and other resources as assigned. The Intelligence Reform Act had authorized an appropriation of $20 million for each of fiscal years 2005 and 2006. On October 25, 2005, the President issued Executive Order 13388, which established, among other things, priorities for facilitating the sharing of terrorism information and an Information Sharing Council, chaired by the program manager. 
The order also revoked the President’s earlier direction, Executive Order 13356, which had addressed similar issues and imposed similar requirements with respect to the Director of Central Intelligence, OMB, and other agencies. The present order, however, calls for the use of standards and plans developed pursuant to the revoked order. In November 2005, the new Information Sharing Council, tasked with planning for and overseeing the establishment of an ISE for sharing terrorism information, had its first meeting and took over for the former Information Systems Council that OMB had chaired. On December 16, 2005, the President issued a memorandum providing guidance and imposing requirements on the heads of all executive departments and agencies in support of the development of the ISE. The memo delineates roles and responsibilities and sets deadlines for an effort to leverage ongoing efforts consistent with establishing the ISE as required by the Intelligence Reform Act and in accordance with requirements of the Homeland Security Act and related executive orders. For example, the memorandum requires the program manager, in consultation with the council, to conduct and complete, within 90 days of the memorandum’s issuance, a comprehensive evaluation of existing resources pertaining to terrorism information sharing employed by individual or multiple executive departments and agencies. It also tasked the ODNI with developing the policies, procedures, and architectures needed to create the ISE by December 16, 2006. ODNI is in the early stages of addressing the mandate under the Intelligence Reform Act to create an ISE. Soon after his appointment in April 2005, the program manager issued a preliminary report on his plans to establish the ISE as required by the act. 
The program manager later outlined the priorities for his office’s work in establishing the ISE: clarifying the differing standards among agencies for the designation and dissemination of terrorism information; ensuring two-way flow of information from the federal level to the state and local level as well as from state and local agencies to the federal level; providing fast-paced, value-added dissemination of information and informational expertise from the intelligence community; overcoming the hesitancy of the intelligence community to share; ensuring the protection of information privacy and other legal rights; and identifying and removing impediments to information sharing.

On January 9, 2006, ODNI issued an Information Sharing Environment Interim Implementation Plan to Congress that lays out a number of steps and deadlines for deliverables. ODNI noted in the interim plan the need for more time to develop the final implementation plan because the Intelligence Reform Act requirements call for detailed answers that can be provided only after significant coordination between the program manager and all departments and agencies that are ultimately responsible for implementing the ISE. In the plan, ODNI acknowledged the value and challenge of building ownership for the ISE among all of the federal agencies that have a role in homeland security. The plan also stated that adding to the complexity of the task is the fact that the needs of state, local, and tribal governments and private sector entities must also be taken into account. ODNI plans to issue a more comprehensive implementation plan to Congress in July 2006. The interim plan noted that while a large amount of terrorism information is already stored electronically in systems, many users are not connected to those systems. In addition, there remains an unknown quantity of relevant information not captured and stored electronically. 
Thus, the information about terrorists, their plans, and their activities is fragmentary. The interim plan states that the ISE will connect disparate electronic storehouses to take advantage of what already exists. Additionally, it will provide mechanisms for capturing and providing access to terrorism information not currently available electronically.

According to the interim plan, ISE implementation will be based on a three-pronged strategy: (1) implementation of the presidential guidelines and requirements; (2) support and augmentation for existing information-sharing environments, such as the National Counterterrorism Center (NCTC); and (3) a process for integrating the President’s guidelines and requirements with the needs of the broader ISE, which includes addressing the overall ISE’s functions, capabilities, resources, conceptual design, architecture, budget, and performance management process. NCTC was selected to serve as one of the initial information-sharing environments because it is the primary organization in the U.S. government for analyzing and integrating all information pertaining to terrorism and counterterrorism. Moreover, DHS and DOJ will identify one or more environments run by states and major urban areas to evaluate the effectiveness of the flow of terrorism information among federal, state, and local governments and the private sector. While recognizing that creating a fully functioning ISE will take time, the interim plan includes a schedule for completing a number of key milestones. 
For example, by June 14, 2006, the program manager and the Director of NCTC are to have conducted a comprehensive review of all agency missions, roles, and responsibilities related to any aspects of information sharing, especially sharing with state, local, and private entities; developed and disseminated information-sharing standards across the federal, state, local, and private sectors; developed recommendations for sharing with foreign partners and allies; developed privacy guidelines to govern sharing; developed guidelines, training, and incentives to hold personnel accountable for improved information sharing; and developed the ISE investment strategy, among other things. As part of its efforts to provide end-user input to the technical development of the ISE, ODNI plans to continue to expand the use of information access pilot programs at the state and local levels. Currently, ODNI has two ongoing information-sharing technology pilot programs involving the Federal Bureau of Investigation (FBI) and the Department of Energy (DOE). The FBI’s New York Field Office’s Special Operations Division is using handheld wireless devices for field operations to facilitate enhanced communications among counterterrorism personnel by providing rapid wireless access to sensitive but unclassified data sources. DOE is sponsoring a pilot project that will apply technical analytic expertise to intelligence pertaining to nuclear terrorism. The project has established a core group of nuclear expert analysts, across five national laboratories, whose focus is on providing both long-term, strategic analysis of potential sources of nuclear terrorism and better short-term tactical intelligence on this issue. Central to the success of this effort is the sharing of all relevant sensitive information with these laboratories. 
Despite this progress, when the program manager testified before the Subcommittee on Intelligence, Information Sharing, and Terrorism Risk Assessment, Committee on Homeland Security, in November 2005, he expressed concern about whether he had enough resources to meet the mandates in the Intelligence Reform Act. For example, he said that for 2006, he did not have a budget line item and was continuing to work with the DNI on his budget. The Intelligence Reform Act authorized $20 million for fiscal year 2006, but the program manager said he needed $30 million a year at a minimum. At the time, the program manager also said that although he planned to have a staff of 25, he had only 11 federal employees and 6 contractors on board. On January 26, 2006, the program manager announced his resignation from his position. At the time of our review, a new program manager had not yet been appointed. Once a new program manager is named, it will be important for the DNI to monitor milestones set in the interim implementation plan; identify any barriers to achieving the milestones, such as insufficient resources; and recommend to the oversight committees with jurisdiction any necessary changes to the organizational structure or approach to the ISE. Despite the lack of governmentwide policies and procedures for information sharing, many agencies have their own information-sharing initiatives under way. The following are examples of agency-based terrorism-related information-sharing efforts. The FBI leads Joint Terrorism Task Forces, which are one of the means by which the FBI shares information with federal, state, and local law enforcement agencies and officers. At the time of our review, the FBI had 103 Joint Terrorism Task Forces around the country, staffed by bureau officers as well as state and local law enforcement officers. 
The mission of the task forces is to respond to terrorism by combining the national and international investigative resources of federal agencies with the street-level expertise of state and local law enforcement agencies. The FBI and DHS also collaborate to circulate sensitive intelligence information, through bulletins, to state and local officials. These bulletins are intended to alert state and local governments to information that is being noted at the federal level. As part of this effort, they have provided state and local officials guidance on the appropriate control and sharing of this information. Multiple other mechanisms exist to share terrorism-related information. For example, through prior work in 2004, we identified at least 34 major networks that support homeland security functions. Some of the major technology systems we identified in this review and in our other work are described below:
- DHS's Homeland Secure Data Network grew out of a former U.S. Customs Service system that was consolidated with the DHS IT network when the department was created. The system is composed of secure network connections on a data communications framework that connect users to data centers, allowing them to share intelligence and other information securely. The network is eventually intended to connect 600 geographically dispersed DHS intelligence-gathering units; operational components; and other federal, state, and local agencies involved in homeland security activities.
- The DOJ Regional Information Sharing System (RISS) links thousands of local, state, and federal law enforcement agencies throughout the nation, providing secure communications, information-sharing resources, and investigative support to combat multijurisdictional crime and terrorist threats.
RISS was integrated with the DOJ Law Enforcement Online system in 2002 and with the Automated Trusted Information Exchange in 2003, to provide users with access to homeland security, disaster, and terrorist threat information. One of the first steps ODNI plans to undertake in developing the ISE is a review of existing systems such as these, so that it can leverage what has already been done and find ways to connect existing systems. Federal agencies report that they are using a total of 56 different designations for information they have determined is sensitive but unclassified, and agencies that account for a large percentage of the homeland security budget reported using most of these designations. There are no governmentwide policies or procedures that describe the basis on which agencies should designate, mark, and handle this information. In this absence, each agency determines what designations to apply to its sensitive but unclassified information. Such inconsistency can lead to challenges in information sharing; in fact, more than half of the agencies reported encountering challenges in sharing sensitive but unclassified information. Furthermore, most agencies do not determine who and how many employees can make such designations, provide them training on how to do so, or perform periodic reviews of how well their practices are working, nor are there governmentwide policies that require such internal control practices. Without such guidance and monitoring, there is a risk that designations will be misapplied, potentially restricting material unnecessarily or resulting in dissemination of information that should be restricted. As table 2 shows, agencies reported using 56 different designations to identify categories of sensitive but unclassified information—including, for example, For Official Use Only (FOUO) and Protected Critical Infrastructure Information (PCII).
Most of these designations are in use by agencies that account for a large percentage of the homeland security budget (those shown in bold in the table). However, other agencies in the list, such as the Environmental Protection Agency (EPA) and the U.S. Department of Agriculture (USDA), also have homeland security-related sensitive but unclassified information. The numerous designations can be confusing for recipients of this information, such as state and local law enforcement agencies, which must understand and protect the information according to each agency's own rules. For most of these designations, there are no governmentwide policies or procedures to guide agency decision making on using the designations, to explain what they mean across agencies, or to ensure that the information is protected and shared consistently from one agency to another. Different agencies and departments currently define sensitive but unclassified information in many different ways, in accordance with their unique missions and authorities. As a result of the lack of standard criteria for sensitive but unclassified information, multiple agencies often use the same or similar terms to designate information, but they define these terms differently. For example, at least 13 agencies use the designation For Official Use Only, but there are at least five different definitions of FOUO. At least seven agencies or agency components use the term Law Enforcement Sensitive (LES), including the U.S. Marshals Service, the Department of Homeland Security (DHS), the Department of Commerce, and the Office of Personnel Management (OPM). These agencies gave differing definitions for the term.
While DHS does not formally define the designation, the Department of Commerce defines it to include information pertaining to the protection of senior government officials, and OPM defines it as unclassified information used by law enforcement personnel that requires protection against unauthorized disclosure to protect the sources and methods of investigative activity, evidence, and the integrity of pretrial investigative reports. Agencies also use different terminology or restrictive phrases for what is essentially the same type of information. According to a senior official in the Delaware Department of Homeland Security, the multiple designations are a problem; he said that different agencies often use different terms or phrases for the same material. For example, information about a narcotics-smuggling ring that was financing terrorism might be considered sensitive by the DHS Customs and Border Protection component, which would mark it as FOUO or LES and require it to be kept in a locked file, cabinet, or desk when not in use. The same information might be marked DEA-Sensitive by DOJ's Drug Enforcement Administration (DEA), which, under its policy, requires a higher level of protection than is normally afforded sensitive but unclassified information. Additionally, the Department of Defense, the Department of State, the Environmental Protection Agency, and the U.S. Agency for International Development all use the categories under FOIA that exempt information from public disclosure as basic criteria for designating some of their sensitive information. However, for FOIA-exempt material, DOD uses the term For Official Use Only, State uses Sensitive But Unclassified, EPA uses FOIA, and the U.S. Agency for International Development (USAID) uses Sensitive But Unclassified. The use of multiple designations such as these can hamper sharing efforts and confuse end users about the information.
More than half of the agencies reported challenges in sharing sensitive but unclassified information. For example, 11 of the 26 agencies that we surveyed said that they had concerns about the ability of other parties to protect sensitive but unclassified information. These concerns could lead them to share less information than they otherwise would. DHS said that sensitive but unclassified information disseminated to its state and local partners had, on occasion, been posted to public Internet sites or otherwise compromised, potentially revealing possible vulnerabilities to business competitors. The Department of Transportation (DOT) said that the time it takes to determine whether other departments' handling and protection requirements meet or exceed DOT's requirements for Sensitive Security Information represents a challenge. Six agencies said that the lack of standardized criteria for defining what constitutes sensitive but unclassified information was a challenge in their efforts to share information, and DOD said that standardizing the designations and definitions used by federal agencies for sensitive but unclassified information might facilitate the handling and safeguarding of the information, thereby strengthening information-sharing efforts. Four agencies reported that they struggle with the trade-off between limited dissemination of sensitive but unclassified information in order to protect it and broader dissemination to more stakeholders who could use it. Finally, three agencies reported challenges in using their designations that were not related to identifying, sharing, and safeguarding sensitive information, and nine agencies reported no challenges. First responders reported that the multiplicity of designations and definitions not only causes confusion but leads to an alternating feast or famine of information.
Lack of clarity on dissemination rules and lack of common standards for controlling sensitive but unclassified information have led to periods of oversharing, often overwhelming end users with the same or similar information from multiple sources, according to an Illinois State Police officer. Of the 20 agencies that reported on who is authorized to make sensitive but unclassified designations, 13 did not limit which employees could apply at least one of their sensitive but unclassified designations. For example, DHS does not limit which employees may decide whether to designate a document For Official Use Only. At the Department of State, there are no limits on which personnel can designate information as sensitive but unclassified. At the National Aeronautics and Space Administration (NASA), approximately 20,000 civil servants and 80,000 contract employees are authorized to designate information as sensitive but unclassified using the agency's Administratively Controlled Information designation. In addition, 12 of 23 agencies (or 52 percent) reported that they did not have policies or procedures for specialized training for personnel making sensitive but unclassified designations. Several agencies, however, have taken steps to limit the number of designators or have provided at least some limited training to their employees. The U.S. Secret Service limits its designation authority solely to those individuals in the organization with the authority to classify information at the Confidential level under the National Security Information program. DOE restricts the authority to apply and remove the Unclassified Controlled Nuclear Information (UCNI) designation to specially trained UCNI reviewing officials. Also, the Department of State provides training for its designators, and the Department of the Treasury provides training for designators and users of one of its designations.
Eighteen of the 23 agencies that provided us with information do not have policies or procedures for periodically reviewing how well the agency's designation practices are working and how accurately employees are making these decisions. Without oversight, agencies have no way to know the level of compliance with, or the effectiveness of, the policies and procedures they have set. In addition, only 2 of the agencies that provided information on the issue of time limits for sensitive but unclassified information set such limits. In contrast, classified national security information is declassified as specified by the governing executive order. The U.S. Postal Service (USPS) set a limit of 5 years, and USDA set a limit of 10 years, after which the designation would no longer be valid and the information could become publicly available. Two agencies, the General Services Administration and the Department of Commerce, indicated that if it was possible to foresee a specific event that would remove the need for continued protection of the information—for example, a document concerning trade negotiations would be considered sensitive until the negotiations ended—the agency marked the document in such a way that the designation was removed upon the completion of the event. Documents designated sensitive but unclassified at the other agencies, which did not set time limits, will remain so designated until a review of the document's status is triggered by an action such as a FOIA request from a private citizen. Continued restriction limits access to this information over the long term. To address the obstacles to information sharing, the Homeland Security Act required the President to, among other things, develop policies for sharing homeland security information, including sensitive but unclassified information, with appropriate state and local personnel. He delegated this responsibility to the Secretary of the newly created DHS in July 2003.
Later, in his December 2005 memo, the President gave agencies 90 days to inventory their sensitive but unclassified procedures and report them to ODNI, which in turn is to provide them to the Secretary of DHS and the Attorney General. Working in coordination with the Secretaries of State, Defense, and Energy and with the DNI, they have 90 days from receiving the inventories to develop recommended procedures that will provide a more standardized approach for designating homeland security information, law enforcement information, and terrorism information as sensitive but unclassified. The memorandum also requires that ODNI, in coordination and consultation with other agencies, develop recommendations for standardizing sensitive but unclassified procedures for all information not addressed by the first set of recommendations. In part because of the complexity of the task, shifting responsibilities, and missed deadlines, more than 4 years after September 11, the federal government still lacks comprehensive policies and processes to improve the sharing of information that is critical to protecting our homeland. After the 9/11 Commission recommended that the sharing and uses of information be guided by a set of practical policy guidelines, Congress passed the Intelligence Reform Act and mandated the creation of an Information Sharing Environment (ISE), to be planned for and overseen by a program manager. While recognizing that creating a fully functioning ISE will take time, the program manager's interim implementation plan includes a schedule for meeting a number of key deadlines. For example, by June 14, 2006, the program manager and the Director of NCTC are to have conducted a comprehensive review of all agency missions, roles, and responsibilities, both as producers and users of terrorism information. Given that the program manager resigned and, at the time of our review, a new one had not been appointed, meeting this deadline will be difficult.
When a new program manager is appointed, ensuring the success of this project will require support and vigilance from ODNI as well as from the other agencies mentioned in the President's memorandum. It will be essential that the DNI assess progress toward meeting the milestones in the interim plan, identify and address any barriers to progress, and recommend to the congressional oversight committees with jurisdiction any changes necessary to achieve the goals of the mandates. The President's December 2005 memorandum recognizes the need to standardize procedures for sensitive but unclassified information. Currently, no governmentwide policies or procedures exist for most sensitive but unclassified designations. Our work on the policies and procedures agencies currently use can help validate ODNI's efforts in this area. It will be important that the new policies and procedures provide for consistent application of the designations and consistent handling requirements. Establishing governmentwide policies and procedures is a critical first step, but unless agencies also ensure that employees have the tools they need to apply the designations accurately and establish systems for monitoring their use, designations could be misapplied, and information might be unnecessarily restricted or released when it should be protected. In the end, agencies need the flexibility to use designations that meet their mission needs, but, where feasible, using the same designations and handling procedures across agencies for similar information will provide for more consistent sharing and protection of sensitive information. Without continued vigilance, there is a danger of further delays in developing a governmentwide information-sharing policy and in establishing sensitive but unclassified policies that better enable the sharing of information critical to the protection of the homeland.
To ensure effective implementation of the Intelligence Reform Act, we recommend that the following six actions be taken. We recommend that the Director of National Intelligence (1) assess progress toward the milestones set in the Interim Implementation Plan; (2) identify any barriers to achieving these milestones, such as insufficient resources, and determine ways to resolve them; and (3) recommend to the oversight committees with jurisdiction any necessary changes to the organizational structure or approach to creating the ISE. In carrying out the President's December 2005 mandates for standardizing sensitive but unclassified information, we recommend that the Director of National Intelligence and the Director of OMB (1) use the results of our work to validate the inventory of designations that agencies are required to conduct in accordance with the memo and (2) issue a policy that consolidates sensitive but unclassified designations where possible and addresses their consistent application across agencies. We recommend that the Director of OMB, in his oversight role with respect to federal information management, work with other agencies to develop and issue a directive requiring that agencies have in place internal controls that meet the standards set forth in GAO's Standards for Internal Control in the Federal Government. This directive should include guidance for employees to use in deciding what information to protect with sensitive but unclassified designations; provisions for training on making designations and on controlling and sharing such information with other entities; and a review process to determine how well the program is working. We requested comments on a draft of this report from the Director of OMB and the Director of National Intelligence or their designees. We received comments from OMB that neither agreed nor disagreed with our findings and recommendations.
OMB commented that once the program manager and others completed their work to establish governmentwide policies, procedures, or protocols to guide the sharing of information as it relates to terrorism and homeland security, they would work with the program manager and all agencies to determine what additional steps are necessary, if any. ODNI, however, declined to comment on our draft report, stating that the review of intelligence activities is beyond GAO’s purview. We are disappointed by the lack of an ODNI response to our report on the critical issue of information-sharing efforts in the federal government. We have placed information sharing for homeland security on GAO’s high-risk list, in part because federal agencies have not done an adequate job of sharing critical information in the past and because success in this area will involve the combined efforts of multiple agencies and key stakeholders. The President has tasked ODNI with key coordinating roles in furtherance of this effort. In declining to comment, ODNI stated that our draft report was “very broad” and that it “addresses a number of intelligence-related issues, including a discussion of the management of and specific recommendations to the Director of National Intelligence (DNI).” ODNI then made a general reference to the DOJ having “previously advised” GAO that “the review of intelligence activities is beyond the GAO’s purview.” In DOJ’s comments on a 2003 GAO report on information sharing, DOJ similarly said “the review of intelligence activities is an arena beyond GAO’s purview.” However, there was no legal analysis attached to either of these statements. There is a 1988 DOJ Office of Legal Counsel (OLC) opinion that offers DOJ’s views on our authority to review intelligence activities in the context of foreign policy. In the 1988 opinion, OLC asserted that by enacting the current intelligence oversight framework, codified at 50 U.S.C. 
§ 413, Congress intended the intelligence committees to maintain exclusive oversight with respect to intelligence activities, foreclosing reviews by GAO. Although we recognize that section 413 codified practices to simplify the congressional intelligence oversight process, we do not agree with DOJ’s view that the intelligence oversight framework precludes GAO reviews in the intelligence arena. Neither section 413 nor its legislative history states that the procedures established therein constitute the exclusive mechanism for congressional oversight of intelligence activities, to the exclusion of other relevant committees or GAO. GAO has broad statutory authority to evaluate agency programs and investigate matters related to the receipt, disbursement, and use of public money. GAO also has broad authority to inspect and obtain agency information and records, subject to a few limited exceptions. In any event, we do not agree with ODNI’s characterization that our review involved “intelligence activities.” Our review did not involve evaluation of the conduct of actual intelligence activities. Rather, our review addresses the procedures in place to facilitate the sharing of a broad range of information across all levels of government. In our view ODNI’s concept of “intelligence activities” is overly broad and would extend to governmentwide information-sharing efforts clearly outside the traditional intelligence arena—including, for example, procedures for sharing sensitive but unclassified information unrelated to homeland security. The use of such a sweeping definition to limit GAO’s work would seriously impair Congress’s oversight of executive branch information-sharing activities. Given the above, we strongly disagree with ODNI’s reasons for declining to comment on our report. ODNI’s letter is reprinted in appendix III. 
As agreed with your offices, unless you publicly release the contents of this report earlier, we plan no further distribution until 30 days from the report date. We will then send copies of this report to the Director, Office of Management and Budget; the Director of National Intelligence; the Secretaries and heads of the 26 departments and agencies in our review; and interested congressional committees. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact either David Powner at 202-512-9286 or [email protected], or Eileen Larence at 202-512-6510 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. The objectives of our review were to (1) determine the status of efforts to establish governmentwide policies and processes for sharing terrorism-related information between the federal government and its state, local, and private sector partners and (2) identify the universe of different sensitive but unclassified designations agencies apply to homeland security and other sensitive information and determine the extent to which these agencies have policies and procedures in place to ensure their consistent use. To determine the status of efforts to establish governmentwide policies and processes for sharing terrorism information, we reviewed applicable federal laws, executive orders, presidential directives, memorandums, reports, and testimony.
Because they have roles in cross-government information sharing, we also interviewed the Deputy Director and Chief of Staff of the Information Sharing and Collaboration Office at the Department of Homeland Security and the Chief of the Information Policy and Technology Branch, Office of Management and Budget, to determine efforts to date and the current status of required actions. We also interviewed Congressional Research Service staff who work on information-sharing issues and a member of the 9/11 Public Discourse Project, a privately funded continuation of the 9/11 Commission. We gathered publicly available documents on the Office of the Director of National Intelligence's (ODNI) establishment of the Information Sharing Council and the Information Sharing Environment and met informally with a senior ODNI official, who provided us with the interim implementation plan. During the course of our review, we were negotiating protocols for working with ODNI. We also surveyed 26 major federal agencies: those that are subject to the requirements in the Chief Financial Officers Act, as well as the Federal Energy Regulatory Commission and the U.S. Postal Service, because our experience with these two agencies indicated that they used sensitive but unclassified designations. We obtained information on their sharing processes for terrorism-related information and descriptions of any actions they had taken to encourage or improve the sharing of this information. We also asked the agencies about challenges pertaining to identifying, safeguarding, and sharing sensitive but unclassified information. We queried the agencies on the types of sensitive but unclassified designations they use; the policies, procedures, and protocols they have in place for each designation; and the extent to which they provide controls for protecting and policies for sharing these types of information.
We aggregated the data by agency and sent them back to the agencies' responding officials, who reviewed the information for completeness and accuracy. We collected and reviewed applicable federal laws and regulations, policies, procedures, and documents related to the sensitive but unclassified and national security classification processes for federal agencies. We met with officials at the National Archives and Records Administration's Information Security Oversight Office and discussed policies and processes for handling, overseeing, and sharing national security-related information as compared with those for handling, sharing, and overseeing sensitive but unclassified information. We also contacted the International Association of Chiefs of Police, the International Association of Fire Chiefs, and the National Governors Association to obtain information from end users such as state and local law enforcement, first responders, and state-level homeland security and disaster response agencies, since such organizations are likely to require access to sensitive but unclassified information. To determine whether appropriate policies and procedures were in place, we relied on GAO's Standards for Internal Control in the Federal Government for benchmarks and standards against which to assess each agency's sensitive but unclassified designation policies and procedures. We conducted our work from May 2005 through February 2006 in accordance with generally accepted government auditing standards. The following information was provided by the 26 federal agencies that we surveyed. The agencies were queried on the types of sensitive but unclassified designations they use; the basis of the designations; and policies, procedures, and protocols for designating, handling, and sharing these types of information. We provided the agencies with the opportunity to review their summarized information for accuracy and completeness.
Designation: Sensitive Security Information
Basis for designation: Departmental Regulation 3440-2, Control and Protection of Sensitive Security Information (January 2003)
Definition: The designation is used for unclassified information of a sensitive nature that, if publicly disclosed, could be expected to have a harmful impact on the security of Federal operations or assets, the public health or safety of the citizens of the United States or its residents, or the nation's long-term economic prosperity, and which describes, discusses, or reflects
- the ability of any element of the critical infrastructure of the United States to resist intrusion, interference, compromise, theft, or incapacitation by either physical or computer-based attack or other similar conduct that violates federal, state, or local law; harms interstate or international commerce of the United States; or threatens public health or safety;
- any currently viable assessment, projection, or estimate of the security vulnerability of any element of the critical infrastructure of the United States, specifically including—but not limited to—vulnerability assessment, security testing, risk evaluation, risk management planning, or risk audit; or
- any currently applicable operational problem or solution regarding the security of any element of the critical infrastructure of the United States, specifically including—but not limited to—the repair, recovery, redesign, reconstruction, relocation, insurance, and continuity of operations of any element.

Department of Commerce (continued)
Department of Defense (continued)
Systematic review process: No

Designation: For Official Use Only
Basis for designation: FOIA, as amended; Privacy Act of 1974, as amended; Section 208 of the E-Government Act of 2002 (44 U.S.C.
§ 3501, note); Handbook for Information Technology Security Risk Assessment Procedures OCIO-07 (January 2004); and Handbook for Information Assurance Security OCIO-01 (December 2005)
Definition: The designation is used for information that (1) falls within one or more of the nine exemptions or three exclusions of the Freedom of Information Act (FOIA), (2) is protected by the Privacy Act of 1974, or (3) is marked by the Office of the Inspector General to prohibit distribution to unauthorized persons.
Designating authority: The owner of the information.
Policies or procedures for specialized training for designators: No
Systematic review process: No

Designation: Official Use Only
Basis for designation: DOE Order 471.3 (April 2003)
Definition: Certain unclassified information that may be exempt from public release under the Freedom of Information Act and has the potential to damage governmental, commercial, or private interests if disseminated to people who do not need the information to perform their jobs or other DOE-authorized functions.
Designating authority: Any DOE or DOE contractor employee.
Policies or procedures for specialized training for designators: No
Systematic review process: No

Designation: Unclassified Controlled Nuclear Information
Basis for designation: Section 148 of the Atomic Energy Act of 1954, as amended (42 U.S.C. § 2168), 10 C.F.R.
pt. 1017, DOE Order 471.1A (June 2000)
Definition: The designation is used for certain unclassified government information prohibited from unauthorized dissemination under section 148 of the Atomic Energy Act, which concerns atomic energy defense programs and pertains to (i) the design of production or utilization facilities, (ii) security measures for the physical protection of production or utilization facilities or nuclear material contained in these facilities or in transit, or (iii) the design, manufacture, or utilization of nuclear weapons or components that were once classified as Restricted Data, whose unauthorized dissemination could reasonably be expected to have a significant adverse effect on the health and safety of the public or the common defense and security by significantly increasing the likelihood of (i) illegal production of nuclear weapons or (ii) theft, diversion, or sabotage of nuclear materials, equipment, or facilities.

Designation: Sensitive But Unclassified
Basis for designation: Section 201(a) of the Public Health Security and Bioterrorism Preparedness and Response Act of 2002 (42 U.S.C. § 262a(h)), and 42 C.F.R. pt. 73 (Select Agents and Toxins) (new policy in draft)

Centers for Disease Control and Prevention (continued)

Designation: Contractor Access Restricted Information
Basis for designation: 41 U.S.C. § 401; Federal Acquisition Regulations 1.102; Executive Order 11222 (May 8, 1965) (new policy in draft)
Definition: Unclassified information that involves functions reserved to the federal government as vested by the Constitution as inherent power or as implied power necessary for the proper performance of its duties.
Designating authority: Not specified
Policies or procedures for specialized training for designators: No
Systematic review process: No

Designation: For Official Use Only
Basis for designation: FOIA, as amended (new policy in draft)
Definition: This designation is applied to unclassified information that is exempt from mandatory release to the public under FOIA.
Designating authority: Not specified
Policies or procedures for specialized training for designators: No
Systematic review process: No

Designation: Law Enforcement Sensitive
Basis for designation: Not specified (new policy in draft)
Definition: The designation is used for law enforcement purposes. Information that could reasonably be expected to interfere with law enforcement proceedings, would deprive a person of a right to a fair trial or impartial adjudication, could reasonably be expected to constitute an unwarranted invasion of the personal privacy of others, would disclose the identity of a confidential source, would disclose investigative techniques and procedures, or could reasonably be expected to endanger the life or physical safety of any individual is to be marked Law Enforcement Sensitive.
Systematic review process: No

Designation: Operations Security Protected Information
Basis for designation: National Security Decision Directive 298 (January 1988) (new policy in draft)
Definition: The designation is applied to unclassified information concerning CDC mission, functions, operations, or programs that require protection in the national interest or the security of homeland defense.

Centers for Disease Control and Prevention (continued)

Designation: Sensitive Security Information
Basis for designation: Homeland Security Act of 2002 (Pub. L. No. 107-296); Maritime Transportation Security Act of 2002 (Pub. L. No. 107-295); 49 U.S.C. § 114(s); 49 C.F.R. pt. 1520 (May 2004); Management Directive (MD) 11056 (December 2005).
Department of Homeland Security (continued)

Department of Justice (continued)

Systematic review process: No

Designation: For Official Use Only
Basis for designation: Intelligence Policy Manual (August 2005)
Definition: The designation is used for information that may be exempt from mandatory release to the public under the Freedom of Information Act (FOIA), 5 U.S.C. § 552.
Designating authority: Any FBI employee or contractor in the course of performing assigned duties may designate information as FOUO.
Policies or procedures for specialized training for designators: No
Systematic review process: No

Designation: Law Enforcement Sensitive
Basis for designation: Intelligence Policy Manual (August 2005)
Definition: The designation is used to protect information compiled for law enforcement purposes. LES is a subset of FOUO.
Designating authority: Any FBI employee or contractor in the course of performing assigned duties may designate information as LES.
Policies or procedures for specialized training for designators: No
Systematic review process: No

Designation: Limited Official Use
Basis for designation: DOJ Order 2620.7, Control and Protection of Limited Official Use Information (September 1982)

Federal Bureau of Investigation (continued)

Environmental Protection Agency (continued)

Designation: For Official Use Only
Basis for designation: NHSRC-70-01, Rev. 0 (November 2004)
Definition: For Official Use Only (FOUO) is applied by the NHSRC as the sole designator for sensitive but unclassified (SBU) information.
The NHSRC uses the following definition of sensitive but unclassified, taken from the Computer Security Act of 1987, Public Law 100-235, which defines “sensitive information” as “any information, the loss, misuse, or unauthorized access to or modification of which could adversely affect the national interest or the conduct of federal programs, or the privacy to which individuals are entitled under section 552a of Title 5 (Privacy Act) but which has not been specifically authorized under criteria established by an Executive order or an Act of Congress to be kept secret in the interest of national defense or foreign policy.”
Designating authority: Any National Homeland Security Research Center employee, contractor, subcontractor, or grantee may designate information FOUO. However, such designations must be certified by an NHSRC Designated Review Authority (DRA).
Policies or procedures for specialized training for designators: Yes
Systematic review process: Yes

Designation: Critical Energy Infrastructure Information
Basis for designation: FOIA, as amended; 18 C.F.R. §§ 388.112-.113; and Commission Order Nos. 630, 630-A, 649, and 662
Definition: Information about proposed or existing critical infrastructure that relates to the production, generation, transportation, transmission, or distribution of energy; could be useful to a person in planning an attack on critical infrastructure; is exempt from mandatory disclosure under the Freedom of Information Act, 5 U.S.C. § 552; and does not simply give the location of the critical infrastructure.
Systematic review process: No

Designation: Non-Public Information
Basis for designation: FOIA, as amended; 18 C.F.R. §§ 1b.9, 1b.20-.21(c), 385.410, 385.606, 388.112; 15 U.S.C. § 717g(b); 16 U.S.C. § 825(b)
Definition: Any information that is not routinely provided to the public absent a Freedom of Information Act (FOIA) request, including information that would not be released under the FOIA.
Non-Public Information includes, for example:
- information that is submitted to the Commission with a request for non-public treatment under 18 C.F.R. § 388.112(a), which applies to information the submitter claims is exempt from mandatory disclosure under the FOIA;
- information concerning dispute resolution communications (see 18 C.F.R. § 385.606);
- information covered by a protective order (see 18 C.F.R. § 385.410);
- information obtained during the course of an investigation (see 18 C.F.R. §§ 1b.9, 1b.20);
- information and documents obtained through the Hotline Staff (see 18 C.F.R. § 1b.21(c));
- information obtained during the course of examination of books or other accounts (see 15 U.S.C. § 717g(b); 16 U.S.C. § 825(b)); and
- information exempt from disclosure under the FOIA, such as drafts, staff deliberative documents, and attorney work product and attorney-client communications exempt from disclosure under 5 U.S.C. § 552(b)(5).

Designation: Sensitive But Unclassified
Basis for designation: Computer Security Act of 1987; Privacy Act, as amended; and NPR 1600.1 (November 2005)
Definition: Unclassified information or material determined to have special protection requirements to preclude unauthorized disclosure; to avoid compromises, risks to facilities, projects, or programs, or threats to the security and/or safety of the source of information; or to meet access restrictions established by laws, directives, or regulations, such as ITAR (International Traffic in Arms Regulations), MCTL (Militarily Critical Technologies List), FOIA (Freedom of Information Act), UCNI (Unclassified Controlled Nuclear Information), and Scientific and Technical Information (STI).

Designation: Sensitive But Unclassified
Basis for designation: NSF Privacy Regulations (45 C.F.R. § 613), NSF Freedom of Information Act Regulations (45 C.F.R. § 612), NSF Bulletin 05-14 (September 2005)
Definition: The designation is given to information that is defined as sensitive under the Privacy Act.
Designating authority: Not specified in response.
Policies or procedures for specialized training for designators: No
Systematic review process: No

Designation: Safeguards Information
Basis for designation: Section 147 of the Atomic Energy Act of 1954, as amended (42 U.S.C. § 2167); 10 C.F.R. § 73.21; Directive 12.6 (December 1999) (policy revision in draft)
Definition: Safeguards Information means information, not otherwise classified as National Security Information or Restricted Data, that specifically identifies (1) a licensee’s or applicant’s detailed control and accounting procedures or security measures (including security plans, procedures, and equipment) for the physical protection of special nuclear material, by whomever possessed, whether in transit or at fixed sites, in quantities determined by the Commission to be significant to the public health and safety or the common defense and security; (2) security measures (including security plans, procedures, and equipment) for the physical protection of source material or byproduct material, by whomever possessed, whether in transit or at fixed sites, in quantities determined by the Commission to be significant to the public health and safety or the common defense and security; or (3) security measures (including security plans, procedures, and equipment) for the physical protection of and the location of certain plant equipment vital to the safety of production or utilization facilities involving nuclear materials covered by paragraphs (1) and (2), if the unauthorized disclosure of such information could reasonably be expected to have a significant adverse effect on the health and safety of the public or the common defense and security by significantly increasing the likelihood of theft, diversion, or sabotage of such material or such facility.
In addition to the individual named above, Susan Quinlan, Assistant Director; Rochelle Burns; Joanne Fiorino; Thomas Lombardi; Lori Martinez; Vickie Miller; David Plocher; John Stradling; Morgan Walts; and Marcia Washington made key contributions to this report.

A number of initiatives to improve information sharing have been called for, including in the Homeland Security Act of 2002 and the Intelligence Reform and Terrorism Prevention Act of 2004. The 2002 act required the development of policies for sharing classified and sensitive but unclassified homeland security information. The 2004 act called for the development of an Information Sharing Environment for terrorism information. This report examines (1) the status of efforts to establish government-wide information sharing policies and processes and (2) the universe of sensitive but unclassified designations used by the 26 agencies that GAO surveyed and their related policies and procedures. More than 4 years after September 11, the nation still lacks governmentwide policies and processes to help agencies integrate the myriad ongoing efforts, including the agency initiatives we identified, to improve the sharing of terrorism-related information that is critical to protecting our homeland. Responsibility for creating these policies and processes shifted initially from the White House to the Office of Management and Budget (OMB), and then to the Department of Homeland Security, but none has yet completed the task. Subsequently, the Intelligence Reform Act called for creation of an Information Sharing Environment, including governing policies and processes for sharing, and a program manager to oversee its development. In December 2005, the President clarified the roles and responsibilities of the program manager, now under the Director of National Intelligence, as well as the new Information Sharing Council and the other agencies, in support of creating an Information Sharing Environment by December 2006.
At the time of our review, the program manager was in the early stages of addressing this mandate. He issued an interim implementation report with specified tasks and milestones to Congress in January 2006, but soon after announced his resignation. This latest attempt to establish an overall information-sharing road map under the Director of National Intelligence, if it is to succeed once a new manager is appointed, will require the Director's continued vigilance in monitoring progress toward meeting key milestones, identifying any barriers to achieving them, and recommending any necessary changes to the oversight committees. The agencies that GAO reviewed are using 56 different sensitive but unclassified designations (16 of which belong to one agency) to protect information that they deem critical to their missions--for example, sensitive law or drug enforcement information or controlled nuclear information. For most designations there are no governmentwide policies or procedures that describe the basis on which an agency should assign a given designation and ensure that it will be used consistently from one agency to another. Without such policies, each agency determines what designations and associated policies to apply to the sensitive information it develops or shares. More than half the agencies reported challenges in sharing such information. Finally, most of the agencies GAO reviewed have no policies for determining who and how many employees should have authority to make sensitive but unclassified designations, providing them training on how to make these designations, or performing periodic reviews to determine how well their practices are working. The lack of such recommended internal controls increases the risk that the designations will be misapplied. This could result in either unnecessarily restricting materials that could be shared or inadvertently releasing materials that should be restricted.
SBInet includes the acquisition, development, integration, deployment, and operations and maintenance of a mix of surveillance technologies, such as cameras, radars, sensors, and C3I technologies. The initial focus of SBInet has been on addressing the requirements of CBP’s Office of Border Patrol, which is responsible for securing the borders between the land ports of entry. Longer term, SBInet is to address requirements of CBP’s Office of Field Operations, which controls vehicle and pedestrian traffic at the ports of entry, and its Office of Air and Marine Operations, which operates helicopters, fixed-wing aircraft, and marine vessels used in securing the borders. (See fig. 1 for the potential long-term SBInet concept of operations.) Surveillance technologies are to include a variety of sensor systems. Specifically, unattended ground sensors are to be used to detect heat and vibrations associated with foot traffic and metal associated with vehicles. Radar mounted on fixed and mobile towers is to detect movement, and cameras on fixed and mobile towers are to be used by operators to identify and classify items of interest detected and tracked by ground sensors and radar. Aerial assets are also to be used to provide video and infrared imaging to enhance tracking targets. These technologies are generally to be acquired through the purchase of commercial off-the-shelf (COTS) products. C3I technologies (software and hardware) are to produce a common operating picture (COP)—a uniform presentation of activities within specific areas along the border. Together, the sensors, radar, and cameras are to gather information along the border and transmit this information to COP terminals located in command centers and agents’ vehicles, which in turn are to assemble it to provide CBP agents with border situational awareness.
Among other things, COP hardware and software are to allow agents to (1) view data from radar and sensors that detect and track movement in the border areas, (2) control cameras to help identify and classify illegal entries, (3) correlate entries with the positions of nearby agents, and (4) enhance tactical decision making regarding the appropriate response to apprehend an entry, if necessary. To increase border security and decrease illegal immigration, DHS launched SBI more than 4 years ago after canceling its America’s Shield Initiative program. Since fiscal year 2006, DHS has received about $4.4 billion in appropriations for SBI, including about $2.5 billion for physical fencing and related infrastructure, about $1.5 billion for virtual fencing (surveillance systems) and related technical infrastructure (towers), and about $300 million for program management. The SBI Program Executive Office, which is organizationally within CBP, is responsible for managing key acquisition functions associated with SBInet, including prime contractor tracking and oversight. It is organized into four components: SBInet System Program Office (referred to as the SPO in this report), Systems Engineering, Business Management, and Operational Integration. As of December 31, 2009, the SBI Program Executive Office was staffed with 188 people—87 government employees, 78 contractor staff, and 13 detailees. In September 2006, CBP awarded a 3-year prime contract to the Boeing Company, with three additional 1-year options, for designing, producing, testing, deploying, and sustaining SBI. In 2009, CBP exercised the first option year. Under this contract, CBP has issued 10 task orders that relate to SBInet, covering, for example, COP design and development, system deployment, and system maintenance and logistics support. As of December 2009, 4 of the 10 task orders had been completed and 6 were ongoing. (See table 1 for a summary of the SBInet task orders.)
One of the completed task orders is for an effort known as Project 28, which is a prototype system that covers 28 miles of the border in CBP’s Tucson Sector in Arizona, and has been operating since February 2008. However, its completion took 8 months longer than planned because of problems in integrating system components (e.g., cameras and radars) with the COP software. As we have reported, these problems were attributable to, among other things, limitations in requirements development and contractor oversight. Through the task orders, CBP’s strategy is to deliver SBInet capabilities incrementally. To accomplish this, the SPO has adopted an evolutionary system life cycle management approach in which system capabilities are to be delivered to designated locations in a series of discrete subsets of system functional and performance capabilities that are referred to as blocks. The first block, which has been designated as Block 1, includes the purchase of commercially available surveillance systems, development of customized COP systems and software, and use of existing CBP communications and network capabilities. Such an incremental approach is a recognized best practice for acquiring large-scale, complex systems because it allows users access to new capabilities and tools sooner, and thus permits both their early operational use and evaluation. Subsequent increments of SBInet capabilities are to be delivered based on feedback and unmet requirements, as well as the availability of new technologies. In general, the SBInet life cycle management approach consists of four primary work flow activities: (1) Planning Activity, (2) System Block Activity, (3) Project Laydown Activity, and (4) Sustainment Activity. During the Planning Activity, the most critical user needs are to be identified and balanced against what is affordable and technologically available. 
The outcome of this process is to be a set of capability requirements that are to be acquired, developed, and deployed as a specific block. This set of capabilities, once agreed to by all stakeholders, is then passed to the System Block Activity, during which the baseline system solution to be fielded is designed and built. Also as part of this activity, the verification steps are to be conducted on the individual system components and the integrated system solution to ensure that they meet defined requirements. The Project Laydown Activity is performed to configure the block solution to a specific geographic area’s unique operational characteristics. This activity involves assessing the unique threats, terrain, and environmental concerns associated with a particular area, incorporating these needs into the system configuration to be deployed to that area, obtaining any needed environmental permits, and constructing the infrastructure and installing the configured system. It also involves test and evaluation activities, including system acceptance testing, to verify that the installed block system was built as designed. The final activity, Sustainment, is focused on the operations and maintenance of the deployed block solution and supporting the user community. Associated with each of these activities are various milestone or gate reviews. For example, a key review for the System Block Activity is the Critical Design Review (CDR). At this review, the block design and requirements are baselined and formally controlled to approve and track any changes. Among other things, this review is to verify that the block solution will meet the stated requirements within the program’s cost and schedule commitments. An important review conducted during the Project Laydown Activity is the Deployment Design Review. 
At this review, information such as the status of environmental reviews and land acquisitions for a specific geographic area is assessed, and the location-specific system configuration is determined. The Deployment Readiness Review is another key event during this activity. During this review, readiness to begin site preparation and construction is assessed. In addition to the four workflow activities described above are various key life cycle management processes, such as requirements development and management, risk management, and test management. Requirements development and management, among other things, involves defining and aligning a hierarchy of five types of SBInet requirements. These five types begin with high-level operational requirements and are followed by increasingly more detailed lower-level requirements, to include system, component, C3I/COP software, and design requirements. To help it manage the requirements, the SPO relies on Boeing’s use of a database known as the Dynamic Object-Oriented Requirements System (DOORS). The various types of SBInet requirements are described in table 2. Risk management entails taking proactive steps to identify and mitigate potential problems before they become actual problems. The SPO has defined a “risk” to be an uncertain event or condition that, if it occurs, will have a negative effect on at least one program objective, such as schedule, cost, scope, or technical performance. The SPO has defined an “issue” as a risk that has been realized (i.e., a negative event or condition that currently exists or has a 100 percent future certainty of occurring). According to SBInet’s risk management process, anyone involved in the program can identify a risk. Identified risks are submitted to the Risk Management Team, which includes both the SPO Risk Manager and the Boeing Risk Manager, for preliminary review.
If approved for further consideration, the risk is entered into the Boeing-owned risk database, which is accessible by SPO and Boeing officials. These risks are subsequently reviewed by the Joint Risk Review Board, which is composed of approximately 20 SPO and Boeing officials. If a risk is approved, it is to be assigned an owner who will be responsible for managing its mitigation. Test management involves planning, conducting, documenting, and reporting on a series of test events that first focus on the performance of individual system components, then on the performance of integrated system components, followed by system-level tests that focus on whether the system (or major system increments) is acceptable and operationally suitable. For SBInet, the program’s formal test events fall into two major phases: developmental test and evaluation (DT&E) and operational test and evaluation (OT&E). DT&E is to verify and validate the systems engineering process and provide confidence that the system design solution satisfies the desired capabilities. It consists of four test events—integration testing, component qualification testing, system qualification testing, and system acceptance testing. OT&E is to ensure that the system is effective and suitable in its operational environment with respect to key considerations, including reliability, availability, compatibility, and maintainability. SBInet defines three operational testing events—User Assessment, Operational Test, and Follow-on Operational Test and Evaluation. (See table 3 for each test event’s purpose, responsible parties, and location.) As of December 2009, the program was in the Project Laydown Activity.
Specifically, the SBInet CDR was completed in October 2008, and the Block 1 design has been configured and is being tested and readied for deployment to the Tucson Border Patrol Station (TUS-1), and then to the Ajo Border Patrol Station (AJO-1), both of which are located in the CBP’s Tucson Sector of the southwest border. More specifically, the Deployment Design Review covering both TUS-1 and AJO-1 was completed in June 2007, the TUS-1 Deployment Readiness Review was completed in April 2009, and the AJO-1 Deployment Readiness Review was completed in December 2009. Together, these two deployments are to cover 53 miles of the 1,989-mile-long southern border (see fig. 2). Once a deployed configuration has been accepted and is operational, the program will be in the Sustainment Activity. As of November 2009, program documentation showed that TUS-1 and AJO-1 were to be accepted in January and July 2010, respectively. However, the SBI Executive Director told us in December 2009 that these and other SBInet scheduled milestones are currently being re-evaluated. As of February 2010, TUS-1 and AJO-1 were proposed to be accepted in September 2010 and November 2010, respectively. However, this proposed schedule has yet to be approved by CBP. Since 2007, we have identified a range of management weaknesses and risks facing SBInet and we have made a number of recommendations to address them that DHS has largely agreed with and, to varying degrees, taken actions to address. For example, in February 2007, we reported that DHS had not fully defined activities, milestones, and costs for implementing the program; demonstrated how program activities would further the strategic goals and objectives of SBI; and reported on the costs incurred, activities, and progress made by the program in obtaining operational control of the border. 
Further, we reported that the program’s schedule contained a high level of concurrency among related tasks and activities, which introduced considerable risk. Accordingly, we recommended that DHS define explicit and measurable commitments relative to, among other things, program capabilities, schedules, and costs, and re-examine the level of concurrency in the schedule and adjust the acquisition strategy appropriately. We are currently reviewing DHS’s Fiscal Year 2010 SBI Expenditure Plan to, among other things, determine the status of DHS’s actions to address these recommendations. In October 2007, we testified that DHS had fallen behind in implementing Project 28 due to software integration problems, although program officials stated at that time that Boeing was making progress in correcting the problems. Shortly thereafter, we testified that while DHS had accepted Project 28, it did not fully meet expectations. To benefit from this experience, program officials stated that they identified a number of lessons learned, including the need to increase input from Border Patrol agents and other users in SBInet design and development. In September 2008, we reported that important aspects of SBInet were ambiguous and in a continued state of flux, making it unclear and uncertain what technological capabilities were to be delivered when. We concluded that the absence of clarity and stability in key aspects of SBInet impaired the ability of Congress to oversee the program and hold DHS accountable for results, and hampered DHS’s ability to measure program performance. As a result, we recommended that the SPO establish and baseline the specific program commitments, including the specific system functional and performance capabilities that are to be deployed, and when they were to be deployed. Also, we reported that the SPO had not effectively performed key requirements definition and management practices. 
For example, it had not ensured that different levels of requirements were properly aligned, as evidenced by our analysis of a random probability sample of component requirements showing that a large percentage of them could not be traced to higher-level system and operational requirements. Also, some of SBInet’s operational requirements, which are the basis for all lower-level requirements, were found by an independent DHS review to be unaffordable and unverifiable, thus casting doubt on the quality of lower-level requirements that were derived from them. As a result of these limitations, we concluded that the risk of SBInet not meeting mission needs and performing as intended was increased, as were the chances of the program needing expensive and time-consuming system rework. We recommended that the SPO implement key requirements development and management practices to include (1) baselining requirements before system design and development efforts begin; (2) analyzing requirements prior to being baselined to ensure that they are complete, achievable, and verifiable; and (3) tracing requirements to higher-level requirements, lower-level requirements, and test cases. We also reported that SBInet testing was not being effectively managed. For example, the SPO had not tested the individual system components to be deployed to the initial deployment locations, even though the contractor had initiated integration testing of these components with other system components and subsystems. Further, while a test management strategy was drafted, it had not been finalized and approved, and it did not contain, among other things, a clear definition of testing roles and responsibilities; a high-level master schedule of SBInet test activities; or sufficient detail to effectively guide project-specific test planning, such as milestones and metrics for specific project testing.
We concluded that without a structured and disciplined approach to testing, the risk that SBInet would not satisfy user needs and operational requirements, thus requiring system rework, was increased. We recommended that the SPO (1) develop and document test practices prior to the start of testing; (2) conduct appropriate component-level testing prior to integrating system components; and (3) approve a test management strategy that, at a minimum, includes a relevant testing schedule, establishes accountability for testing activities by clearly defining testing roles and responsibilities, and includes sufficient detail to allow for testing and oversight activities to be clearly understood and communicated to test stakeholders. In light of these weaknesses and risks, we further recommended that (1) the risks associated with planned SBInet acquisition, development, testing, and deployment activities be immediately assessed and (2) the results, including proposed alternative courses of action for mitigating the risks, be provided to the CBP Commissioner and DHS’s senior leadership, as well as to the department’s congressional authorization and appropriations committees. DHS agreed with all but one of the recommendations in our September 2008 report. The status of DHS’s efforts to implement these recommendations is summarized later in this report and discussed in detail in appendix III. In September 2009, we reported that SBInet had continued to experience delays. For example, deployment to the entire southwest border had slipped from early 2009 to 2016, and final acceptance of TUS-1 and AJO-1 had slipped from November 2009 and March 2010 to December 2009 and June 2010, respectively. We did not make additional SBInet recommendations at that time. Most recently, we reported in January 2010 that SBInet testing was not being effectively managed. 
Specifically, while DHS’s approach to testing appropriately consisted of a series of progressively expansive developmental and operational test events, the test plans, cases, and procedures for the most recent test events were not defined in accordance with important elements of relevant guidance. For example, none of the plans adequately described testing risks and only two of the plans included quality assurance procedures for making changes to test plans during their execution. Further, a relatively small percentage of test cases for these events described the test inputs and the test environment (e.g., facilities and personnel to be used), both of which are essential to effective testing. In addition, a large percentage of the test cases for these events were changed extemporaneously during execution. While some of the changes were minor, others were more significant, such as rewriting entire procedures and changing the mapping of requirements to test cases. Moreover, these changes to procedures were not made in accordance with documented quality assurance processes, but rather were based on an undocumented understanding that program officials said they established with the contractor. Compounding the number and significance of changes were questions raised by the SPO and a support contractor about the appropriateness of some changes. For example, the SPO wrote to the prime contractor that changes made to system qualification test cases and procedures appeared to be designed to pass the test instead of being designed to qualify the system. Further, we reported that from March 2008 through July 2009, about 1,300 SBInet defects had been found, with the number of new defects identified during this time generally increasing faster than the number being fixed—a trend that is not indicative of a system that is maturing and ready for deployment. 
While the full magnitude of these unresolved defects was unclear because the majority were not assigned a priority for resolution, some of the defects that had been found were significant. Although DHS reported that these defects had been resolved, they had nevertheless caused program delays, and related problems had surfaced that continued to impact the program’s schedule. Further, an early user assessment of SBInet had raised significant concerns about the performance of key system components and the system’s operational suitability. In light of these weaknesses, we recommended that DHS (1) revise the program’s overall test plan to include (a) explicit criteria for assessing the quality of test documentation, including test plans and test cases, and (b) a process for analyzing, prioritizing, and resolving program defects; (2) ensure that test schedules, plans, cases, and procedures are adequately reviewed and approved consistent with the revised test plan; (3) ensure that sufficient time is provided for reviewing and approving test documents prior to beginning a given test event; and (4) triage the full inventory of unresolved system problems, including identified user concerns, and periodically report on their status to CBP and DHS leadership. DHS fully agreed with the last three recommendations and partially agreed with the first. For Block 1, functional and performance capabilities and the number of geographic locations to which they are to be deployed have continued to decrease. We reported in September 2008 that the capabilities and deployment locations of SBInet were decreasing. Since that time, the number of component-level requirements to be deployed to TUS-1 and AJO-1 has decreased by about 32 percent. In addition, the number of sectors that the system is to be deployed to has been reduced from three to two, and the stringency of the system performance measures that the deployed system is to meet has been reduced. 
According to program officials, the decreases are due to poorly defined requirements and limitations in the capabilities of commercially available system components. The result will be a deployed and operational system that, like Project 28, does not live up to user expectations and provides less mission support than was envisioned. Since our September 2008 report, the number of requirements that Block 1 is to meet has dropped considerably. Specifically, in September 2008, DHS directed the SPO to identify the operational requirements to be allocated to Block 1. In response, 106 operational requirements were established, such as providing border surveillance, facilitating decision support and situational awareness, enabling communications, providing operational status and readiness metrics, and enabling system audits. Of the 106 requirements, 69 were to be included in the initial technology deployments planned for TUS-1 and AJO-1. The remaining 37 were to be addressed in future blocks. To implement the 69 operational requirements, the SPO developed a system-level requirement specification and 12 component-level requirements specifications. More specifically, as part of CDR, which concluded in October 2008, the 69 operational requirements for TUS-1 and AJO-1 were associated with 97 system-level requirements. Also during CDR, the 97 system-level requirements were associated with 1,286 component-level requirements. However, between October 2008 and September 2009, the number of component-level requirements was reduced from 1,286 to 880, or by about 32 percent. First, 281 requirements related to the specifications for three components—communications, network operations, and network security—were eliminated, leaving 1,005 baselined requirements. 
Examples of the 281 requirements that were eliminated include the following: the failure in a single piece of hardware or software would not affect mission critical functions which include detection and resolution of border incursions; the failure of a Network Operations Center/Security Operations Center (NOC/SOC) workstation would not prevent the system from operating; and the failure of one network power supply would be compensated for by additional backup power supplies. In addition, another 125 component-level requirements were granted “waivers” or “deviations,” further reducing the number of Block 1 requirements to be deployed to TUS-1 and AJO-1 to 880 (as of September 2009). For example, the unattended ground sensors were required to differentiate between human, vehicle, and animal targets. However, because the sensors that are to be deployed to TUS-1 and AJO-1 are only able to identify potential vehicles and are not able to differentiate between humans and animals, this requirement was deviated. Similarly, the radar was required to classify targets as humans or vehicles. However, the radar also cannot differentiate between classes of targets (e.g., humans and vehicles). As a result, the requirement in the radar specification was also deviated. Figure 3 summarizes the roughly 32 percent drop in requirements that has occurred over the last 15 months. According to program officials, component requirements were eliminated because they were either poorly written or duplicative of other requirements, or because the capabilities of commercially available products were limited. In addition, they attributed a significant number of eliminated requirements to a decision to not use a Boeing designed and developed network and instead to use an existing DHS network. To the SPO’s credit, this decision was made to align SBInet with DHS technical standards and to increase the use of COTS products. 
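The roughly 32 percent reduction described above can be tallied directly from the figures in the report; the short calculation below is purely illustrative and uses only numbers stated in the text:

```python
# Tally of the Block 1 component-level requirement reductions described
# above (all figures are taken from the report's narrative).
baselined_at_cdr = 1286       # component-level requirements at CDR, October 2008
eliminated = 281              # communications, network operations, network security
waivers_and_deviations = 125  # requirements granted "waivers" or "deviations"

remaining = baselined_at_cdr - eliminated - waivers_and_deviations
reduction_pct = (baselined_at_cdr - remaining) / baselined_at_cdr * 100

print(remaining)             # 880, the count as of September 2009
print(round(reduction_pct))  # 32
```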
Compounding this reduction in Block 1 requirements is the likelihood that further requirements deviations and waivers will be granted based on the results of an early user assessment of the system. According to the July 2009 assessment report, certain SBInet components did not meet requirements. For example: The daytime cameras were judged to be operationally ineffective over 5 kilometers for identifying humans, while the requirement is that the cameras be usable to 10 kilometers. The laser range finder was determined to have an effective range of less than 2 kilometers, while the requirement is for the effective range to be 10 kilometers. Program officials told us that many of the limitations found during the user assessment were previously known, and corrective actions were already under way or planned for future technology upgrades to address them. However, the officials also stated they plan to issue a waiver or deviation for the camera and the laser range finder to address the two problems discussed above. In addition, they stated that a previously known limitation of the range of the radar will also need to be addressed through a deviation. In this case, the radar is required to have a range of 20 kilometers, but testing shows a maximum range of 10 kilometers. Beyond the requirement reductions, the geographic locations to receive the initial SBInet capabilities have also been reduced. As of September 2008, the initial Block 1 deployment was to span three border patrol sectors: Tucson, Yuma, and El Paso—a total of 655 miles. According to program officials, deployment to these three areas was the expressed priority of the Border Patrol due to the high threat levels in these areas. However, the Acquisition Program Baseline, which was drafted in December 2008, states that initial deployment will be to just the Tucson and Yuma Sectors, which will cover only 387 miles. 
According to program officials, deployment to the 268 miles of the El Paso Sector was dropped from the initial deployment in anticipation that the sector will instead receive the capabilities slated for the next SBInet increment (i.e., build). However, plans for the next increment have not been developed. According to the SBI Executive Director in December 2009, the SPO is re-evaluating where and when future deployments of SBInet will occur, and a date for when the revised deployment plans will be available has not been set. System performance measures define how well a system is to perform certain functions, and thus are important in ensuring that the system meets mission and user needs. According to program documentation, failure to meet a key performance parameter can limit the value of the system and render it unsuccessful. In November 2008, the SPO re-evaluated its existing SBInet key performance parameters and determined that SBInet must meet three such parameters: (1) the probability of detecting items of interest between the border and the control boundary; (2) the probability of correctly identifying items of interest as human, conveyance, or others; and (3) the operational availability of the system. According to program officials, subject matter experts and CBP staff concluded that these three were critical to determining whether the system successfully meets mission and user needs. Associated with each parameter is a threshold for acceptable performance. In November 2008, the SPO re-evaluated the thresholds for its three key performance parameters, and it significantly relaxed each of the thresholds: The threshold for detecting items of interest dropped from 95 percent to 70 percent. The threshold for identifying items of interest declined from 95 percent to 70 percent. The threshold for operational availability decreased from 95 to 85 percent. These threshold reductions significantly lower what constitutes acceptable system performance. 
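The combined effect of the relaxed thresholds can be verified with simple arithmetic; the sketch below just reproduces the figures discussed in the report:

```python
# End-to-end probability of identifying an item of interest that crosses
# the border, under the relaxed 70 percent thresholds.
p_detect = 0.70
p_identify = 0.70
print(round(p_detect * p_identify, 2))  # 0.49 -- a 49 percent probability

# Annual downtime implied by an operational availability threshold,
# excluding downtime for planned maintenance.
def downtime_days_per_year(availability):
    return round(365 * (1 - availability), 2)

print(downtime_days_per_year(0.95))  # 18.25 days (about 2.5 weeks)
print(downtime_days_per_year(0.85))  # 54.75 days (about 7 weeks)
```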
For example, the system will meet its detection and identification performance requirements if it identifies 70 percent of the 70 percent of items that it detects, thus producing a 49 percent probability of identifying items of interest that cross the border. Furthermore, the reduction in operational availability means that the time that the system can be unavailable for use has gone from 18.25 days per year to 54.75 days per year—or from approximately 2.5 weeks to about 7 weeks per year, excluding downtime for planned maintenance. The SBI Executive Director attributed the performance reductions to program officials’ limited understanding of needed operational capabilities at the time the parameters and thresholds were set. The director further stated that once Block 1 has been deployed and Border Patrol personnel gain experience operating it, decisions will be made as to what additional changes to make to the key performance parameters and associated thresholds. Until then, system performance relative to identifying items of interest and operational availability will remain as described above, which program officials agreed fall short of expectations. The success of a large-scale system acquisition program like SBInet depends in part on having a reliable schedule of when the program’s set of work activities and milestone events will occur, how long they will take, and how they are related to one another. Among other things, a reliable schedule provides a road map for systematic execution of a program and the means by which to gauge progress, identify and address potential problems, and promote accountability. Our research has identified nine best practices associated with developing and maintaining a reliable schedule. 
These are (1) capturing all activities, (2) sequencing all activities, (3) assigning resources to all activities, (4) establishing the duration of all activities, (5) integrating activities horizontally and vertically, (6) establishing the critical path for all activities, (7) identifying reasonable float between activities, (8) conducting a schedule risk analysis, and (9) updating the schedule using logic and durations. To be considered reliable, a schedule should meet all nine practices. The August 2009 SBInet integrated master schedule, which was the most current version available for our review, is not reliable because it substantially complies with only two of the nine key schedule estimating practices and it does not comply with, or only partially or minimally complies with, the remaining seven practices (see table 4 for a summary and app. IV for the detailed results of our analysis of the extent to which the schedule meets each of the nine practices). Examples of practices that were either substantially, partially, minimally, or not met are provided below. Without having a reliable schedule, it is unlikely that actual program execution will track to plans, thus increasing the risk of cost, schedule, and performance shortfalls. Capturing all activities: The schedule does not capture all activities as defined in the program’s work breakdown structure or integrated master plan. First, 57 percent of the activities listed in the work breakdown structure (71 of 125) and 67 percent of the activities listed in the integrated master plan (46 of 69) were not in the integrated master schedule. For example, the schedule is missing efforts associated with systems engineering, sensor towers, logistics, system test and evaluation, operations support, and program management. Second, the schedule does not include key activities to be performed by the government. 
For example, while the schedule shows the final activity in the government process for obtaining an environmental permit in order to construct towers, it does not include the related government activities needed to obtain the permit. Sequencing all activities: The schedule identifies virtually all of the predecessor and successor activities. Specifically, only 9 of 1,512 activities (less than 1 percent) were missing predecessor links. Further, only 21 of 1,512 activities (about 1 percent) had improper predecessor and successor links. While the number of unlinked activities is very small, not linking a given activity can cause problems because changes to the durations of these activities will not accurately change the dates for related activities. More importantly, 403 of 1,512 activities (about 27 percent) are constrained by “start no earlier than” dates, which is significant because it means that these activities are not allowed to start earlier, even if their respective predecessor activities have been completed. Establishing the critical path for all activities: The schedule does not reflect a valid critical path for several reasons. First, and as noted above, it is missing government and contractor activities, and is thus not complete. Second, as mentioned above, the schedule is missing some predecessor links, and improperly establishes other predecessor and successor links. Problems with the critical path were recognized by the Defense Contract Management Agency as early as November 2008, when it reported that the contractor could not develop a true critical path that incorporates all program elements. Conducting a schedule risk analysis: An analysis of the schedule’s vulnerability to slippages in the completion of tasks has not been performed. Further, program officials described the schedule as not sufficiently stable to benefit from a risk analysis. 
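How a missing predecessor link undermines a critical-path calculation can be illustrated with a toy schedule; the activity names and durations below are hypothetical, not SBInet data:

```python
# Toy forward-pass schedule calculation. Each activity maps to
# (duration_in_days, list_of_predecessors). All data is hypothetical.
activities = {
    "design": (10, []),
    "build":  (20, ["design"]),
    "test":   (15, ["build"]),
    "deploy": (5,  ["test"]),
}

def earliest_finish(name, memo=None):
    """Earliest finish day, assuming every predecessor must complete first."""
    if memo is None:
        memo = {}
    if name not in memo:
        duration, predecessors = activities[name]
        start = max((earliest_finish(p, memo) for p in predecessors), default=0)
        memo[name] = start + duration
    return memo[name]

print(earliest_finish("deploy"))  # 50 -- the true path length

# Simulate a missing predecessor link: "test" loses its dependency on
# "build", and the computed completion date silently shortens.
activities["test"] = (15, [])
print(earliest_finish("deploy"))  # 20 -- no longer reflects reality
```

A schedule with unlinked activities, or with large numbers of "start no earlier than" constraints, fails in exactly this way: downstream dates stop responding to changes in upstream work, so the critical path cannot be trusted.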
The reasons these practices were not fully met vary. They include the constantly changing nature of the work to be performed and the program’s reliance on Boeing to develop and maintain the integrated master schedule, even though Boeing’s processes and tools do not allow it to include in the schedule work that Boeing does not have under contract to perform. Without a reliable schedule that includes all activities necessary to complete Block 1, the SPO cannot accurately determine the amount of time required to complete Block 1, and it does not have an adequate basis for guiding the program’s execution and measuring progress, thus reducing the likelihood of meeting the program’s completion dates. Collectively, the weaknesses in meeting the nine key practices for the program’s integrated master schedule increase the risk of schedule slippages and related cost overruns and make meaningful measurement and oversight of program status and progress, as well as accountability for results, difficult to achieve. In the case of Block 1, this risk has continued to be realized. For example, the dates presented at the December 2008 to November 2009 monthly program review meetings for government acceptance of Block 1 at TUS-1 and AJO-1 showed a pattern of delays, with TUS-1 and AJO-1 acceptance slipping by 4 months and 7 months, respectively. (See fig. 4.) Moreover, these slipped dates have not been met, and the SBI Executive Director told us in December 2009 that when Block 1 will be accepted and operational continues to change and remains uncertain. As of February 2010, TUS-1 and AJO-1 were proposed to be accepted in September 2010 and November 2010, respectively; however, this proposed schedule has yet to be approved by CBP. As we have previously reported, the decision to invest in any system or major system increment should be based on reliable estimates of costs and meaningful forecasts of quantifiable and qualitative benefits over the system’s useful life. 
For Block 1, DHS does not have a complete and current life cycle cost estimate. Moreover, it has not projected the mission benefits expected to accrue from Block 1 over the same life cycle. According to program officials, it is premature to project such benefits given the uncertainties surrounding the role that Block 1 will ultimately play in overall border control operations. Without a meaningful understanding of SBInet costs and benefits, DHS lacks an adequate basis for knowing whether the initial system solution on which it plans to spend at least $1.3 billion is cost-effective. Moreover, DHS and congressional decision makers continue to lack a basis for deciding what investment in SBInet beyond this initial capability is economically prudent. A reliable cost estimate is critical to successfully delivering large-scale information technology (IT) systems, like SBInet, as well as major system increments, like Block 1. Such an estimate provides the basis for informed investment decision making, realistic budget formulation, meaningful progress measurement, and accountability for results. According to the Office of Management and Budget (OMB), federal agencies must maintain current and well-documented estimates of program costs, and these estimates must encompass the program’s full life cycle. Among other things, OMB states that a reliable life cycle cost estimate is critical to the capital planning and investment control process. Without such an estimate, agencies are at increased risk of making poorly informed investment decisions and securing insufficient resources to effectively execute defined program plans and schedules, and thus experiencing program cost, schedule, and performance shortfalls. Our research has identified a number of practices that form the basis of effective program cost estimating. These practices are aligned with four characteristics of a reliable cost estimate. 
To be reliable, a cost estimate should possess all four characteristics, each of which is summarized below. (See app. V for the key practices associated with each characteristic, including a description of each practice and our analysis of the extent to which the SBInet cost estimate meets each practice.) Comprehensive: The cost estimate should include all government and contractor costs over the program’s full life cycle, from program inception through design, development, deployment, and operation and maintenance to retirement. It should also provide sufficient detail to ensure that cost elements are neither omitted nor double counted, and it should document all cost-influencing ground rules and assumptions. Well-documented: The cost estimate should capture in writing things such as the source and significance of the data used, the calculations performed and their results, and the rationale for choosing a particular estimating method or reference. Moreover, this information should be captured in such a way that the data used to derive the estimate can be traced back to, and verified against, their sources. Finally, the cost estimate should be reviewed and accepted by management to demonstrate confidence in the estimating process and the estimate. Accurate: The cost estimate should not be overly conservative or optimistic, and should be, among other things, based on an assessment of most likely costs, adjusted properly for inflation, and validated against an independent cost estimate. In addition, the estimate should be updated regularly to reflect material changes in the program and actual cost experience on the program. Further, steps should be taken to minimize mathematical mistakes and their significance and to ground the estimate in documented assumptions and a historical record of actual cost and schedule experiences on comparable programs. 
Credible: The cost estimate should discuss any limitations in the analysis due to uncertainty or biases surrounding the data and assumptions. Major assumptions should be varied and other outcomes computed to determine how sensitive the estimate is to changes in the assumptions. Risk and uncertainty inherent in the estimate should be assessed and disclosed. Further, the estimate should be properly verified by, for example, comparing the results with one or more independent cost estimates. The SPO’s Block 1 life cycle cost estimate includes the costs to complete those portions of Block 1 that are to be deployed to the Tucson and Yuma Sectors, which together cover about 387 miles of the southwest border (53 miles associated with both TUS-1 and AJO-1, which are in the Tucson Sector, as well as an additional 209 miles in the Tucson Sector and 125 miles in the Yuma Sector). More specifically, this estimate, which is dated December 2008, shows the minimum cost to acquire and deploy Block 1 to the Tucson and Yuma Sectors to be $758 million, with another $544 million to operate and maintain this initial capability, for a total of about $1.3 billion. However, this Block 1 cost estimate is not reliable because it does not sufficiently possess any of the above four characteristics. Specifically: The estimate is not comprehensive because it does not include all relevant costs, such as support contractor costs and costs associated with system and software design, development, and testing activities that were incurred prior to December 2008. Moreover, it includes only 1 year of operations and maintenance costs rather than these costs over the expected life of the system. Further, the estimate does not document and assess the risks associated with all ground rules and assumptions, such as known budget constraints, staff and schedule variations, and technology maturity. 
The estimate is not well-documented because, among other things, the sources and significance of key data have not been captured and the quality of key data, such as historical costs and actual cost reports, is limited. For example, instead of identifying and relying on historical costs from similar programs, the estimate was based, in part, on engineering judgment. Further, the calculations performed and their results, while largely documented, did not document contingency reserves and the associated confidence level for the risk-adjusted cost estimate. Also, as noted above, assumptions integral to the estimate, such as those for budget constraints, and staff and schedule variances, were not documented. The estimate is not accurate because it was not, for example, validated against an independent cost estimate. Further, it has not been updated to reflect material program changes since the estimate was developed. For example, the estimate does not reflect development and testing activities that were added since the estimate was approved to correct problems discovered during testing. Further, the estimate has not been updated with actual cost data available from the contractor. The estimate is not credible because its inherent risk and uncertainty were not adequately assessed, and thus the estimate does not address limitations associated with the assumptions used to create it. For example, the risks associated with software development were not examined, even though such risks were known to exist. In fact, the only risks considered were those associated with uncertainty in labor rates and hardware costs, and instead of being based on historical quantitative analyses, these risks were expressed by assigning them arbitrary positive or negative percentages. In addition, and for the reasons mentioned above, the estimate did not specify contingency reserve amounts to mitigate known risks, and an independent cost estimate was not used to verify the estimate. 
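The kind of sensitivity analysis that the credibility characteristic calls for can be sketched simply. Everything below is a hypothetical illustration: the 10-year operations period and the growth rates are assumptions, not program data; only the $758 million acquisition and $544 million operations and maintenance totals come from the December 2008 estimate discussed above.

```python
# Hypothetical sensitivity check on a life cycle cost estimate. Only the
# acquisition and O&M totals come from the December 2008 estimate; the
# 10-year period and the O&M growth rates are illustrative assumptions.
ACQUISITION = 758e6      # dollars, to acquire and deploy Block 1
ANNUAL_OM = 544e6 / 10   # $544M spread evenly over an assumed 10 years

def life_cycle_cost(om_growth_rate, years=10):
    """Acquisition cost plus O&M that grows by a fixed rate each year."""
    total, om = ACQUISITION, ANNUAL_OM
    for _ in range(years):
        total += om
        om *= 1 + om_growth_rate
    return total

# Varying one assumption at a time shows how sensitive the roughly
# $1.3 billion total is to the growth rate chosen.
for rate in (0.00, 0.03, 0.06):
    print(f"{rate:.0%} O&M growth: ${life_cycle_cost(rate) / 1e9:.2f} billion")
```

An estimate documented this way discloses which assumptions drive the total, which is precisely what the Block 1 estimate did not do.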
Program officials attributed these limitations in the cost estimate’s comprehensiveness, documentation, accuracy, and credibility to a range of factors, including competing program office priorities and the department’s limited cost estimating capabilities. For example, program officials stated that the DHS Cost Analysis Division did not prepare an independent estimate because it did not have, among other things, the people and tools needed to do so. In this regard, this division reports that as of July 2009, DHS only had eight cost estimators (six in headquarters and two in program offices) for departmentwide needs. Because the estimate does not adequately display these four characteristics, it does not provide a reliable picture of Block 1’s life cycle costs. As a result, DHS does not have complete information on which to base informed investment decision making, understand system affordability, and develop justifiable budget requests. Moreover, the Block 1 cost estimate does not provide a meaningful standard against which to measure cost performance, is likely to show large cost overruns, and does not provide a good basis for informing future cost estimates. The Clinger-Cohen Act of 1996 and OMB guidance emphasize the need to ensure that IT investments actually produce tangible, observable improvements in mission performance. As we have previously reported, to accomplish this, benefits that are expected to accrue from investments need to be forecast and their actual accrual needs to be measured. In the case of Block 1, however, expected mission benefits have not been defined and measured. For example, while program officials told us that system benefits are documented in the SBInet Mission Need Statement dated October 2006, this document does not include either quantifiable or qualitative benefits. 
Rather, it provides general statements such as “the lack of a program such as SBInet increases the risks of terrorist threats and other illegal activities.” Congress recognized the importance of having a meaningful understanding of SBInet’s value proposition when it required DHS in 2008 to provide in its Border Security, Fencing, Infrastructure, and Technology Fiscal Year 2009 Expenditure Plan a description of how the department’s planned expenditure of funds would be linked to expected SBI mission benefits and outcomes. However, we reported that the plan DHS submitted only described links among planned activities, expenditures, and outputs. It did not link these to outcomes associated with improving operational control of the border. More recently, we reported that while SBI technology and physical infrastructure, along with increases in Border Patrol personnel, are intended to allow DHS to gain effective control of U.S. borders, CBP’s measures of effective control are limited. Thus, we recommended that CBP conduct a cost-effectiveness evaluation of the SBI tactical infrastructure’s impact on effective control of the border, and DHS agreed with this recommendation. Further, program officials noted that uncertainty about SBInet’s role in and contribution to effective control of the border makes it difficult to forecast SBInet benefits. Rather, they said that operational experience with Block 1 is first needed in order to estimate such benefits. While we recognize the value of operationally evaluating an early, prototypical version of a system in order to better understand, among other things, its mission impact, and thus to better inform investment decisions, we question the basis for spending in excess of a billion dollars to gain this operational experience. 
Without a meaningful understanding and disclosure of SBInet benefits, to include the extent to which expected mission benefits are known and unknown, DHS did not have the necessary basis for justifying and making informed decisions about its sizeable investment in Block 1, as well as for measuring the extent to which the deployed Block 1 will actually deliver mission value commensurate with costs. Successful management of large IT programs, like SBInet, depends in large part on having clearly defined and consistently applied life cycle management processes. Our evaluations and research show that applying system life cycle management rigor and discipline increases the likelihood of delivering expected capabilities on time and within budget. In other words, the quality of a system is greatly influenced by the quality of the processes used to manage it. To the SPO’s credit, it has defined key life cycle management processes that are largely consistent with relevant guidance and associated best practices. However, it has not effectively implemented these processes. Specifically, it has not consistently followed its systems engineering plan, requirements development and management plan, and risk management approach. Reasons cited by program officials for not implementing these processes include the decision by program officials to rely on contract task order requirements that were developed prior to the systems engineering plan, and competing SPO priorities, including meeting an aggressive deployment schedule. Until the SPO consistently implements these processes, it will remain challenged in its ability to successfully deliver SBInet. Each of the steps in a life cycle management approach serves an important purpose and has inherent dependencies with one or more other steps. In addition, the steps used in the approach should be clearly defined and repeatable. 
Thus, if a life cycle management step is omitted or not performed effectively, later steps can be affected, potentially resulting in costly and time-consuming rework. For example, a system can be effectively tested to determine whether it meets requirements only if these requirements have been completely and correctly defined. To the extent that interdependent life cycle management steps or activities are not effectively performed, or are performed concurrently, a program will be at increased risk of cost, schedule, and performance shortfalls. The SPO's Systems Engineering Plan documents its life cycle management approach for SBInet definition, development, testing, deployment, and sustainment. As noted earlier, we reported in September 2008 on a number of weaknesses in the SBInet life cycle management approach and made recommendations to improve it. In response, the SPO revised its Systems Engineering Plan in November 2008, and to its credit, the revised plan is largely consistent with DHS and other relevant guidance. For example, it defines a number of key life cycle milestone or "gate" reviews that are important in managing the program, such as initial planning reviews, requirements reviews, system design reviews, and test reviews. In addition, the revised plan requires most of the key artifacts and program documents that DHS guidance identified as important to each gate review, such as a concept of operations, an operational requirements document, a deployment plan, a risk management plan, a life cycle cost estimate, requirements documentation, and test plans. To illustrate, the plan identifies CDR as the important milestone event where a design baseline is to be established, requirements traceability is to be demonstrated, and verification and testing plans are to be in place. However, the Systems Engineering Plan does not address the content of the key artifacts that it requires.
For example, it does not provide a sample document or content template for the concept of operations, the operational requirements document, or the deployment plan. As a result, the likelihood of the developers and reviewers of these artifacts sharing and applying a consistent and repeatable understanding of their content is minimized, thus increasing the risk that they will require costly and time-consuming rework. As we recently reported, the absence of content guidance or criteria for assessing the quality of the prime contractor's test-related deliverables was a primary reason that limitations were found in test plans. Beyond the content of the Systems Engineering Plan, the SPO has not consistently implemented key system life cycle management activities for Block 1 that are identified by the plan. For example, the following artifacts were not reviewed or considered during the CDR that concluded in October 2008:

- Security Test Plan, which describes the process for assessing the robustness of the system's security capabilities (e.g., physical facilities, hardware, software, and communications) in light of their vulnerabilities.
- Quality Plan, which documents the process for verifying that the contractor deliverables satisfy contractual requirements and meet or exceed quality standards.
- Test Plan, which describes the overall process for the test and evaluation, including the development of detailed test event plans, test procedure instructions, data collection methods, and evaluation reports.
- Block Training Plan, which outlines the objectives, strategy, and curriculum for training that are specific to each block, including the activities needed to support the development of training materials, coordination of training schedules, and reservation of personnel and facilities.
- Block Maintenance Plan, which lays out the policies and concepts to be used to maintain the operational availability of hardware and software.
To the SPO’s credit, it reviewed and considered all but one of the key artifacts for the TUS-1 Deployment Readiness Review that concluded in April 2009. The omitted artifact was the Site Specific Training Plan, which outlines the objectives, strategy, and curriculum for training that are specific to each geographic site, including the activities needed to support the development of training materials, coordination of training schedules, and reservation of personnel and facilities. According to program officials, even though the Systems Engineering Plan cites the training plan as integral to the Deployment Readiness Review, this training plan is to be reviewed as part of a later milestone review. Program officials stated that a reason that the artifacts were omitted is that they have yet to begin implementing the Systems Engineering Plan. Instead, they have, for example, enforced the CDR requirements in the System Task Order that Boeing was contractually required to follow. To address this, they added that the SPO intends to bring the task orders into alignment with the Systems Engineering Plan, but they did not specify when this would occur. As a result, key milestone reviews and decisions have not always benefited from life cycle management documentation that the SPO has determined to be relevant and important to these milestone events. More specifically, the Systems Engineering Plan states that the gate reviews are intended to identify and address problems early and thus minimize future costs and avoid subsequent operational issues. By not fully informing these gate reviews and associated decisions with key life cycle management documentation, the risk of Block 1 design and deployment problems is increased, as is the likelihood of expensive and time-consuming system rework. Well-defined and managed requirements are essential to successfully acquiring large-scale systems, like SBInet. 
According to relevant guidance, effective requirements development and management includes establishing a baseline set of requirements that are complete, unambiguous, and testable. It also includes ensuring that system-level requirements are traceable backwards to higher-level operational requirements and forward to design requirements and the methods used to verify that they are met. Among other things, this guidance states that such traceability should be used to verify that higher-level requirements have been met by first verifying that the corresponding lower-level requirements have been satisfied. However, not all Block 1 component requirements were sufficiently defined at the time that they were baselined, and operational requirements continue to be unclear and unverifiable. In addition, while requirements are now largely traceable backwards to operational requirements and forward to design requirements and verification methods, this traceability has not been used until recently to verify that higher-level requirements have been satisfied. Program officials attributed these limitations to competing SPO priorities, including aggressive schedule demands. Without ensuring that requirements are adequately defined and managed, the risks of Block 1 not performing as intended, not meeting user needs, and costing more and taking longer than necessary to complete are increased. The SBInet Requirements Development and Management Plan states that a baseline set of requirements should be established by the time of the CDR and that these requirements should be complete, unambiguous, and testable. Further, the program's Systems Engineering Plan states that the CDR is intended to establish the final allocated requirements baseline and ensure that system development, integration, and testing can begin. To the SPO's credit, it established a baseline set of requirements for the TUS-1 and AJO-1 system deployments at CDR.
However, the baseline requirements associated with the NOC/SOC were not adequately defined at this time, as evidenced by the fact that they were significantly changed 2 months later. Specifically, about 33 percent of the component-level requirements and 43 percent of the design specifications for NOC/SOC were eliminated from the Block 1 design after CDR. Program officials attributed these changes to the NOC/SOC requirements to (1) requirements that were duplicative of another specification, and thus were redundant; (2) requirements that were poorly written, and thus did not accurately describe needs; and (3) requirements that related to the security of a system that SBInet would not interface with, and thus were unnecessary. According to program officials, the NOC/SOC was a late addition to the program, and at the time of CDR, the component's requirements were known to need additional work. Further, they stated that while the requirements were not adequately baselined at the time of CDR, the interface requirements were understood well enough to begin system development. Without properly baselined requirements, system testing challenges are likely to occur, and the risk of system performance shortfalls, and thus of cost and schedule problems, is increased. In this regard, we recently reported that NOC/SOC testing was hampered by incorrect mapping of requirements to test cases, failure to test all of the requirements, and significant changes to test cases made during the testing events. This occurred in part because ambiguities in requirements caused testers to rewrite test steps during execution based on their interpretations of what the requirements meant, and required the SPO to conduct multiple events to test NOC/SOC requirements. According to the SBInet Requirements Development and Management Plan, requirements should be achievable, verifiable, unambiguous, and complete.
To ensure this, the plan contains a checklist that is to be used in verifying that each requirement possesses these characteristics. However, not all of the SBInet operational requirements that pertain to Block 1 possess these characteristics. Specifically, a November 2007 DHS assessment determined that 19 operational requirements, which form the basis for the lower-level requirements used to design and build the system, were not complete, achievable, verifiable, or affordable. Further, our analysis of the 12 Block 1 requirements that are included in these 19 operational requirements shows that they have not been changed to respond to the DHS findings. According to the assessment, 6 of the 12 were unaffordable and unverifiable, and the other 6 were incomplete. Examples of these requirements and DHS's assessment follow:

- A requirement that the system should provide for complete coverage of the border was determined to be unverifiable and unaffordable because defining what complete coverage meant was too difficult and ensuring complete coverage, given the varied and difficult terrain along the border, was cost prohibitive.
- A requirement that the system should be able to detect and identify multiple simultaneous events with different individuals or groups was determined to be incomplete because the requirement did not specify the number of events to be included, the scope of the area to be covered, and the system components to be involved.

As we have previously reported, these limitations in the operational requirements affect the quality of system, component, and software requirements. This is significant because, as of September 2009, these 12 operational requirements were associated with 16 system-level requirements, which were associated with 152 component-level requirements, or approximately 15 percent of the total number of component-level requirements.
According to program officials, these requirements were not updated because the SPO planned to resolve the problems through the testing process. However, we recently reported that requirements limitations actually contributed to testing challenges. Specifically, we reported that about 71 percent of combined system qualification and component qualification test cases had to be rewritten extemporaneously during test execution. According to program officials, this was partly due to ambiguities in requirements, which led to differing opinions among the program and contractor staff about what was required to effectively demonstrate that the requirements were met. Further, program officials stated that a number of requirements have been granted deviations or waivers because they were poorly written. For example:

- A requirement for camera equipment to "conform to the capabilities and limitations of the users to operate and maintain it in its operational environment and not exceed user capabilities" was determined to be subjective and unquantifiable and thus was waived.
- A requirement for the tower design to accommodate the future integration of components "without causing impact on cost, schedule, and/or technical performance" was determined to have no specific criteria to objectively demonstrate a closure decision and thus was also waived.

As a result of these deviations and waivers, the system capabilities that are to be delivered as part of Block 1 will be less than originally envisioned. Consistent with relevant guidance, the SBInet Requirements Development and Management Plan provides for maintaining bidirectional traceability from high-level operational requirements through detailed low-level requirements to test plans. More specifically, it states that operational requirements should trace to system requirements, which in turn should trace to component requirements that trace to design requirements, which further trace to verification methods.
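The bidirectional traceability the plan calls for (operational to system to component to design to verification) amounts to a linked chain that can be checked mechanically. The sketch below uses hypothetical requirement IDs and links, not actual SBInet DOORS data, to show one way such a check might work: walking each component requirement backward to an operational requirement and forward to a verification method, and flagging any break in the chain.

```python
# Hypothetical traceability records: each requirement lists its parent
# (one level up) and, for component-level items, a verification method.
# All IDs and links are illustrative only, not actual SBInet requirements.
requirements = {
    "OPS-001": {"level": "operational", "parent": None},
    "SYS-010": {"level": "system", "parent": "OPS-001"},
    "CMP-100": {"level": "component", "parent": "SYS-010", "verify": "test"},
    "CMP-101": {"level": "component", "parent": "SYS-010", "verify": None},
    "CMP-102": {"level": "component", "parent": None, "verify": "demo"},
}

def trace_gaps(reqs):
    """Return component requirements that fail backward or forward trace."""
    gaps = []
    for rid, rec in reqs.items():
        if rec["level"] != "component":
            continue
        # Backward trace: follow parent links until they run out.
        cur = rec
        while cur["parent"] is not None:
            cur = reqs[cur["parent"]]
        if cur["level"] != "operational":
            gaps.append((rid, "no trace to operational requirement"))
        # Forward trace: a verification method must be assigned.
        if not rec.get("verify"):
            gaps.append((rid, "no verification method"))
    return gaps

# CMP-101 lacks a verification method; CMP-102's backward trace
# dead-ends because it has no parent requirement.
print(trace_gaps(requirements))
```

In a sample-based review like the one described next, the proportion of flagged items in a random sample is what drives the reported estimates and confidence intervals.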
Since September 2008, the SPO has worked with Boeing to manually review each requirement and develop a bidirectional traceability matrix. Further, it has used this matrix to update the DOORS requirements database. Our analysis of the traceability of a random sample of Block 1 component-level requirements in the DOORS database shows that they are largely traceable backwards to operational requirements and forward to design requirements and verification methods. For example, we estimate that only 5 percent (with a 95 percent confidence interval between 1 and 14 percent) of a random sample of component requirements cannot be traced to the system requirements and then to the operational requirements. In addition, we estimate that 0 percent (with a 95 percent confidence interval between 0 and 5 percent) of the component requirements in the same sample do not trace to a verification method. (See table 5 for the results of our analysis along with the associated confidence intervals.) By establishing this traceability, the SPO is better positioned to know the extent to which the acquired and deployed system can meet operational requirements. However, the SPO has not used its requirements traceability in closing higher-level component requirements. According to relevant guidance, all lower-level requirements (i.e., children) should be closed in order to sufficiently demonstrate that the higher-level requirements (i.e., parents) have been met. Consistent with this guidance, the SBInet Requirements Development and Management Plan states that ensuring the traceability of requirements from children to their parents is an integral part of ensuring that testing is properly planned and conducted. However, 4 of 8 higher-level component requirements (parents) in the above-cited random sample of system-level requirements were closed regardless of whether their corresponding lower-level design requirements (children) had been closed.
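The closure rule at issue, that a parent requirement should be closed only after every one of its child requirements is closed, can be expressed as a simple propagation check over the traceability data. The sketch below uses hypothetical requirement IDs and statuses, not actual SBInet data, to flag parents closed while children remain open.

```python
# Hypothetical requirement tree: parent -> list of child requirement IDs,
# plus a closure status for each requirement. Illustrative data only.
children = {
    "SYS-010": ["CMP-100", "CMP-101"],
    "SYS-011": ["CMP-102"],
}
status = {
    "SYS-010": "closed",
    "SYS-011": "closed",
    "CMP-100": "closed",
    "CMP-101": "open",    # still open, so SYS-010 should not be closed
    "CMP-102": "closed",
}

def premature_closures(children, status):
    """Return parents marked closed while at least one child is open."""
    return [
        parent
        for parent, kids in children.items()
        if status[parent] == "closed"
        and any(status[k] != "closed" for k in kids)
    ]

print(premature_closures(children, status))  # ['SYS-010']
```

A check of this kind run against the requirements database would have surfaced the parent requirements that were closed ahead of their children.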
According to program officials, this is because their standard practice in closing parent requirements, until recently, was to sometimes close them before their children were closed. Further, they said that this was consistent with their verification criteria for closing higher-level requirements, which did not require closure of the corresponding lower-level requirements. They also said that the reason parent verification criteria did not always reflect children verification criteria was that traceability was still being established when the verification criteria were developed, and thus parent-child relationships were not always available to inform the closure criteria. Furthermore, they stated that schedule demands did not permit them to ensure that the verification criteria for requirements were aligned with the traceability information. After we shared our findings on parent requirement closure with the SPO, officials stated that they had changed their approach and will no longer close parent requirements without ensuring that all of the children requirements have first been closed. However, they did not commit to reviewing previously closed parents to determine that all of the children were closed. Without fully ensuring traceability between requirements and their verification methods, the risks of delivering a system solution that does not fully meet user needs or perform as intended, and thus requires additional time and resources to deliver, are increased. Risk management is a continuous, forward-looking process that effectively anticipates and mitigates risks that may have a critical impact on a program's success. In 2008, the SPO documented a risk management approach that largely complies with relevant guidance. However, it has not effectively implemented this approach for all risks.
Moreover, available documentation does not demonstrate that significant risks were disclosed to DHS and congressional decision makers in a timely fashion, as we previously recommended, and while risk disclosure to DHS leadership has recently improved, not all risks have been formally captured and thus shared. As a result, the program will likely continue to experience actual cost, schedule, and performance shortfalls, and key decision makers will continue to be less than fully informed. According to relevant guidance, effective risk management includes defining a process that, among other things, proactively identifies and analyzes risks on the basis of likelihood of occurrence and impact, assigns ownership, provides for mitigation, and monitors status. To the SPO's credit, it has developed an approach for risk management that is largely consistent with this guidance. For example, the approach provides for:

- continuously identifying risks throughout the program's life cycle before they develop into actual problems, including suggested methods for doing so, such as conducting brainstorming sessions and interviewing subject matter experts;
- analyzing identified risks to determine their likelihood of occurring and assigning responsibility for risks;
- developing a risk mitigation plan, to include a set of discrete, measurable actions or events which, if successfully accomplished, can avoid or reduce the likelihood of occurrence or severity of impact of the risk; and
- executing and regularly monitoring risk mitigation plans to ensure that they are implemented and to allow for corrective actions if the desired results are not being achieved.

In February 2007, we reported that the program's risk management approach was in the process of being established. Specifically, we noted that at that time the SPO had drafted a risk management plan, established a governance structure, developed a risk management database, and identified 30 risks.
In April 2009, we reported that the DHS Chief Information Officer had certified that this approach provided for the regular identification, evaluation, mitigation, and monitoring of risks throughout the system life cycle, and that it provided for communicating high-risk conditions to DHS investment decision makers. The SPO has not adhered to key aspects of its defined process for managing program risks. In particular, the program's risk management repository, which is the tool used for capturing and tracking risks and their mitigation, has not included key risks that have been identified by stakeholders. For example, our analysis of repository reports covering all open and closed risks from April 2006 to September 2009 shows that the following program risks that have been identified by us and others were not captured in the repository:

- program cost and schedule risks briefed by the SPO to senior SBInet officials in January 2009, such as unplanned and unauthorized work impacting the credibility of the program cost data, and program cost and schedule plans lacking traceability;
- program schedule and cost estimate risks identified by the Defense Contract Management Agency prior to March 2009, such as contractor-provided documentation not permitting adequate assessment of critical path accuracy, and cost projections not including all applicable elements and thus lacking credibility; and
- the risk of the SPO's heavy reliance on contractors, reported by the DHS Office of Inspector General in June 2009.

In addition, the SBI Executive Director told us that the program faces a number of other risks, all but one of which were also not in the repository. These include the lack of well-defined acquisition management processes, staff with the appropriate acquisition expertise, and agreement on key system performance parameters.
According to program officials, some of these risks are not in the repository because Boeing is responsible for operating and maintaining the repository, and the specifics surrounding the risks and their mitigation are considered acquisition sensitive, meaning that they should not be shared with Boeing. In this regard, the officials acknowledged that the SPO needs a risk database independent of the contractor to manage these acquisition-sensitive risks. Further, the Risk Manager identified other limitations that have hindered the SPO's risk management efforts, along with recent actions intended to address them. For example:

- Risk review meetings were only being held once a month, which was resulting in lost opportunities to mitigate risks that were to be realized as actual problems within 30 days. As a result, the frequency of these meetings has been increased to twice a month.
- Risk information provided to senior SBI managers at monthly Joint Program Management Review Meetings was not sufficiently detailed, and thus has been expanded.
- Changes were being made to the risk management repository by contractor staff without sufficient justification and without the approval of the Joint Risk Review Board. For example, program officials cited an instance in which a risk's severity was changed from medium to high and no board member knew the reason for the change. As a result, the number of contractor staff authorized to modify data in the repository was reduced.
- The repository did not include all requisite information for all identified risks. For example, some risks were missing the rationale for the likelihood of occurrence and the potential impact. As a result, the Joint Risk Review Board has adopted a policy of not accepting risks that are missing requisite information.
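The requisite information just described, a risk's likelihood of occurrence and potential impact, is typically captured in a risk register in which a severity rating is derived from the two (often by multiplying them). A minimal sketch follows; the risks, scales, and thresholds are made up for illustration and are not the SPO's or Boeing's actual criteria.

```python
# A minimal risk register sketch. Scales, thresholds, and the example
# risks below are assumptions for illustration only.
def severity(likelihood, impact):
    """Rate a risk scored on 1-5 likelihood and impact scales."""
    score = likelihood * impact
    if score >= 15:
        return "high"
    if score >= 8:
        return "medium"
    return "low"

register = [
    {"risk": "schedule lacks traceability", "likelihood": 4, "impact": 4},
    {"risk": "staffing shortfall", "likelihood": 2, "impact": 3},
]

for entry in register:
    entry["severity"] = severity(entry["likelihood"], entry["impact"])
    print(entry["risk"], "->", entry["severity"])
# 4*4 = 16 rates "high"; 2*3 = 6 rates "low"
```

An entry missing its likelihood or impact rationale, like those cited above, cannot be scored or prioritized, which is why the review board began rejecting such entries.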
According to the Risk Manager, competing program priorities have resulted in insufficient resources devoted to risk management activities, which has contributed to the state of the SPO's risk management efforts. However, he added that the SPO is taking steps to improve risk management by revising risk management guidance, implementing a CBP-approved database tool for managing government-only risks, and increasing risk management training and oversight. Until the program's risk management is strengthened and effectively implemented, the program will continue to be challenged in its ability to forestall cost, schedule, and performance problems. As noted earlier, we recommended in September 2008 that the SPO assess SBInet risks and that the results of these assessments, along with alternative courses of action to address them, be provided to DHS leadership and congressional committees. According to program officials, shortly after receiving our draft report they briefed the DHS Acquisition Review Board on, among other things, SBInet risks. However, the briefing slides used for this meeting do not identify individual risks. Instead, the briefing contains one slide that only identifies "contributing factors" to changes in the program's schedule, including a reallocation of SBInet funding to SBI physical infrastructure, concurrencies and delays that have occurred in testing, and the need for environmental studies. The slides do not identify risks and alternative courses of action to address or mitigate them. In addition, program officials told us that they briefed congressional committees during the fall of 2008 on the program's status, which they said included disclosure of program risks. However, they did not have any documentation of these briefings to show which committees were briefed, when the briefings occurred, who was present, and what was discussed and disclosed.
Further, House Committee on Homeland Security staff stated that while program officials briefed them following our September 2008 report, specific program risks were not disclosed. As a result, it does not appear that either DHS or congressional stakeholders received timely information on risks facing the program at a crucial juncture in its life cycle. To the SPO’s credit, it has recently improved its disclosure of risks facing the program. In particular, the SBI Executive Director briefed the DHS Chief Information Officer in November 2009 on specific program risks. However, this briefing states that the risks presented were the Block 1 risks as captured in the contractor’s risk repository and that additional risks have not yet been formalized (see above discussion about repository limitations). Until all key risks are formally managed and regularly disclosed to department and congressional stakeholders, informed SBInet investment decision making will be constrained. As noted earlier, we reported on a number of SBInet program management weaknesses in September 2008, and we concluded that these weaknesses introduced considerable risk that the program would not meet expectations and would require time-consuming and expensive rework. In summary, these problems included a lack of clarity and certainty surrounding what technological capabilities would be delivered when, and a lack of rigor and discipline around requirements definition and management and test management. To address these problems and thereby reduce the program’s exposure to cost, schedule, and performance risks, we made eight recommendations. DHS concurred with seven of the recommendations and disagreed with one aspect of the remaining one. In summary, the department has not implemented two of the recommendations and has partially implemented the remaining six. See table 6 for a summary and appendix III for a detailed discussion of the status of each recommendation. 
DHS has yet to demonstrate that its proposed SBInet solution is a cost-effective course of action, and thus whether the considerable time and money being invested to acquire and deploy it is a wise and prudent use of limited resources. Given that the magnitude of the initial investment in SBInet spans more than 3 years of effort and totals hundreds of millions of dollars, coupled with the fact that the scope of the initial system's capabilities and areas of deployment have continued to shrink, the program is fraught with risk and uncertainty. As a result, the time is now for DHS to thoughtfully reconsider its proposed SBInet solution, and in doing so, to explore ways to both limit its near-term investment in an initial set of operational capabilities and develop and share with congressional decision makers reliable projections of the relative costs and benefits of longer-term alternatives for meeting the mission goals and outcomes that SBInet is intended to advance, or reasons why such information is not available and the uncertainty and risks associated with not having it. Compounding the risks and uncertainty surrounding whether the department is pursuing the right course of action are a number of system life cycle management concerns, including limitations in the integrated master schedule; shortcomings in the documentation available to inform key milestone decisions; and weaknesses in how requirements have been developed and managed, risks have been managed, and tests have been conducted. Collectively, these concerns mean that the program is not employing the kind of acquisition management rigor and discipline needed to reasonably ensure that proposed system capabilities and benefits will be delivered on time and on budget.
Because of SBInet’s decreased scope, uncertain timing, unclear costs relative to benefits, and limited life cycle management discipline and rigor, in combination with its size and mission importance, the program represents a risky undertaking. To minimize the program’s exposure to risk, it is imperative for DHS to move swiftly to first ensure that SBInet, as proposed, is the right course of action for meeting its stated border security and immigration management goals and outcomes, and once this is established, for it to move with equal diligence to ensure that it is being managed the right way. To this end, our prior recommendations to DHS relative to SBInet provide for strengthening a number of life cycle management processes, including requirements development and management and test management. Accordingly, we are not making additional recommendations that focus on these processes at this time. To address the considerable risks and uncertainties facing DHS on its SBInet program, we are making 12 recommendations. Specifically, we recommend that the Secretary of Homeland Security direct the Commissioner of U.S. Customs and Border Protection to limit future investment in the program to only work that meets one or both of the following two conditions: (1) is already under contract and supports deployment, acceptance, and operational evaluation of only those Block 1 capabilities (functions and performance levels) that are currently targeted for TUS-1 and AJO-1; or (2) provides the analytical basis for informing a departmental decision as to what, if any, expanded investment in SBInet, both in terms of capabilities (functions and performance) and deployment locations, represents a prudent, responsible, and affordable use of resources for achieving the department’s border security and immigration management mission. With respect to the first condition, we further recommend that the Secretary of Homeland Security direct the Commissioner of U.S. 
Customs and Border Protection to have the SBI Executive Director make it a program priority to ensure that:

- the integrated master schedule for delivering Block 1 capabilities to TUS-1 and AJO-1 is revised to address the key schedule estimating practices discussed in this report;
- the currently defined Block 1 requirements, including key performance parameters, are independently validated as complete, verifiable, and affordable, and any limitations found in the requirements are addressed;
- the Systems Engineering Plan is revised to include or reference documentation templates for key artifacts required at milestone gate reviews;
- all parent requirements that have been closed are supported by evidence of the closure of all corresponding and associated child requirements; and
- all significant risks facing the program are captured, mitigated, tracked, and periodically reported to DHS and congressional decision makers.

Also with respect to the first condition, we reiterate our prior recommendations, as stated in our September 2008 report, relative to establishing program commitments, implementing the Systems Engineering Plan, defining and managing requirements, and testing. With respect to the second condition, we further recommend that the Secretary of Homeland Security direct the Commissioner of U.S.
Customs and Border Protection to have the SBI Executive Director make it a program priority to ensure that:

- a life cycle cost estimate for any incremental block of SBInet capabilities that is to include capabilities and cover locations beyond those associated with the TUS-1 and AJO-1 deployments is developed in a manner that reflects the four characteristics of a reliable estimate discussed in this report;
- a forecast of the qualitative and quantitative benefits to be derived from any such incremental block of SBInet over its useful life, or reasons why such forecasts are not currently possible, is developed and documented;
- the estimated life cycle costs and benefits and associated net present value of any such incremental block of SBInet capabilities, or reasons why such an economic analysis cannot be performed, are prepared and documented; and
- the results of these analyses, or the documented reasons why such analyses cannot be provided, are provided to the Commissioner of U.S. Customs and Border Protection and the DHS Acquisition Review Board.

Also with respect to this second condition, we recommend that the Secretary of Homeland Security direct the Deputy Secretary of Homeland Security, as the Chair of the DHS Acquisition Review Board, to (1) decide, in consultation with the board and the Commissioner of U.S. Customs and Border Protection, what, if any, expanded investment in SBInet, both in terms of capabilities (functions and performance) and deployment locations, represents a prudent, responsible, and affordable use of resources for achieving the department's border security and immigration management mission; and (2) report the decision, and the basis for it, to the department's authorization and appropriations committees.
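The net present value analysis called for in these recommendations discounts each year's net benefit (benefits minus costs) back to today's dollars; an alternative is economically attractive only if the discounted sum is positive. A generic sketch follows; the discount rate and cash flows are made up for illustration and are not SBInet figures.

```python
# Net present value of a stream of net-benefit cash flows.
# Rate and cash flows below are illustrative, not actual SBInet figures.
def npv(rate, cash_flows):
    """cash_flows[t] is net benefit in year t (year 0 = today)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Example: $100M up-front cost, then $60M net benefit in each of
# years 1 and 2, discounted at 10 percent per year.
result = npv(0.10, [-100, 60, 60])
print(round(result, 2))  # 4.13 -> positive, so benefits exceed costs
```

Comparing the NPV of each candidate increment against alternatives, or documenting why the cash flows cannot yet be estimated, is the substance of the economic analysis the recommendation describes.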
In written comments on a draft of this report, signed by the Director, Departmental GAO/Office of Inspector General Liaison, and reprinted in appendix II, DHS stated that it agreed with ten of our recommendations and partially agreed with the remaining two. In this regard, it described ongoing and planned actions to address each, and it provided milestones for completing these actions. In addition, DHS provided technical comments, which we have incorporated in the report as appropriate. In agreeing with our first recommendation, however, DHS commented that the words “one of” were omitted before the two conditions contained in the recommendation. This interpretation is not correct; rather, the intent of our recommendation is to limit future investment on the program to either of the conditions, meaning “one or both of.” Notwithstanding DHS’s interpretation, we believe that the actions that it described to address this recommendation, which include freezing funding beyond the initial deployments to TUS-1 and AJO-1 until it completes a comprehensive reassessment of the program that includes an analysis of the cost and mission effectiveness of alternative technologies, are consistent with the intent of the recommendation. Nevertheless, we have slightly modified the recommendation to avoid any further confusion. Regarding its partial agreement with our recommendation for revising the integrated master schedule in accordance with a range of best practices embodied in our cost and schedule estimating guide, DHS acknowledged the merits of employing these practices and stated that it is committed to adopting and deploying them. However, it added that the current contract structure limits its ability to fully implement all the practices prior to completing the TUS-1 and AJO-1 deployments. 
We understand that program facts and circumstances create practical limitations associated with some of the practices, and believe that DHS’s planned actions are consistent with the intent of our recommendation. Regarding its partial agreement with our recommendation that reiterated a number of the recommendations that we made in a prior report, DHS stated that, while these prior recommendations reflect program management best practices and it continues to make incremental improvements to address each, the scope of the program had narrowed since these recommendations were made. As a result, DHS stated that these prior recommendations were not fully applicable until and unless a decision was made to move the program forward and conduct future deployments beyond TUS-1 and AJO-1. We acknowledge that the facts and circumstances surrounding the program have recently changed and that these changes impact the nature and timing of actions appropriate for implementing them. Moreover, we believe that DHS’s planned actions are consistent with the intent of our recommendation. DHS also commented that it believed that it had implemented two of our recommendations and that these recommendations should be closed. Because closure of our recommendations requires evidentiary validation of described actions, and because many of the actions that DHS described were planned rather than completed, we are not closing any of our recommendations at this time. As part of our recurring review of the status of all of our open recommendations, we will determine if and when the recommendations have been satisfied and thus can be closed. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to interested congressional committees and other parties. We will also send copies to the Secretary of Homeland Security, the Commissioner of the U.S. 
Customs and Border Protection, and the Director of the Office of Management and Budget. In addition, this report will be available at no cost on the GAO Web site at http://www.gao.gov. Should you or your offices have any questions on matters discussed in this report, please contact me at (202) 512-3439 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VI.

Our objectives were to determine the extent to which the Department of Homeland Security (DHS) has (1) defined the scope of its proposed Secure Border Initiative Network (SBInet) solution, (2) developed a reliable schedule for delivering this solution, (3) demonstrated the cost-effectiveness of this solution, (4) acquired this solution in accordance with key life cycle management processes, and (5) addressed our recent SBInet recommendations. To accomplish our objectives, we largely focused on the first increment of SBInet, known as Block 1. To determine the extent to which DHS has defined the scope of its proposed system solution, we reviewed key program documentation related to the Block 1 functional and performance requirements and deployment locations, such as the SBInet Acquisition Program Baseline and related acquisition decision memorandums, the Operational Requirements Document, the Operational Requirements Document Elements Applicable to Block 1 System, the Requirements Traceability Matrix, the Requirements Verification Matrix, and the SBInet Block 1 User Assessment. In addition, we compared Block 1 requirements that were baselined in October 2008 as part of the Critical Design Review (CDR) to the Block 1 requirements as defined as of September 2009 to identify what, if any, changes had occurred, and we interviewed program officials as to the reasons for any changes. 
We also compared the locations, including the miles of border associated with these locations, that were to receive Block 1 as of September 2008 to the locations specified in the program’s March 2009 Acquisition Program Baseline to identify any changes, and we interviewed program officials as to the reasons for any changes. Further, we compared the key performance parameters listed in the Operational Requirements Document, dated March 2007, to the key performance parameters in the program’s Acquisition Program Baseline dated March 2009. To determine the extent to which DHS has developed a reliable schedule for its proposed system solution, we analyzed the SBInet integrated master schedule as of June 2009 against the nine key schedule estimating practices in our Cost Estimating and Assessment Guide. In doing so, we used commercially available software tools to determine whether it, for example, included all critical activities, a logical sequence of activities, and reasonable activity durations. Further, we observed a demonstration of the schedule in June 2009 provided by contractor officials responsible for maintaining the schedule and program officials responsible for overseeing the contractor. In July 2009, we observed a demonstration of the program office’s efforts to reconcile the version of the integrated master schedule that is exported for the government’s use with the version of the schedule that the prime contractor uses to manage the program. During this demonstration, we discussed some of our concerns regarding the integrated master schedule with program officials and we inquired about deviations from some of the key practices. Subsequently, the program office provided us with a revised version of the integrated master schedule as of August 2009, which we analyzed. In doing so, we repeated the above described steps. 
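The kinds of automated schedule checks described above can be sketched as follows. This is a minimal illustration only, not GAO's or the contractor's actual tooling; the activity data, function name, and 200-day duration threshold are invented assumptions.

```python
# Hypothetical sketch of two automated schedule checks: flagging activities
# that have no predecessor links (broken sequencing logic) and activities
# with zero or implausibly long durations. All data below are invented.

def schedule_findings(activities, start_id, max_days=200):
    """Return (activity id, issue) pairs for questionable activities."""
    findings = []
    for act in activities:
        # Every activity except the declared start should have predecessors.
        if act["id"] != start_id and not act["predecessors"]:
            findings.append((act["id"], "no predecessor logic"))
        # Zero or very long durations suggest placeholder or summary tasks.
        if act["duration_days"] <= 0 or act["duration_days"] > max_days:
            findings.append((act["id"], "questionable duration"))
    return findings

activities = [
    {"id": "START", "duration_days": 1,   "predecessors": []},
    {"id": "A1",    "duration_days": 30,  "predecessors": ["START"]},
    {"id": "A2",    "duration_days": 365, "predecessors": ["A1"]},  # too long
    {"id": "A3",    "duration_days": 15,  "predecessors": []},      # dangling
]

for act_id, issue in schedule_findings(activities, start_id="START"):
    print(act_id, issue)
```

Real schedule-analysis tools apply many more tests (float, critical path, resource loading), but the pattern of scanning every activity against a rule is the same.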
Further, we characterized the extent to which the revised schedule met each of the practices as either Not Met, Minimally Met, Partially Met, Substantially Met, or Met. In addition, we analyzed changes in the scheduled Block 1 deployment dates presented at each of the monthly program reviews for the 1-year period beginning in December 2008 and ending in November 2009.

To determine the extent to which DHS has demonstrated the cost-effectiveness of the proposed solution, we evaluated the reliability of the Block 1 life cycle cost estimate and the definition of expected system benefits, both of which are addressed below.

Cost estimate: We first observed a demonstration of the cost model used to develop the estimate, which was provided by the contractor officials who are responsible for maintaining it and the program officials who are responsible for overseeing the contractor. We then analyzed the derivation of the cost estimate relative to 12 key practices associated with four characteristics of a reliable estimate. As defined in our Cost Estimating and Assessment Guide, these four characteristics are comprehensive, well-documented, accurate, and credible, and the practices address, for example, the methodologies, assumptions, and source data used. We also interviewed program officials responsible for the cost estimate about the estimate’s derivation. We then characterized the extent to which each of the four characteristics was met as either Not Met, Minimally Met, Partially Met, Substantially Met, or Met. To do so, we scored each of the 12 individual key practices associated with the four characteristics on a scale of 1-5 (Not Met = 1, Minimally Met = 2, Partially Met = 3, Substantially Met = 4, and Met = 5), and then averaged the individual practice scores associated with a given characteristic to determine the score for that characteristic. 
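The averaging scheme just described can be sketched in a few lines. This is a minimal illustration; the example ratings are invented, not GAO's actual scores for any characteristic.

```python
# Sketch of the scoring scheme described above: each key practice is rated
# on a 1-5 scale, and a characteristic's score is the average of the scores
# of its associated practices. The example ratings below are invented.
RATING = {
    "Not Met": 1,
    "Minimally Met": 2,
    "Partially Met": 3,
    "Substantially Met": 4,
    "Met": 5,
}

def characteristic_score(practice_ratings):
    """Average the numeric scores of the practices under one characteristic."""
    scores = [RATING[r] for r in practice_ratings]
    return sum(scores) / len(scores)

# A characteristic whose three practices were rated Partially Met,
# Substantially Met, and Met averages (3 + 4 + 5) / 3 = 4.0.
print(characteristic_score(["Partially Met", "Substantially Met", "Met"]))
```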
Benefits: We interviewed program officials to identify any forecasts of qualitative and quantitative benefits that the system was to produce. In this regard, we were directed to the SBInet Mission Need Statement dated October 2006, which we analyzed. In addition, we reviewed our prior reports on the Secure Border Initiative (SBI), including a report on the SBI expenditure plan, which is a plan that DHS has been required by statute to submit to the House and Senate Appropriations Committees to, among other things, identify expected system benefits. We also interviewed program officials to determine the extent to which the system’s life cycle costs and expected benefits had been analyzed together to economically justify DHS’s proposed investment in SBInet.

To determine the extent to which DHS has acquired its proposed system solution in accordance with key life cycle management processes, we focused on three key processes: the system engineering approach, requirements development and management, and risk management, each of which is addressed below.

Systems engineering approach: We compared the program’s defined system engineering approach, as defined in the SBInet Systems Program Office’s (SPO) Systems Engineering Plan, to DHS and other relevant guidance. To determine the extent to which the defined systems engineering approach had been implemented, we focused on two major “gates” (i.e., life cycle milestone reviews)—the CDR and the Deployment Readiness Review. For each of these reviews, we compared the package of documentation prepared for and used during these reviews to the program’s defined system engineering approach as specified in the Systems Engineering Plan to determine what, if any, deviations existed. We also interviewed program officials as to the reason for any deviations. 
Requirements development and management: We compared relevant requirements management documentation, such as the Requirements Development and Management Plan, the Requirements Management Plan, the Configuration and Data Management Plan, the Operational Requirements Document, the system-level requirements specification, and the component-level requirements specifications, to relevant requirements development and management guidance to identify any variances, focusing on the extent to which requirements were properly baselined, adequately defined, and fully traced. With respect to requirements baselining, we compared the component and system requirements as of September 2008, which were approved during the CDR that concluded in October 2008, to the component and system requirements as of November 2008, and identified the number and percentage of requirements changes. We also interviewed program officials as to the reasons for any changes. For requirements definition, we assessed the extent to which operational requirements that were identified as poorly defined in November 2007 had been clarified in the Operational Requirements Document, Elements Applicable to Block 1 System, dated November 2008. In doing so, we focused on those operational requirements that are associated with Block 1. We also traced these Block 1 operational requirements to the lower-level system requirements (i.e., system and component requirements) to determine how many of the lower-level requirements were associated with any unchanged operational requirements. For requirements traceability, we randomly selected a sample of 60 requirements from 1,008 component requirements in the program’s requirements management tool, known as the Dynamic Object-Oriented Requirements System (DOORS), as of July 2009. 
Before doing so, we reviewed the quality of the access controls for the database, and we interviewed program and contractor officials and received a DOORS tutorial to understand their respective roles in requirements management and development and the use of DOORS. Once satisfied as to the reliability of the data in DOORS, we then traced each of the 60 requirements backwards to the system requirements and then to the operational requirements, and forward to design requirements and verification methods. Because we followed a probability procedure based on random selection, we are 95 percent confident that each of the confidence intervals in this report will include the true values in the study population. We used statistical methods appropriate for audit compliance testing to estimate 95 percent confidence intervals for the traceability of requirements in our sample.

Risk management: We reviewed relevant documentation, such as the SBInet Risk/Issue/Opportunity Management Plan, the SBInet SPO Risk/Issue/Opportunity Management Process, and the SBInet Risk Management Policy, as well as extracts from the SBInet risk management database and minutes of meetings and agendas from the Risk Management Team and the Joint Risk Review Board. In doing so, we compared the risk management process defined in these documents to relevant guidance to determine the extent to which the program has defined an effective risk management approach. Further, we observed a demonstration of the risk management database, and we compared SBInet risks identified by us and others, including the SBI Executive Director, to the risks in the database to determine the extent to which all key risks were being actively managed. Further, we discussed actions recently taken and planned to improve risk management with the person responsible for SBInet risk management. 
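The 95 percent confidence intervals for the requirements traceability sample described above can be illustrated with a simple normal-approximation calculation. This is our sketch only: GAO used statistical methods appropriate for audit compliance testing, which may differ, and the 55-of-60 compliance count below is a hypothetical figure, not a reported result.

```python
import math

def proportion_ci(successes, n, population, z=1.96):
    """Approximate 95% confidence interval for a proportion estimated from
    a simple random sample, with a finite population correction applied
    (normal approximation; z = 1.96 for 95% confidence)."""
    p = successes / n
    fpc = math.sqrt((population - n) / (population - 1))
    half_width = z * math.sqrt(p * (1 - p) / n) * fpc
    return max(0.0, p - half_width), min(1.0, p + half_width)

# Hypothetical example: if 55 of the 60 sampled requirements (drawn from
# the 1,008 component requirements) traced correctly, the estimated
# traceability rate and its interval would be:
low, high = proportion_ci(55, 60, 1008)
print(f"{low:.3f} to {high:.3f}")
```

Because the sample (60) is a nontrivial fraction of the population (1,008), the finite population correction narrows the interval slightly relative to the uncorrected formula.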
We also reviewed briefings and related material provided to DHS leadership during oversight reviews of SBInet and interviewed program officials to ascertain the extent to which program risks were disclosed at these reviews and at meetings with congressional committees. In this regard, we also asked cognizant staff with the House Homeland Security Committee about the extent to which program risks were disclosed by program officials in status briefings. To determine the extent to which DHS has addressed our prior SBInet recommendations, we focused on the eight recommendations that we made in our September 2008 report. For each recommendation, we leveraged the work described above, augmenting it as necessary to determine any plans or actions peculiar to a given recommendation. For example, to determine the status of efforts to address our prior recommendation related to SBInet testing, we reviewed key testing documentation, such as the Test and Evaluation Master Plan; SBInet component and system qualification test plans, test procedures, and test reports; program management reviews; program office briefings; and DHS Acquisition Review Board decision memoranda. We also interviewed program officials. To support our work across the above objectives, we also interviewed officials from the Department of Defense’s Defense Contract Management Agency, which provides contractor oversight services, to understand its reviews of the contractor’s integrated master schedule, requirements development and management activities, risk management practices, and testing activities. We also reviewed Defense Contract Management Agency reports pertaining to documentation, such as monthly status reports and the integrated master schedule and cost reporting. To assess the reliability of the data that we relied on to support the findings in the report, we reviewed relevant program documentation to substantiate evidence obtained through interviews with knowledgeable agency officials, where available. 
We determined that the data used in this report are sufficiently reliable, and we have made appropriate attribution indicating the sources of the data used. We performed our work at the Customs and Border Protection headquarters and contractor facilities in the Washington, D.C., metropolitan area and at a contractor facility and a Defense Contract Management Agency office in Huntsville, Alabama. We conducted this performance audit from December 2008 to May 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In September 2008, we reported on a number of SBInet program management weaknesses and associated risks related to establishing program commitments, developing an integrated master schedule, defining and implementing a life cycle management approach, developing and managing requirements, and testing. To address these weaknesses and risks, we made a number of recommendations. Table 7 provides details on DHS efforts to address each recommendation. Our research has identified a range of best practices associated with effective schedule estimating. These are (1) capturing all activities, (2) sequencing all activities, (3) assigning resources to all activities, (4) establishing the duration of all activities, (5) integrating activities horizontally and vertically, (6) establishing the critical path for all activities, (7) identifying reasonable float time between activities, (8) conducting a schedule risk analysis, and (9) updating the schedule using logic and durations. 
We assessed the extent to which the SBInet integrated master schedule, dated August 2009, met each of the nine practices as either Not Met (the program provided no evidence that satisfies any portion of the criterion), Minimally Met (the program provided evidence that satisfies less than one-half of the criterion), Partially Met (the program provided evidence that satisfies about one-half of the criterion), Substantially Met (the program provided evidence that satisfies more than one-half of the criterion), or Met (the program provided evidence that satisfies the entire criterion). Table 8 shows the detailed results of our analysis. Our research has identified 12 practices that are integral to effective program life cycle cost estimating. These 12 practices in turn relate to four characteristics of a high-quality and reliable cost estimate:

Comprehensive: The cost estimate should include all government and contractor costs over the program’s full life cycle, from program inception through design, development, deployment, and operation and maintenance to retirement. It should also provide sufficient detail to ensure that cost elements are neither omitted nor double-counted, and it should document all cost-influencing ground rules and assumptions.

Well-documented: The cost estimate should capture in writing things such as the source and significance of the data used, the calculations performed and their results, and the rationale for choosing a particular estimating method or reference. Moreover, this information should be captured in such a way that the data used to derive the estimate can be traced back to, and verified against, their sources. Finally, the cost estimate should be reviewed and accepted by management to demonstrate confidence in the estimating process and the estimate. 
Accurate: The cost estimate should not be overly conservative or optimistic, and should be, among other things, based on an assessment of most likely costs, adjusted properly for inflation, and validated against an independent cost estimate. In addition, the estimate should be updated regularly to reflect material changes in the program and actual cost experience with the program. Further, steps should be taken to minimize mathematical mistakes and their significance and to ground the estimate in documented assumptions and a historical record of actual cost and schedule experiences with other comparable programs.

Credible: The cost estimate should discuss any limitations in the analysis due to uncertainty or biases surrounding data or assumptions. Major assumptions should be varied and other outcomes computed to determine how sensitive the estimate is to changes in the assumptions. Risk and uncertainty inherent in the estimate should be assessed and disclosed. Further, the estimate should be properly verified by, for example, comparing the results with an independent cost estimate.

Our analysis of the $1.3 billion SBInet life cycle cost estimate relative to each of the 12 best practices, as well as to each of the four characteristics, is summarized in table 9. A detailed analysis relative to the 12 practices is in table 10.

In addition to the contact named above, Deborah Davis (Assistant Director), David Alexander, Rebecca Alvarez, Carl Barden, Tisha Derricotte, Neil Doherty, Nancy Glover, Dan Gordon, Cheryl Dottermusch, Thomas J. Johnson, Kaelin P. Kuhn, Jason T. Lee, Lee McCracken, Jamelyn Payan, Karen Richey, Karl W.D. Seifert, Matt Snyder, Sushmita Srikanth, Jennifer Stavros-Turner, Stacey L. Steele, and Karen Talley made key contributions to this report. 
The technology component of the Department of Homeland Security's (DHS) Secure Border Initiative (SBI), referred to as SBInet, is to put observing systems along our nation's borders and provide Border Patrol command centers with the imagery and related tools and information needed in deciding whether to deploy agents. SBInet is being acquired and deployed in incremental blocks of capability, with the first block to cost about $1.3 billion. Because of the program's importance, size, and challenges, GAO was asked to, among other things, determine the extent to which DHS has (1) defined the scope of its proposed SBInet solution, (2) developed a reliable schedule for this solution, (3) demonstrated the cost-effectiveness of this solution, and (4) acquired the solution using key management processes. To do this, GAO compared key program documentation to relevant guidance and industry practices. DHS has defined the scope of the first incremental block of SBInet capabilities; however, these capabilities have continued to shrink from what the department previously committed to deliver. For example, the geographical "footprint" of the initially deployed capability has been reduced from three border sectors spanning about 655 miles to two sectors spanning about 387 miles. Further, the stringency of the performance capabilities has been relaxed, to the point that, for example, system performance will be deemed acceptable if it identifies less than 50 percent of items of interest that cross the border. The result is a system that is unlikely to live up to expectations. DHS has not developed a reliable integrated master schedule for delivering the first block of SBInet. Specifically, the schedule does not sufficiently comply with seven of nine key practices that relevant guidance states are important to having a reliable schedule. For example, the schedule does not adequately capture all necessary activities, assign resources to them, and reflect schedule risks. 
As a result, it is unclear when the first block will be completed, and continued delays are likely. DHS has also not demonstrated the cost-effectiveness of this first system block. In particular, it has not reliably estimated the costs of this block over its entire life cycle. To do so requires DHS to ensure that the estimate meets key practices that relevant guidance states are important to having an estimate that is comprehensive, well-documented, accurate, and credible. However, DHS's cost estimate for the initial block does not sufficiently possess any of these characteristics. Further, DHS has yet to identify expected benefits from the initial block, whether quantitative or qualitative, and analyze them relative to costs. As a result, it does not know whether its planned investment will produce mission value commensurate with costs. DHS has also not acquired the initial SBInet block in accordance with key life cycle management processes. While processes associated with, among other things, requirements development and management and risk management, have been adequately defined, they have not been adequately implemented. For example, key risks have not been captured in the risk management repository and thus have not been proactively mitigated. As a result, DHS is at increased risk of delivering a system that does not perform as intended. SBInet's decreasing scope, uncertain timing, unclear value proposition, and limited life cycle management discipline and rigor are due to a range of factors, including limitations in both defined requirements and the capabilities of commercially available system components, as well as the need to address competing program priorities, such as meeting aggressive system deployment milestones. As a result, it remains unclear whether the department's pursuit of SBInet is a cost-effective course of action, and, if it is, that it will produce expected results on time and within budget. 
The International Monetary Fund, established in 1945, is a cooperative, intergovernmental, monetary and financial institution. As of April 1999, it had 182 members. The IMF’s first purpose is the promotion of international monetary cooperation. Its Articles of Agreement (as amended), or charter, also provide that it may make its resources available to members experiencing balance-of-payments problems; this is to be done under “adequate safeguards.” Making resources available to counter balance-of-payments problems is intended to shorten the duration and lessen the degree of these problems and avoid “measures destructive of national or international prosperity.” Member countries govern the IMF through the Executive Board—the IMF’s primary decision-making body. The IMF Executive Board comprises 24 Executive Directors who are appointed or elected by one or more IMF member countries. The U.S. Executive Director, for instance, represents the United States at the IMF. When a country joins the IMF and later when IMF members agree to increase the IMF’s capital, the country pays a quota or a capital subscription to the organization. The quota serves several purposes: (1) the funds paid to the IMF contribute to the pool of funds that the IMF uses to lend to members facing financial problems and (2) the amount of quota paid determines the voting power of the member. The IMF calculates the quota by assessing each member country’s economic size and characteristics—economically larger countries pay relatively larger quota amounts. The United States pays the largest quota and thus has the largest single share of voting rights. The IMF also has access to lines of credit provided by member countries under the General Arrangements to Borrow and, more recently, under the New Arrangements to Borrow. 
As part of the IMF’s mission to promote economic and financial cooperation among its members, the IMF may provide financial assistance to countries facing actual or potential balance-of-payments difficulties that request such assistance. Balance-of-payments difficulties may have short-term, as well as longer-term, aspects. The IMF’s approach to alleviating a country’s balance-of-payments difficulties is intended to address both aspects, as needed. As such, the IMF’s approach has two main components—financing and conditionality—that are intended to address both the immediate crisis as well as the underlying factors that contributed to the difficulties. Although financing is designed to help alleviate the short-term balance-of-payments crisis by providing a country with needed reserves, it may also support the longer-term reform efforts by providing needed funding. Similarly, although conditionality, usually in the form of performance criteria and policy benchmarks, is intended primarily to address the underlying causes of the balance-of-payments difficulties over the medium term, it can also assist in alleviating the immediate balance-of-payments problems by, for example, reducing the country’s aggregate demand, including imports. The access to and disbursement of IMF financial assistance is conditioned upon the adoption and pursuit of economic and structural policy measures that the IMF and recipient countries negotiate. This IMF “conditionality” aims to alleviate the underlying economic difficulty that led to the country’s balance-of-payments problem and to ensure repayment to the IMF. As the reasons for and magnitude of countries’ balance-of-payments problems have expanded (due, in part, to the growing importance of external financing and changes in the international monetary system since the 1970s), conditionality has also expanded. 
According to IMF staff, conditionality has moved beyond the traditional focus of reducing aggregate demand, which was appropriate for relieving temporary balance-of-payments difficulties, typically in industrial economies. Structural policies—such as reducing the role of government in the economy and opening the economy to outside competition—that take longer to implement and are aimed at increasing the capacity for economic growth became an important part of conditionality. More recently, the financial crises in Mexico (1994-95) and in Asia and Russia (1997-99) have resulted in an increased focus on strengthening countries’ financial sectors and the gradual opening of the economy to international capital flows. The main instruments used by the IMF to provide financial assistance are

- Stand-by Arrangements (SBA), which provide short-term assistance for problems of a temporary nature;
- extended arrangements, under the Extended Fund Facility (EFF), which provide longer-term balance-of-payments assistance for problems arising from structural maladjustments (typically, when established, a program lists the general objectives for the first year; objectives for subsequent years are spelled out in program reviews);
- the Supplemental Reserve Facility (SRF), provided under an SBA or extended arrangement, which provides assistance for exceptional balance-of-payments problems owing to a large and short-term financing need resulting from a sudden and disruptive loss of market confidence reflected in pressure on the capital account and reserves; it is likely to be used when the magnitude of capital outflows may threaten the international monetary system; and
- the Enhanced Structural Adjustment Facility (ESAF), which is the principal means for providing financial support (highly concessional, or low-interest, loans) to low-income countries facing protracted balance-of-payments problems.

The first three arrangements are funded through the IMF’s general resources account (GRA). 
The ESAF is funded through separate resources. A country may also draw on its “reserve tranche,” that is, call on funds that initially represented about one-quarter of its quota. Except for the highly concessional ESAF loans, the country pays market-based interest rates on money it receives. The SRF is a new facility that charges a higher amount for its use than other IMF instruments. According to the IMF, for a member country to use this facility, there should be a reasonable expectation that the implementation of strong adjustment policies and adequate financing will result in the early correction of its difficulties. IMF financial assistance may be a part of a larger package of financial assistance committed to countries in crisis. Brazil, for example, received commitments for a package that included about $18 billion to be provided by the IMF and approximately $4.5 billion each from the World Bank and the Inter-American Development Bank, primarily to provide improved social safety nets and banking reform. Additional bilateral sources agreed to provide $14.5 billion in financing, primarily to guarantee credits extended to Brazil from the Bank for International Settlements. The resulting package for Brazil amounted to more than $41 billion in commitments. An IMF program can also serve as a catalyst for debt relief from other creditors. For example, to qualify for debt relief from the Paris Club of creditor governments, countries must reach agreement with the IMF on a reform program. The Paris Club conditions its debt relief on countries’ implementation of economic and structural reforms under IMF-supported financing programs. Part of the motivation for Russia’s IMF arrangement in 1996 was to facilitate its debt rescheduling from the Paris Club. The IMF’s general framework for establishing a financial assistance arrangement is intended to be applied on a case-by-case basis that considers each country’s individual circumstances. 
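As a quick arithmetic check, the components of the Brazil package described above sum to the cited total. This is a minimal sketch; the figures are the approximate amounts given in the text, in billions of U.S. dollars.

```python
# Rough tally of the Brazil assistance package components quoted above,
# in billions of U.S. dollars (the text gives only approximate figures).
package = {
    "IMF": 18.0,
    "World Bank": 4.5,
    "Inter-American Development Bank": 4.5,
    "Bilateral sources": 14.5,
}

total = sum(package.values())
# 18.0 + 4.5 + 4.5 + 14.5 = 41.5, consistent with "more than $41 billion"
print(f"Total commitments: about ${total:.1f} billion")
```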
This process gives the IMF considerable latitude in establishing the balance-of-payments need, the amount and timing of resource disbursements, and the conditions for disbursements. Under its Articles of Agreement, as amended, the IMF limits financial assistance to those countries with a balance-of-payments need. However, the Articles do not precisely define “need,” and, according to IMF documents, the IMF’s Executive Board has been reluctant to establish guidelines that would add greater specificity to the charter’s general criteria. The specific conditions that the IMF and the country authorities negotiate are intended to address the underlying problems that contributed to the country’s balance-of-payments difficulty, while ensuring repayment to the IMF. These conditions include a variety of changes in a country’s fiscal, monetary, or structural policies. After the country completes any “prior actions” and the IMF Executive Board approves the financial arrangement, the program is to take effect and the country is eligible to receive its first disbursement of funds. We found that the IMF generally followed this process for the six countries we reviewed. The formal process the IMF generally uses to establish countries’ financial arrangements is outlined in figure 1. IMF staff, the IMF Executive Board members, and country authorities may also consult informally at any stage throughout this process. Establishment of an IMF financial arrangement begins with discussions between IMF staff and country officials and continues through the IMF Executive Board’s approval of the arrangement. If a member country determines that it is experiencing or could experience a balance-of- payments problem, it can initiate discussions with IMF staff that may lead it to request IMF financial assistance. These discussions can occur at any time, including during the country’s annual consultation with the IMF or through informal consultations requested by the member. 
At these consultations, IMF staff and country authorities discuss economic data and policies as well as
- the nature of the country’s balance-of-payments difficulty,
- the amount of financing expected to be provided by various sources and the amount that may be requested from the IMF,
- the instruments under which the IMF resources could be provided,
- the potential schedule for reviewing countries’ performance and disbursing funds, and
- the likely conditions for assessing countries’ performance under the program.

IMF staff noted that key tasks during country missions to conduct the negotiations are (1) the collection of extensive data describing the country’s economic conditions and (2) an analysis of those data to recommend the amount and timing of the IMF financial assistance and conditionality. The IMF’s review of a country’s economy is an iterative process that is often based on country-provided data, projections of key macroeconomic variables, and judgment by the IMF staff and country officials. The design of an IMF program is complicated, and negotiations between IMF staff and country authorities can be difficult for several reasons. First, the countries are facing an adverse or uncertain economic situation. Second, the negotiators may disagree on the type, pace, or feasibility of the reforms needed to help overcome the difficulty. In some cases, needed reforms reflect long-standing problems and are difficult to undertake due to political constraints. For example, reforms may entail changes to labor practices opposed by unions or removal of tax preferences benefiting certain sectors. Third, conditionality and financing are based, in part, on projections of key variables such as estimated growth rates and access to external financing. Fourth, in some cases, the country may lack reliable data for analyzing the current situation or making projections. IMF staff and country authorities may or may not reach agreement on a package of financing and conditionality. 
If they do not reach agreement, then the member may seek other means for addressing its difficulty. If they reach agreement, the arrangement is presented to the IMF Executive Board for approval. IMF staff generally brings to the IMF Board only arrangements it believes the IMF Board will accept. After the country satisfies any required “prior actions” and the IMF Executive Board approves the arrangement, the arrangement will take effect and the country can get funds from the IMF. Under the IMF’s Articles of Agreement, as amended, the IMF considers any of the following three elements to be a basis for providing financial assistance from the GRA: the country’s balance of payments, the country’s reserve position, and developments in its reserves. However, the Articles do not precisely define the elements or provide criteria for assessing need. While the IMF Executive Board has not established guidelines that would add greater specificity to the Articles’ general criteria, over time the IMF has developed a broad framework that serves as a basis for analyzing a country’s economy and forming judgments regarding the existence and magnitude of balance-of-payments deficits and the adequacy of international reserves. The first element—the country’s balance of payments—represents the economy’s external financing requirement and equals the sum of a member’s current and capital account balances. The current account primarily includes exports and imports in goods and services; transfers; and income payments, such as interest payments. The capital account provides summary data on the changes in the net foreign assets of domestic residents arising from transactions such as external borrowing or repayments, foreign direct investment, portfolio investment (equity and bonds), and short-term capital movements. 
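The accounting identity described above can be sketched as a short calculation. The figures below are entirely hypothetical, chosen only to illustrate how the current and capital accounts combine into the overall balance; they are not data from this report.

```python
# Stylized balance-of-payments accounting, in billions of U.S. dollars.
# All figures are hypothetical, for illustration only.

# Current account: trade in goods and services, transfers, and income payments.
exports, imports = 60.0, 75.0
net_transfers = 2.0
net_income = -5.0          # e.g., interest paid on external debt
current_account = (exports - imports) + net_transfers + net_income

# Capital account: changes in net foreign assets from external borrowing,
# foreign direct investment, portfolio investment, and short-term flows.
net_borrowing = 8.0
net_fdi = 6.0
net_portfolio = -2.0       # a portfolio outflow
capital_account = net_borrowing + net_fdi + net_portfolio

# Overall balance: the economy's external financing requirement.
overall_balance = current_account + capital_account
print(f"Current account: {current_account:+.1f}")
print(f"Capital account: {capital_account:+.1f}")
print(f"Overall balance: {overall_balance:+.1f}")  # negative => deficit to be
                                                   # financed, e.g., from reserves
```

In this illustration the current account deficit is only partly offset by capital inflows, leaving an overall deficit that would have to be financed by drawing down reserves or by external assistance.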
The second element—the country’s reserve position—refers to the amount of resources (hard currency, reserve position in the IMF, special drawing rights, and monetary gold) that can be used to pay for imports and make payments on external debt. IMF documents indicate that the third element—developments in the reserve position—has a very narrow application. This element is intended to ensure that members of the IMF whose currency is a reserve currency (such as the United States) would be able to use IMF resources when requested, despite the absence of a need as outlined in the first two elements. The IMF’s framework has enabled it to consider countries’ individual circumstances and changes in the international monetary system. These include increased capital flows between countries and changes in the composition and source of those flows as well as the shift in the primary recipients of IMF financial assistance from industrialized countries to developing countries. Given such considerations, decisions about a country’s need for IMF resources have become more difficult. According to IMF documents, determining need based solely on the overall balance-of-payments position is relatively clear-cut because the balance is either in surplus or deficit. Assessing need based on whether a country’s foreign reserves are sufficient requires a greater degree of judgment because no precise criteria define the appropriate level of reserves. In determining the sufficiency of a country’s reserves, the IMF can adjust the definition of “sufficient” reserves to account for such country-specific factors as the volume and variability of exports and imports, the size and variability of capital flows, the amount of short-term liabilities, and the nature of the country’s exchange rate regime. Significant declines in the foreign reserve position may be of concern if they indicate that a country may have difficulty financing its imports or repaying its external debt in the future. 
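As noted above, no precise criteria define a "sufficient" level of reserves. One conventional yardstick—used here purely for illustration, and not prescribed by the IMF framework described in this report—is months of import cover, shown with hypothetical figures:

```python
# A conventional (illustrative) yardstick for reserve adequacy: months of
# import cover. The report prescribes no specific metric; this is a common
# rule of thumb, shown with hypothetical figures.
reserves = 24.0          # foreign reserves, billions of U.S. dollars
annual_imports = 96.0    # billions of U.S. dollars

import_cover_months = reserves / (annual_imports / 12)
print(f"Import cover: {import_cover_months:.1f} months")
```

Country-specific factors such as the variability of trade flows and the size of short-term liabilities would, as the report explains, change how such a figure is interpreted.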
IMF documents indicate that the Executive Board has been reluctant to establish guidelines that would add greater specificity to the general criteria for balance-of-payments need set forth in the Articles of Agreement, as amended. Members of the IMF Board have been concerned that “codification” of the concept of need would create unnecessary inflexibility. For this reason, they urged that the concept of need should continue to be applied on a case-by-case basis. As a result, application of this concept involves considerable data analysis as well as judgment. The IMF uses somewhat different criteria for low-income countries requesting resources under the ESAF. In contrast to the criteria for demonstrating a need for GRA resources, when assessing whether a member that meets income and other criteria for ESAF eligibility has a protracted balance-of-payments problem, emphasis is to be placed on the components of its balance of payments rather than solely on its overall balance-of-payments position. According to IMF staff, the underlying balance-of-payments problems of many low-income countries did not necessarily result in conditions similar to those reflecting the GRA criteria; that is, an actual balance-of-payments deficit or low reserves. For this reason, emphasis would have to be placed on those indicators that would normally evidence “poor external performance.” Such indicators include a deterioration in the terms of trade and diminished access to capital markets. Moreover, protracted balance-of-payments problems would often be reflected by exchange rate restrictions, payments arrears, or prolonged use of IMF resources. As with the GRA criteria, the IMF Executive Board agreed to continue to use flexibility in applying the ESAF criteria. Some Board members have expressed the opinion that a low-income country, by definition, has a protracted balance-of-payments problem. A country’s balance-of-payments need is also assessed in light of internal and external developments. 
For example, owing to a rundown in its reserves, a country may allow its currency to float more freely until adjustment policies take effect and reserves are rebuilt. In deciding how much assistance to provide, the IMF considers the country’s resource needs, IMF quota, outstanding IMF resources, and previous performance in using IMF financing; the strength of its adjustment program; and its capacity to repay the IMF. While the IMF has discretion in deciding the total amount of resources it will provide to a country, disbursements are to be limited to the amount needed by the country. If the IMF later discovers that a country drew IMF funds without a need for those funds (that is, the information on which the financing need was determined was later found to be incorrect), it can undertake remedial action. The IMF Executive Board encourages countries to request its assistance early and to undertake corrective actions early in order to minimize the potential costs and disruption of correcting the underlying causes of a balance-of-payments problem. However, a number of factors—including the belief that the problem is temporary or can be solved without official assistance, or the concern that political and social problems may arise from needed structural changes—can cause some countries to hesitate in asking for IMF assistance. For example, Korea did not draw on IMF resources until its reserves had fallen substantially. Once the IMF staff has determined the balance-of-payments needs of a member and its eligibility to draw resources, the IMF must be satisfied that the member can meet its repayment obligations to the IMF and that the policy measures agreed to are sufficient to overcome the member’s balance-of-payments problem. The IMF does this, in part, through conditionality. Fundamental weaknesses in the underlying economy, such as a large budget deficit and/or high inflation, or in the structure of the financial or corporate sectors, may contribute to the balance-of-payments problem of a country. 
Conditionality may vary with each country’s individual program as it seeks to address these weaknesses. As such, according to the IMF, there is no “rigid and inflexible” set of operational rules in the establishment of a country’s conditionality program. The process is one of negotiation between the country authorities and the IMF to reach agreement on a number of issues, ranging from economic assumptions to the speed and magnitude of structural reforms. The IMF arrangement often occurs within the context of the country’s larger reform efforts. As a result, not all of a country’s policies or reform efforts may be included as conditions of the IMF arrangement. For example, some structural reforms and trade liberalization measures may be mentioned in the arrangement reached between the IMF and the country authorities, but only the actions the IMF judges to be particularly important for achieving program objectives will become performance criteria and benchmarks within the arrangement. IMF officials noted that achieving performance criteria is not the ultimate goal of conditionality; rather, the performance criteria are selected as clearly observable and measurable indicators that a country is making progress toward the overall program goals, such as strengthening the balance of payments and reducing inflation. The IMF uses two types of performance criteria that generally must be met for members to qualify for disbursements. The first are quantitative performance criteria, or macroeconomic indicators, such as monetary and budgetary targets. The second are structural performance criteria, or quantifiable/observable actions that demonstrate progress toward the borrower country’s structural reform goals. Benchmarks are points of reference against which progress may be monitored but disbursements are generally not dependent on meeting them. 
Benchmarks are not necessarily quantitative and frequently relate to structural variables and policies, such as tax reform and privatizing state-owned enterprises. IMF conditionality tends to focus on three areas: fiscal, monetary, and structural. These three areas are designed to support a general framework that aims to strengthen the balance-of-payments position, achieve market-based growth, and decrease the role of the government in a country’s economy. Borrower country IMF arrangements generally consist of a combination of efforts in these three areas, which depend on the country’s particular circumstances. According to the IMF, poor fiscal management in a member’s economy generally has been a major factor underlying such problems as high inflation, large current account deficits, and sluggish growth. Large and persistent budget deficits can overheat the economy, contributing to high inflation (especially when financed by the printing of money), excess imports, and low domestic savings. IMF staff and the member country negotiate ways to address this fiscal deficit, including instituting reductions in government spending and increases in tax revenues. Numerical targets for the fiscal balance consistent with these reforms are often part of a country’s quantitative performance criteria. Similarly, IMF staff and the member country will negotiate monetary policy changes as part of the conditionality package. The underlying goals of these conditions are typically strengthening the balance-of-payments position, safeguarding or rebuilding international reserves, restoring market confidence, reducing sizeable exchange rate changes, restraining growth in domestic credit, and/or reducing inflation. For example, limits may be imposed on the increase in short-term debt owed or guaranteed by the government; this may be done in an effort to restrict the ability of a government to use short-term external financing to meet reserve targets or finance fiscal deficits. 
Another performance criterion that is frequently used is a limit on the net domestic assets of the central bank. By limiting the resources made available by the central bank to the economy, the growth of the money supply is slowed and inflation is lessened. Frequently, the country and the IMF reach agreement on the minimum level of foreign reserves that the country must hold; such a requirement reduces the country’s ability to manage its exchange rate through interventions in the foreign currency market. The performance criterion on international reserves is a key indicator of progress toward external viability. According to IMF staff, the presence of pervasive structural problems in a member’s economy and the need to ensure the sustainability of a country’s reform effort require that structural policy changes be included within the overall conditionality negotiated. These structural problems encompass a broad array of issues, including inefficient state enterprises, trade restrictions, and lack of transparency in the financial and corporate sectors. Reforms in these areas are included as part of a country’s structural benchmarks, which the country is strongly encouraged to satisfy, although the benchmarks do not have the same significance as the performance criteria. However, in certain instances, structural changes may be established in a precise quantitative manner and made part of a country’s structural performance criteria. Once the financial arrangement has been negotiated, it is presented to the IMF Executive Board for approval. The IMF Board generally accepts the recommendations of the staff, largely because the staff brings to the IMF Board only proposals that the staff believes the Board will accept. The decision to approve an arrangement depends on a judgment by the IMF staff, management, and Executive Board that the program is sufficient to overcome the country’s balance-of-payments difficulty and the country will be able to repay the IMF. 
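Quantitative performance criteria of the kind described above—a ceiling on the central bank's net domestic assets and a floor on international reserves—amount to simple threshold checks. A minimal sketch, with hypothetical targets and outturns (no actual program figures are implied):

```python
# Illustrative check of two quantitative performance criteria of the kind
# described above. All targets and outturns are hypothetical.
nda_ceiling = 120.0    # ceiling on net domestic assets of the central bank
nir_floor = 15.0       # floor on net international reserves (USD billions)

nda_actual = 118.5     # outturn at the test date
nir_actual = 16.2

criteria_met = (nda_actual <= nda_ceiling) and (nir_actual >= nir_floor)
print("quantitative performance criteria met" if criteria_met
      else "one or more criteria missed")
```

In practice, as the report explains, such test-date comparisons are only one input to the review; staff judgment and the country's broader performance also shape the recommendation to the Executive Board.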
After the country completes any prior actions and the IMF Executive Board approves the arrangement, the arrangement takes effect and the country becomes eligible for its first disbursement of IMF funds. The country is then expected to implement the policy measures agreed to under the arrangement. (See app. I for more information on the IMF’s process for establishing financial arrangements.) According to the IMF documents we reviewed, the IMF generally followed the process described previously in establishing the financial assistance arrangements with each of the six countries that we reviewed. In each case, the balance-of-payments problem was described and the conditionality program was intended to address the underlying problems of the individual countries as defined by IMF staff and country authorities. Our analysis showed that, to varying degrees, the balance-of-payments problems of the six countries we studied stemmed from concerns regarding the access of the countries’ public and private sectors to external financing. In addition, the reform programs of each country generally addressed the areas of concern identified by country and IMF officials as contributing to the balance-of-payments problems. Moreover, the type of financial arrangement each country received, the time period of the arrangement, and the total amount of financing the IMF agreed to provide were based on the IMF’s analysis of the needs and circumstances of the individual countries. In determining the potential amount of IMF assistance, the IMF also considered the country’s outstanding IMF resources in relation to its quota. Table 1 outlines the current IMF financial arrangements for the six borrower countries. (These arrangements are described in greater detail for each country in apps. II to VII.) According to our analysis, the balance-of-payments problems of the six countries we studied were due to concerns about the countries’ continued ability to obtain external financing. 
In the cases of Korea, Indonesia, and Brazil, concerns over severely diminished reserves and continued access to external financing were clearly identified as important factors in the initial set of documents that recommended the establishment of an IMF financial arrangement in these countries. In the cases of Argentina, Russia, and Uganda, concerns over continued access to external financing were not as clearly defined but were embedded within a larger set of reasons for IMF assistance, including continued support for the countries’ economic reform programs. Nonetheless, the information provided by IMF staff and country authorities was sufficient to determine that a potential balance-of-payments problem existed in each of these three countries. Our analysis also indicated that the individual IMF programs were geared toward the IMF’s specific assessment of the needs of the six countries, as shown in table 2. The purpose of the programs was to address the immediate or potential balance-of-payments problem of each country as well as the underlying factors that IMF staff and country officials identified as contributing to that problem. The fiscal, monetary, and structural objectives of all six countries’ arrangements had the goal of helping to improve medium-term economic growth and/or bolster investor confidence in order to continue to finance or reduce the balance-of-payments deficit or to build reserves. However, within the context of these general goals, the magnitudes and definitions of the performance criteria and the specifics of structural reforms differed across the countries. The financing of each package addressed the balance-of-payments problem of each country. 
In the cases of the three countries with significant losses in their reserves (Brazil, Indonesia, and Korea), the amount of the IMF financing was substantial and frontloaded, meaning that the countries were to receive much of the funding early, with the intent of providing a signal to market participants that the commitment to these countries was strong. In the three remaining countries, IMF financing was designed to be more evenly distributed throughout the duration of the program. The financing for Russia and Uganda was to be provided in relatively equal installments over the life of the program to assist in addressing the reforms agreed to under the program. Argentina’s financing was viewed as a precautionary line of credit, available only if necessary. Korea and Argentina exemplify the differences that can exist between countries’ financial arrangements with the IMF. The IMF’s approach to the financial crisis in Korea was intended to address the country’s immediate need for financing as well as the underlying causes identified by IMF staff and country authorities as contributing to the balance-of-payments difficulties. The IMF arrangement in Korea was heavily frontloaded, with the country receiving much of the agreed-to financing at the beginning of the arrangement, in order to address the country’s immediate need to replenish depleted reserves. The country faced balance-of-payments problems primarily due to significant capital outflows. Korean banks had a large amount of foreign debt, composed substantially of short-term external loans that needed frequent refinancing. As market confidence fell, the willingness of external creditors to roll over (that is, refinance) the debt declined rapidly. The attempt by the government to support the former exchange rate rapidly depleted the foreign reserves by providing creditors with the hard currency that they ultimately withdrew as short-term debt matured. 
As reserves reached precariously low levels, Korea abandoned its attempt to support the exchange rate, moved to a flexible rate, and sought IMF support. The conditions outlined in the IMF arrangement were intended to address immediate concerns as well as the underlying causes of the balance-of-payments difficulties as determined by IMF staff and Korean authorities. The immediate causes were a loss of market confidence, depleted foreign reserves, and a rapidly depreciating currency. The arrangement’s immediate goal was to restore calm in the markets and contain the inflationary impact of the currency’s depreciation by providing substantial financing and requiring a tightening of monetary policy. In terms of longer-term changes, IMF staff and Korean authorities identified weaknesses in the corporate and financial sectors as underlying causes for the difficulties. Specifically, increases in corporate bankruptcies (caused by large debt burdens and excess capacity) and nonperforming (unpaid) loans exacerbated weaknesses in the banking system. Weaknesses in the banking system included a focus on maximizing revenues (not profits) and limited experience in managing risk, combined with lax prudential supervision. As a result, under Korea’s IMF arrangement, compared to other countries’ arrangements, greater emphasis was placed on structural reforms—particularly corporate and financial restructuring. Unlike Korea’s IMF arrangement, Argentina’s arrangement addresses a potential, rather than existing, balance-of-payments problem. Although Argentina enjoyed good access to capital markets and employed a strategy to lengthen the maturity of its debt and borrow when interest rates were low, it faced an uncertain future due to deteriorating conditions in the international financial environment and the effect this likely would have on its future access to capital markets. 
To address this potential problem, Argentina and the IMF reached agreement on a precautionary program, with Argentina agreeing to access IMF resources only if external conditions made it necessary. The government and the IMF identified fiscal discipline and structural reforms (particularly in tax systems and labor markets) as two of the most crucial elements of Argentina’s program. In Argentina, the goal of maintaining fiscal discipline is to reduce the federal government deficit, stimulate domestic saving, and strengthen confidence in the continued viability of the convertibility regime, under which Argentine pesos are exchanged at a 1-to-1 rate with U.S. dollars. Reducing the amount of the government’s deficit lowers the amount of funds the government needs to borrow from domestic and external creditors, therefore freeing up resources for other uses and decreasing the government’s dependence on external borrowing. Argentina’s government is limited in its ability to print money (pesos) to finance its deficit because under its currency board arrangement, the government has agreed to exchange each Argentine peso circulating in the economy with a U.S. dollar if requested. Consistent with this, the quantitative performance criteria agreed to under the IMF arrangement emphasize fiscal issues and are intended to limit the federal government’s budget deficit and government debt levels. Monetary issues are not emphasized as strongly due to the government’s limited power to affect the money supply and interest rates. Structural reforms, such as decreasing the costs of labor and lowering taxes on production, are aimed at making the economy more competitive, with the goal of reducing the trade deficit and thus the current account deficit. The IMF’s process for monitoring conditionality is intended to respond to individual country progress in meeting required conditions. 
After the IMF Executive Board approves the arrangement, the country is expected to implement the conditions. The programs are subject to periodic reviews, at which time decisions are made on future fund disbursements. In cases where the IMF determines the country has made sufficient progress in meeting the program’s conditions, the next disbursement will be made available. The IMF Executive Board may grant waivers for nonobservance of conditions and approve access to funds for countries that do not meet all required conditions if, according to the IMF, it concludes that the deviation was minor and the country had made sufficient progress in implementing the program. However, if the IMF staff concludes that a country has not made sufficient progress in implementing policies and meeting conditions it considered essential, it may recommend that disbursements be delayed or funds withheld. In these cases, the IMF Board is generally not asked to make a negative decision; rather, the review is not completed and it is not formally brought before the Board for a decision at that time. IMF staff and Executive Directors told us that these cases are discussed with the Executive Board informally and in “country matter” sessions. The IMF’s process for monitoring the conditions included in support programs allows for program modifications, depending on a country’s individual circumstances. Modifications are usually summarized in updated program documents. The programs in each of the countries we reviewed were modified, in some cases frequently, for a variety of reasons. In some instances, modifications were made because of the effect unforeseen internal or external factors had had on the country’s ability to meet the conditions in the program. In other instances, the IMF determined the initial conditions were not feasible or realistic. As illustrated in figure 1, once the IMF Executive Board has approved a program, the country is expected to implement its conditions. 
IMF staff monitors the program continually, and the program is subject to periodic reviews by the IMF Executive Board in order to evaluate if the country’s progress in meeting the conditions under the program justifies the continuation of disbursements. In some cases, disbursements depend only on a determination by the IMF staff that the country has met prenegotiated criteria. As such, according to IMF staff, for most programs, review by the IMF Executive Board is not required prior to each quarterly disbursement. For these programs, semiannual reviews by the IMF Executive Board are the more typical approach. In these cases, IMF staff reviews whether the country has met its performance criteria quarterly and, if they have been met, a disbursement can follow without a full IMF Board review. Larger programs, such as several we studied, tend to have tighter monitoring, and reviews can be held quarterly, bimonthly, or monthly. Future disbursements are contingent on the outcome of these reviews. In order for a country to be eligible for the next disbursement, the review has to be considered “complete.” IMF staff missions to the country review the country’s progress in meeting the program’s performance criteria and other structural reforms with country officials. Progress is outlined in documents provided to the Executive Board by both country authorities and IMF staff. IMF staff appraises a country’s progress and makes a recommendation to the Executive Board. According to IMF staff, this process involves a considerable amount of judgment and allows for a number of options depending on the country’s performance and the effect of both internal and external events on that performance. If the IMF Executive Board determines that a country has made sufficient progress in meeting the program’s conditions, the next disbursement, as specified in the arrangement, will be available for release. 
However, according to IMF staff, it is fairly common for one or more of the program’s conditions to be missed, including performance criteria. When this happens, IMF staff and country officials discuss the causes behind the missed criteria and changes that may be needed in the program. According to an IMF official, if the staff concludes that the deviation is minor and self-correcting or the underlying objectives of the program can be met despite the deviation, they may recommend to the IMF Executive Board that it grant the country’s request for a waiver and be eligible for the next disbursement. However, if the staff concludes that the reform program is not on track and that the criteria were missed because the country was not sufficiently pursuing an agreed-upon policy, the staff will not recommend approval of a waiver at that time and will instead delay or suspend the completion of the country’s review. Negotiations between the two parties can continue if and until the two sides reach agreement on how to restart the existing program or initiate an entirely new program, or the borrower country requests that the program be terminated. When the staff is assured that the country is once again committed to reform (sometimes by undertaking “prior actions”), it can recommend to the Executive Board that waivers be granted for the previously unmet conditions, and the review be completed. Upon IMF Executive Board approval, the country is eligible to receive the next disbursement. The documents we reviewed demonstrated that this process was generally followed for the six countries in our study, as summarized in table 3. As previously discussed, during the review process, if the IMF determines that a country has met all of the performance criteria, the country is eligible to receive its next IMF disbursement. 
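The review logic described in the preceding paragraphs can be summarized in a simplified sketch. The two-way classification of deviations (minor versus off-track) is an assumption made here for illustration, not a formal IMF rule; in practice, as the report notes, the determination involves considerable staff judgment.

```python
# A simplified, illustrative sketch of the review logic described above.
# The categories and their mapping to outcomes are assumptions for
# illustration only, not formal IMF rules.

def review_outcome(criteria_met: bool, deviation_minor: bool) -> str:
    """Return the next step in the (simplified) review process."""
    if criteria_met:
        return "complete review; release next disbursement"
    if deviation_minor:
        return "recommend waiver; release next disbursement"
    return "delay/suspend review; renegotiate program terms"

print(review_outcome(True, False))   # all performance criteria met
print(review_outcome(False, True))   # minor, self-correcting deviation
print(review_outcome(False, False))  # program judged off track
```

Even in the third case, negotiations can continue; once the staff is satisfied that the country is again committed to reform, it can recommend waivers and completion of the review.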
If IMF staff believes that the country has satisfactorily implemented the requirements for the period under review but that all criteria were not met, it can recommend that the IMF Executive Board grant the borrower country’s request for a waiver of nonobservance of those unmet criteria. Generally, in these cases, the deviations are determined to be minor, of a technical nature, or temporary. The granting of such waivers generally happens fairly quickly, and access to the next disbursement is not delayed. In addition to a country’s progress on performance criteria, its progress toward meeting indicative targets and structural benchmarks is also considered in the review process and the decision to approve the next disbursement. For example, Argentina requested a waiver for the IMF Board review in March 1999 because its federal government deficit slightly exceeded its target. This situation was primarily due to adverse external factors. In this instance, the federal government deficit, estimated at $3.85 billion in 1998 (1.1 percent of gross domestic product (GDP)), exceeded its ceiling by about $350 million, or around 0.1 percent of GDP. According to the Argentine government, its efforts to contain expenditures could not compensate fully for the revenue shortfall. The shortfall mainly reflected the slowdown of economic activity in the second half of 1998 and its adverse effect on taxes, particularly the value-added tax. IMF staff viewed the deviation as minor and as not detracting from overall fiscal performance. Hence, they recommended the waiver be granted; in March 1999, the IMF Executive Board approved the waiver. In another example, Uganda requested a waiver for nonobservance of one quantitative performance criterion during its April 1998 IMF Board review. In this instance, the quantitative performance criterion was a limit on the net claims on the government by the banking system. 
During the review period that ended in December 1997, the Ugandan government experienced a temporary shortfall in its checking accounts with the banking system, thereby causing it to miss the performance criterion. According to IMF documents, the shortfall was due to government payments being made sooner than expected. IMF staff recommended the waiver be granted because they viewed this nonobservance as minor and of a technical nature rather than a policy violation; the IMF Executive Board approved the waiver in April 1998. The shortfall was corrected within a short period of time. During the review process, instances in which the country did not meet key quantitative or structural performance criteria may be considered significant enough to delay or suspend disbursements. According to IMF staff, a country’s record in implementing performance targets and benchmarks influences this determination. Under these circumstances, IMF staff recommends to IMF management that the review not be completed. If IMF management concurs, the staff will likely informally brief the IMF Board, but the IMF Board will not be asked to make a formal decision on the program’s continuation at that time. Depending on the situation, IMF staff may continue to work with country officials to negotiate new terms of the program so that it can be restarted or so a new program can be initiated. If country officials and IMF staff are unable to agree on terms, it is possible that the program will lapse. Indonesia’s program is an example of a situation in which disbursements were delayed several times. The Indonesian IMF program began with Executive Board approval in November 1997, with completion of the first review scheduled for mid-March. The IMF, however, delayed Indonesia’s disbursements from mid-March to early May 1998 due to the IMF staff’s determination that Indonesia had not made sufficient progress in carrying out its program. 
The first review was completed in May 1998, with Indonesia having met none of the quantitative performance criteria and only one of the required structural performance criteria. IMF staff recommended and the Executive Board granted Indonesia’s request for waivers of nonobservance of these criteria based on actions taken by the government, and disbursements resumed. At this time, the IMF moved from quarterly to monthly reviews of Indonesia’s program. Disbursements were also delayed in the process of completing several subsequent reviews. Brazil’s program is a more recent example of a delay in disbursements. The program began in November 1998, with the first disbursement occurring in early December. In January 1999, the government of Brazil was forced to devalue and then float its currency. Up until that time, Brazil’s currency was pegged to the U.S. dollar, and maintenance of the exchange rate was an objective of Brazil’s IMF program. Because Brazil received funds under two different IMF policies and drew from both sources at the same time, the first and second reviews were scheduled to occur simultaneously. Completion of this set of reviews and the second disbursement were initially scheduled to occur no later than the end of February 1999. The change in the currency regime required substantial revision to the program, thus delaying completion of the review until late March. Brazil’s program was modified to reflect new economic and exchange rate circumstances. Brazil missed one of its quantitative performance criteria (a ceiling on net domestic assets in the central bank). The Executive Board granted Brazil a waiver for the nonobservance of this performance criterion, agreed to the program modifications, and approved completion of the first and second reviews on March 30, 1999, thus opening the way for Brazil to receive the next disbursement of funds. 
Russia’s program is an example of one in which the IMF delayed disbursements and program approval, reduced the amount of the disbursement, and ultimately suspended the program. The IMF delayed four disbursements: one in June 1996, two in September and October 1996, and another in November 1997. Russia received no funds between February and May 1997, pending approval of the 1997 program, which was delayed until May 1997, based on Russia’s successful completion of prior actions. The delayed approval of the 1998 program, due to cabinet changes and difficulty in meeting the revenue package, meant that Russia received no funds between January and June 1998. The program was finally approved in June 1998, based on implementation of prior actions. In July 1998, the IMF approved additional funds for Russia but reduced the amount of the initial disbursement from $5.6 billion to $4.8 billion due to delays in getting two measures passed in the Duma (the lower house of the Russian parliament). The IMF was scheduled to release the next disbursement in September 1998, but Russia had deviated so far from the program that the IMF made no further disbursements. Ultimately, according to the IMF, it delayed disbursements because of Russia’s poor tax collections, reflecting a lack of government resolve to collect these revenues. However, throughout Russia’s program, the IMF staff expressed the view that Russia’s key senior authorities were committed to the program and should be supported; therefore, the IMF Executive Board continued to approve disbursements. In March 1999, Russia requested that the program be terminated. In April 1999, IMF staff and Russian authorities announced they had reached agreement on an economic program that management hoped to be able to recommend to the IMF Executive Board in support of a new arrangement. As of June 16, 1999, the IMF Board had not approved the new arrangement. 
Modifications to a borrower country’s program are usually based on an agreement between the IMF and country officials summarized in updated program documents. In these cases, such agreements outline modified performance criteria, indicative targets, and benchmarks. IMF and country officials may modify conditions contained in borrower country programs for a variety of reasons, depending on individual country circumstances. Two reasons for modifications of programs are (1) the effect of unanticipated internal and external factors on the country’s ability to fulfill the required conditions and (2) the determination that the initial conditions were not realistic or feasible. In many instances, there is overlap between these two reasons. Unanticipated internal factors generally reflect events over which the government had less control than it had hoped. Examples include the government’s inability to enact required legislation and other political turmoil. Unforeseen external factors are generally changes in the global economic environment that affect the ability of a borrower country to fulfill the macroeconomic conditions of its program. Examples include a decline in investor confidence or capital flows, a decrease in demand for or price of primary exports, default by a major debtor, a recession or other economic problems in another country to which one’s economy is closely tied, and natural disasters such as droughts and floods. Unrealistic or unfeasible conditions can result when a country’s problem is misdiagnosed or when the impact of certain conditions is different from what was expected. Developments in the early stages of Indonesia’s current program are an example of an instance in which unanticipated internal events made it difficult for Indonesia to fulfill the conditions it had agreed to. 
These events included (1) circumvention of government decrees to dismantle cartels and open up markets, (2) the government’s consideration of a currency board (which was not part of the program), (3) social unrest, and (4) the resignation of the president. Indonesia experienced a significant loss of investor confidence that resulted in a run on the banks, the reduction of foreign credit lines, and a continuing depreciation of the currency. The IMF and Indonesia revised the economic program a number of times before the situation stabilized. Brazil is another example in which unanticipated internal events resulted in program revisions. The maintenance of the exchange rate regime was an objective of the country’s IMF program. Brazil turned to the IMF for assistance in September 1998, when its currency came under pressure as a result of the Russian crisis, and it experienced a significant loss of reserves. This reserve loss decelerated after the negotiations began, but, according to Brazilian officials, Brazil’s currency came under additional pressure for a variety of reasons after its IMF program had started. These reasons included three internal setbacks that were out of the government’s control: the defeat in Brazil’s congress of two tax measures deemed crucial to the fiscal adjustment program and the reluctance of a number of Brazilian state governors to fulfill their financial obligations to the government. To try to stem the additional loss of reserves, the Brazilian government found it necessary to devalue and then float the currency. The IMF program was then revised to reflect the new economic situation and currency regime. In Korea, a significant external factor that limited its macroeconomic performance, in the view of the IMF, was the continued Japanese recession. According to an IMF assessment, the weakening of the Japanese yen affected Korea’s export competitiveness by making Korea’s exports more expensive in comparison with Japanese exports. 
In addition, it was a contributing factor in worsening and lengthening Korea’s own recession. Reassessment of initial conditions can take place because these conditions are later determined to be unfeasible or unrealistic due to economic factors that were not well known at the time. For example, Treasury and IMF officials told us the IMF projections for Korea were overly optimistic at the beginning of the program. These estimates were based on Korea’s past strong growth and did not accurately project the “rolling financial crisis” throughout Asia. Also, the true state of Korea’s financial sector was not clear when Korea’s initial program was designed. Part of Korea’s agreement with the IMF was to improve transparency (openness) in its financial reporting, but as greater information became available, investor confidence dropped when the market learned more about the level of usable international reserves, corporate debt, and banks’ nonperforming loans. Apart from waivers and reviews, quantitative performance criteria and indicative targets can be changed by means of “adjusters” that are included in some country programs. Adjusters are prenegotiated to account for specific actions and assumptions about economic and financial movements. We found that there were basically two types of adjusters in the agreements we reviewed: adjusters due to unexpected external events that temporarily affect a key variable and adjusters due to in-country policy changes that affect a key variable or the measurement of that variable. The first type of adjuster automatically changes the level of a quantitative performance criterion when there are unexpected changes—generally outside of the country’s control—to one or more key variables. The rationale is that occasionally countries may fail to reach a particular quantitative performance criterion due to fluctuations in economic conditions outside their control and that temporary changes in key variables should not derail an IMF agreement. 
Also, some adjusters are designed to take into account the effect of positive as well as negative external developments on the quantitative performance criteria. For example, Uganda’s program had a quantitative performance criterion that set a minimum level for net international reserves. This minimum level was based on an assumed level of inflows of funds from bilateral and multilateral lending agencies. An adjuster was added to the quantitative performance criterion in order to adjust the required minimum level upward (or downward) in the event that creditors provided more (or less) debt relief than was expected. The second type of adjuster automatically changes the level of a quantitative performance criterion when policymakers choose to make changes in their monetary or fiscal policy instruments in a manner that would either directly or indirectly affect the target variables. For example, an IMF official noted that a common performance criterion in programs is a maximum permissible level of net domestic assets of the central bank, usually included as part of a strategy to target the growth of the money supply. However, other policy decisions can affect the level of the money supply. For instance, decreases in the required reserve ratio (the proportion of the total value of deposits that a commercial bank must keep either in its vault or in an account at the central bank) may increase commercial bank liquidity and the money supply. Thus, frequently the quantitative performance criteria include an adjuster that automatically decreases the performance criterion for the net domestic assets of the central bank when the required reserve ratio is reduced to offset potential increases in the money supply. This adjuster is intended to prevent policy changes from compromising the achievement of overall program objectives, such as price stability or low inflation. 
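The mechanics of both types of adjusters can be illustrated with a simple arithmetic sketch. The figures and function names below are hypothetical and are not drawn from any actual country program; the sketch only shows how a prenegotiated formula would move a reserve floor or an asset ceiling in step with the triggering variable.

```python
def adjusted_nir_floor(baseline_floor, assumed_inflows, actual_inflows):
    """First type of adjuster: the net-international-reserves floor moves
    up (or down) by the amount that external inflows, such as debt relief,
    exceed (or fall short of) the level assumed when the program was set."""
    return baseline_floor + (actual_inflows - assumed_inflows)

def adjusted_nda_ceiling(baseline_ceiling, deposits, old_ratio, new_ratio):
    """Second type of adjuster: when the required reserve ratio is cut,
    the ceiling on net domestic assets of the central bank is lowered by
    the commercial-bank liquidity the cut releases, offsetting the
    potential increase in the money supply."""
    released_liquidity = deposits * (old_ratio - new_ratio)
    return baseline_ceiling - released_liquidity

# Hypothetical figures, in billions of local currency units:
# creditors provide 0.3 more in debt relief than assumed -> floor rises
print(round(adjusted_nir_floor(2.0, 0.5, 0.8), 2))             # 2.3
# reserve ratio cut from 10 to 8 percent on 50.0 of deposits -> ceiling falls
print(round(adjusted_nda_ceiling(10.0, 50.0, 0.10, 0.08), 2))  # 9.0
```

In an actual arrangement, the adjustment formula, any cap on its size, and the variables it references would all be spelled out in the negotiated program documents.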
Our objectives were to (1) describe how the IMF establishes financial arrangements with borrower countries and the types of conditions set under these programs and assess how this process was used for six borrower countries; and (2) describe how the IMF monitors countries’ performance and assess how this process was used for six borrower countries, detailing the conditions met and not met, the reasons why conditions were not met, and the actions the IMF took in response. To meet our objectives, we obtained access to IMF officials and documents (public and nonpublic) through the Department of the Treasury and through the staff of the U.S. member of the IMF Board of Executive Directors. These documents describe the IMF’s background, policies, and practices. We reviewed borrower country documents outlining IMF arrangements and conditionality, including letters of intent, and documents presented to the IMF Executive Board, such as staff reports on arrangements. We also reviewed several IMF assessments of its operations, including reviews of ESAF and the IMF’s response to the Asian financial crisis. We discussed the IMF’s process for establishing and monitoring the conditions of its financial arrangements with officials of the IMF, U.S. government agencies, and borrower governments. To obtain additional information from in-country officials, in February 1999, we requested access to Department of State cables related to the most current IMF arrangement and economic and financial conditions in each of the six countries. According to State, it identified and reviewed over 550 cables that were determined to be responsive to our request. Due to the volume of the cables and the limited time in which to review them, State was unable to provide timely access for us to analyze the content of many of these cables and meet the legislatively required reporting date. We also obtained information from nongovernmental and academic organizations. 
We did not evaluate the appropriateness or effectiveness of the IMF’s terms and conditions. We reviewed the most recent IMF financial arrangements for the following six borrower countries: Argentina, Brazil, Indonesia, Republic of Korea (Korea), the Russian Federation (Russia), and Uganda. We selected these countries because they are geographically diverse, represent a mix of borrowers that were having actual or potential balance-of-payments difficulties at the time they requested IMF financial assistance, and have varying histories with the IMF. Several of these countries were in the midst of a financial crisis at the time they requested assistance. Three countries—Argentina, Russia, and Uganda—had successive IMF financial arrangements, whereas two other countries—Indonesia and Korea—had not had IMF financial arrangements for about 10 years before their most current arrangements. The information contained in this report is based on the implementation of countries’ programs from their inception through April 1999, unless otherwise noted. We conducted our review in Washington, D.C., between November 1998 and April 1999 in accordance with generally accepted government auditing standards. We recognize that the IMF’s actions have been subject to debate and criticism. An evaluation of these criticisms is clearly outside the scope of this report. We identify some of these criticisms in appendix VIII. We requested comments on a draft of this report from the Under Secretary (International) of the Department of the Treasury and the Managing Director of the International Monetary Fund. The Treasury provided written comments on a draft of this report, which are reprinted in appendix IX. These comments characterized the report as balanced and informative. The Treasury did note its concern that our discussion of flexibility in monitoring and implementing IMF programs could be misunderstood. 
The Treasury commented that while the IMF’s process does incorporate flexibility and latitude, “there is a fundamental link between program implementation and program support.” We agree that IMF’s process is designed to allow adjustment to a country’s program in appropriate cases, taking into account changing circumstances. We provide many examples of such adjustments in our description of the arrangements for six borrower countries. Also, in response to the Treasury’s concern, we added clarifying language to the Results in Brief to note that the resumption of IMF disbursements following a delay depends on IMF judgment that there has been satisfactory progress in meeting key conditions. For a full discussion of the process, see appendix I of this report. Both the IMF and the Treasury provided technical and clarifying comments, which we incorporated where appropriate. We also asked responsible Department of State officials to review the accuracy of the in-country information in the draft. They provided technical and clarifying comments, which we have incorporated where appropriate. We are sending copies of this report to Senator Connie Mack, Chairman, Representative Jim Saxton, Vice Chairman, and Senator Charles Robb and Representative Fortney Pete Stark, Ranking Minority Members, Joint Economic Committee; Senator William Roth, Chairman, and Senator Daniel Moynihan, Ranking Minority Member, Senate Committee on Finance; Senator Phil Gramm, Chairman, and Senator Paul Sarbanes, Ranking Minority Member, Senate Committee on Banking, Housing, and Urban Affairs; and Representative Benjamin Gilman, Chairman, and Representative Sam Gejdenson, Ranking Minority Member, House Committee on International Relations. 
We are also sending copies of this report to the Honorable Robert Rubin, the Secretary of the Treasury; the Honorable Madeleine Albright, the Secretary of State; the Honorable Jacob Lew, Director, Office of Management and Budget; and the Honorable Michel Camdessus, Managing Director, IMF. Copies will be made available to others upon request. This report was prepared under the direction of Susan S. Westin, Associate Director, Financial Institutions and Markets Issues, and Harold J. Johnson, Jr., Associate Director, International Relations and Trade Issues. Please contact either Ms. Westin at (202) 512-8678 or Mr. Johnson at (202) 512-4128 if you or your staff have any questions about this report. Other major contributors are acknowledged in appendix X. The process that the International Monetary Fund (IMF) generally uses to establish and monitor financial assistance arrangements is intended to be flexible and applied on a case-by-case basis to address the specific balance-of-payments problems of member countries. The IMF staff and the member country begin the process by assessing the country’s overall economy, balance-of-payments position, ability to finance any balance-of-payments deficit, and potential need for IMF financial assistance. If the country decides to seek IMF financing, the IMF staff and the country negotiate an arrangement that describes the amount of financing, the type of financing instrument, and the schedule for review. The IMF staff and the country also negotiate conditions—the policy measures that the country intends to fulfill in order to continue to access IMF funds. After the arrangement is negotiated, the IMF Executive Board discusses and approves it. IMF staff conduct periodic reviews to monitor the country’s progress in meeting the IMF program conditions. The frequency of the reviews depends on the type of financial arrangement that the country is under and the nature of its problem. 
The IMF uses both data and judgment in assessing the extent of the country’s progress in meeting program conditions. If it determines that the country is on track in implementing its program conditions, additional allotments of funds can be made available. In cases where the IMF determines deviations from the program are significant, it can delay or withhold funding unless and until, in its judgment, the country has made further progress. When a member country faces an actual or potential balance-of-payments problem, it may consult with the IMF to analyze information on the economy and discuss various methods of managing the problem. These discussions may lead the country to request IMF financial assistance in order to alleviate the imbalance. If the IMF and the country do not reach final agreement on a financial assistance arrangement, the country may seek other means to address the difficulty. Discussions can occur at any time, including during the country’s annual “Article IV” consultation with the IMF or during informal consultations as requested by the member. To aid in the IMF’s assessment of a country’s overall economic situation and to determine the magnitude of potential financial assistance required by the country, IMF staff evaluates the balance-of-payments problem and determines the financial support measures that would assist in correcting the imbalance. The IMF staff’s review of the state of a member’s economy is an iterative process and is based on country-provided data, assumptions about key macroeconomic variables, and judgment by the IMF staff and country officials. 
To do this, the IMF staff examines the following four related sectoral statistical systems over the medium term of 3 to 5 years with the assumption that the government will follow its stated policies: (1) national income and product accounts for gross domestic product (GDP), (2) government financial accounts for the fiscal sector, (3) consolidated banking system accounts for the monetary sector, and (4) external accounts for the balance-of-payments position. In order to analyze these four sectors, an IMF team (IMF mission) travels to the country to review the situation within the country. The team begins the analysis by reviewing the data previously collected from country officials for the most recent Article IV consultation as well as other requested information provided by the country. The information includes data on the country’s balance of payments; fiscal variables, such as government expenditures and receipts; and monetary variables, such as monetary reserves and bank deposits, stock of currency, and interest rates. In addition, it includes country authorities’ projections for areas such as real GDP growth and inflation; real sector indicators, such as employment levels, manufacturing, production, agriculture, and service sectors; budget plans for government expenditures; and subsidies for public enterprises. As part of the process of analyzing a country’s economy and determining the balance-of-payments position, the IMF staff verifies the country-provided information, searching for both consistency and contradictions in the information. According to an IMF official, data inconsistencies may be discovered in a variety of ways. For example, if IMF staff believed that the country-provided trade data were inaccurate, it would cross-check that country’s trade data with similar data of a neighboring country with which it trades in order to verify whether the information was accurate. 
In other cases, if the data suggested that the manufacturing level in a country had increased and at the same time indicated that electricity usage had decreased, the staff would be alerted to the inconsistency and would seek to verify the data. In such instances, the IMF team would work with government employees in ministries or agencies to calculate and verify the information. According to an IMF official, this type of analysis is, by necessity, undertaken on a case-by-case basis, and it would be difficult to develop a universal set of standards for verifying such information. For this work, the IMF relies on its mission chiefs, who have acquired knowledge and experience in each country to assist in verifying the data. According to an IMF official, determining the balance-of-payments position is central to both the analysis of the economy and the determination about whether the country would be eligible for IMF financial support. The concept of a balance-of-payments need is broadly defined in the IMF’s Articles of Agreement and includes (1) the country’s overall balance of payments, (2) the country’s foreign reserve position, and (3) developments in its reserve position. IMF documents state that these three elements are regarded as separate, and a member’s representation of a balance-of-payments need can be based on any one of them. The first element—the country’s overall balance of payments—represents the economy’s external financing requirement and equals the sum of a member’s current and capital account balances. The current account primarily includes exports and imports of goods and services. The capital account provides summary data on the changes in net foreign assets of domestic residents arising from such transactions as external borrowing or repayments (borrowing from or repaying foreign sources), foreign direct investment, portfolio investments (both equity shares and bonds), and short-term capital movements. 
The second element—the country’s reserve position—refers to the amount of resources (convertible currency, special drawing rights, and gold) a country has to support its imports and external debt payments. The reserves are under the control of the monetary authority. The third element—developments in the reserve position—has a very narrow application and is intended to ensure that members of the IMF whose currency is a reserve currency (such as the United States) would be able to use IMF resources when requested, despite the absence of a need as outlined in the first two elements. According to an IMF official, determining an actual balance-of-payments need is easier than projecting a potential balance-of-payments need. This is because the process of assessing an economy is subject to many assumptions and uncertainties, including factors within and outside of the country’s control. For example, in the case of Russia, the IMF documents establishing the 1996 extended arrangement do not explicitly describe the underlying balance-of-payments need. However, the IMF documents do present a clear case for the role that IMF funding was to play in catalyzing debt rescheduling and encouraging the inflow of private capital to avoid a potential balance-of-payments problem. In 1996, Russia had a basic weakness in its external accounts due in part to short-term capital outflows and an inadequate level of reserves. Furthermore, many debt service obligations were expected to occur between 1996 and 2000, adding more stress to Russia’s external accounts. An IMF financial arrangement in 1996 was seen as critical for Russia to avoid a potential balance-of-payments problem. The IMF arrangement helped Russia obtain debt rescheduling to reduce the future burden on the federal budget and improve Russia’s access to private capital markets. 
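The first of the three elements described above reduces to simple accounting: the overall balance is the sum of the current and capital account balances, and a negative sum is an external financing gap that reserves or new financing must cover. The following sketch uses made-up figures for a hypothetical country; it is not IMF methodology, only the arithmetic the text describes.

```python
def overall_balance(current_account, capital_account):
    """Overall balance of payments: the economy's external financing
    requirement, equal to the sum of the current and capital account
    balances (a negative value indicates a financing gap)."""
    return current_account + capital_account

# Hypothetical country, in billions of U.S. dollars:
current = -6.0   # deficit on trade in goods and services
capital = 4.5    # net external borrowing and investment inflows
gap = overall_balance(current, capital)
print(gap)  # -1.5: reserves must fall, or 1.5 in financing must be found
```

The second element, by contrast, is a level rather than a flow: whether the stock of convertible currency, special drawing rights, and gold is adequate to support imports and external debt payments.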
Analyzing the nature, source, and severity of any existing or potential balance-of-payments problem involves assessing data about the balance-of-payments deficit and the country’s ability to finance it. To determine the nature of the imbalance, the IMF determines whether the problem is short term or longer term. For example, a short-term problem could be a cyclical or seasonal imbalance caused by the falling price of a primary export. A longer-term imbalance might be caused by underlying or structural weaknesses in the economy, such as an unsustainable government budget deficit. The IMF staff also determines to what extent the reasons for the imbalance are within the government’s control, along with the dimensions and urgency of the problem, including the availability of financing. After the balance-of-payments gap analysis is complete and if the country decides to seek IMF financial assistance, the country officials and IMF staff begin to discuss IMF financing as well as the conditions for the country program. However, according to the IMF, in order to adapt programs to individual country circumstances, it has no inflexible set of operational rules for establishing a country’s program. Nonetheless, the Deputy Managing Director of the IMF said that staff enter into negotiations with detailed instructions, agreed upon within the IMF staff offices and then by IMF management. This IMF official stated that negotiations are often long and sometimes contentious, involving several rounds of discussions. The disagreements tend to be over difficult issues, for example, whether the budget needs to be tightened, the inflation rate should be reduced less rapidly, or the agreed-upon balance-of-payments deficit can be larger. To address the balance-of-payments problem, typically the IMF uses economic models to project the potential impact of a variety of adjustment measures to develop several scenarios of possible program elements. 
Based on these scenarios, the IMF staff and the country negotiate what they view as the appropriate mix of fiscal and monetary adjustment, structural reforms, and financing required to achieve their overall goals; these goals can include an increase in economic growth or in investor confidence. For example, for the external sector, two independent projections of imports need to be made and reconciled. The first is based on the demand for imports, derived from information including the projected level of output and relative prices, and the second is based on the capacity to import, derived from the target change in international reserves and projections of other components of the balance of payments. For example, if the demand for imports is greater than the country’s capacity to import, the basic options for adjustment may include the following: (1) seek additional foreign exchange, (2) lower the initial target for net international reserves, (3) reduce the initial projection for output to lower the demand for imports, or (4) some combination of the above. Similar iterative analyses are also carried out for the fiscal and monetary sectors. The IMF staff and the country negotiate an arrangement that describes (1) the amount of financing expected to be provided by various sources and the amount that may be requested from the IMF; (2) the instruments under which the IMF resources could be provided, for example, Stand-by Arrangement (SBA) or Extended Fund Facility (EFF); and (3) the potential schedule for reviewing a country’s performance and disbursing funds. The IMF has many instruments through which it provides financing to member countries. Table I.1 illustrates IMF instruments used by the six IMF member countries discussed in this report. 
Extended Fund Facility (EFF): Longer-term balance-of-payments assistance for (1) deficits arising from structural maladjustments in production and trade and widespread cost and price distortions and (2) an economy characterized by slow growth and an inherently weak balance-of-payments position that prevents pursuit of an active development policy. Can provide larger total amounts of assistance. Periodic reviews provided that appropriate monitoring of macroeconomic developments would be ensured, normally through quarterly performance criteria. Staff prepare an analysis and assessment of the performance under programs. Periodic reviews, typically with quarterly performance criteria. Country provides annual reports on progress made and on policies and measures to be followed, including any modifications.

Supplemental Reserve Facility (SRF): Exceptional balance-of-payments problems owing to a large, short-term financing need resulting from a sudden and disruptive loss of market confidence, reflected in pressure on the capital account and reserves. Likely to be used where the magnitude of outflows may threaten the international monetary system. Duration of 1 year; 2 or more drawings; repayment within 1 to 1-1/2 years from the date of disbursement, but may be extended another year; surcharges apply.

Compensatory and Contingency Financing Facility (CCFF): Helps members deal with temporary current account shocks that are largely beyond their control. A “compensatory” element is available in case of shortfalls in export earnings or excesses in cereal import costs. A “contingency” element helps members with existing arrangements keep their programs on track when faced with adverse current account shocks. Significant limits on amounts; defined methodology for determining whether the CCFF is needed and, if so, the type and amount. Board review at the time of request and, in the case of the contingency element, on the occasions stipulated in the underlying arrangement. Reviews done in conjunction with an SBA or extended arrangement. Disbursements linked to phasing of the existing arrangement.
For the compensatory element, disbursements normally in one installment. For the contingency element, disbursements linked to phasing of existing arrangements. Repayment is in 3-1/4 to 5 years.

Enhanced Structural Adjustment Facility (ESAF): Principal means for providing financial support (highly concessional loans) to low-income members facing protracted balance-of-payments problems. Quarterly monitoring of financial and structural benchmarks. Semiannual performance criteria are set for key quantitative and structural targets. Duration of 3 years; semiannual disbursements; repaid in 10 equal semiannual installments, beginning 5-1/2 years and ending 10 years after the date of each disbursement.

In addition, the country and the IMF staff negotiate the likely conditions to be used to assess a country’s performance under the arrangement. These conditions are generally intended to advance the country’s larger objectives—such as a reduced balance-of-payments problem, higher economic growth, and lower inflation—as well as the reform efforts undertaken to achieve those objectives. “Performance criteria” (quantitative and structural) and “prior actions” are conditions that a country is required to meet and that the IMF uses to monitor the country’s performance and determine whether it is eligible for disbursements of resources. “Benchmarks” and “indicative targets” are other measures the IMF uses to monitor a country’s progress; however, disbursements are not generally dependent on meeting them. “Quantitative performance criteria” are clearly defined numeric targets (macroeconomic indicators), such as a specified ceiling on the government’s budget deficit or on the net domestic assets of the central bank. According to IMF staff, “structural performance criteria” must be accurately and unambiguously defined so that no subjective judgment is involved in determining whether they have been met.
For example, a structural performance criterion could be that a country has to solicit bids to privatize three state-owned enterprises by a prespecified date. A prior action is a particular policy measure that is considered to be essential to the effectiveness of an adjustment program. Prior actions may be negotiated by IMF staff and country officials as part of the country’s initial arrangement or during subsequent program reviews; they generally have to be implemented before an IMF arrangement or a disbursement of funds is approved. Examples of prior actions include the issuance of a regulation and other forms of legal reform. Other measures used to assess a country’s progress include benchmarks and indicative targets. They may relate to macroeconomic variables or to specific policy commitments, such as changes in key structural areas of the economy. Benchmarks can be difficult to define and are best explained as a set of specific target measures to be accomplished by a certain date, used by the IMF to assess progress toward an overall goal. In general, benchmarks could include targeted structural changes for tax policy and administration reform, financial sector reform, or exchange system reform. For example, to achieve the overall goal of strengthening a country’s banking system, the IMF and the country may agree to a structural benchmark, such as enacting legal reforms for bankruptcy or developing a bank recapitalization plan. Indicative targets are quantitative targets set on many of the standard goals of macroeconomic policy and could include targets set on the balance of payments, the rate of inflation, or the public deficit. After the arrangement is negotiated, it has to be accepted by the IMF Managing Director before it is brought before the IMF Executive Board. According to an IMF official, the Executive Board generally accepts the recommendations of the staff, largely because the staff brings to the Executive Board proposals that the Board will accept.
Generally, the Executive Board is briefed formally or informally during the negotiation process, and board decisions are made on a consensual basis. Since negotiations with a country continue throughout the life of a program, the Executive Board will often use a meeting to send signals about what it will and will not accept in the future. After the IMF arrangement is approved by the Executive Board, the country is then expected to implement the agreed-upon conditions in the IMF program. To determine whether the program is on track and the country is eligible to receive the next disbursement of funds, the IMF staff conducts periodic reviews of the programs. The review schedule is built into the arrangement between the country and the IMF. For the reviews, a team of IMF staff and country officials assesses the program status, including the country’s overall economic conditions and performance with respect to criteria, prior actions, and benchmarks. According to the IMF, reviews are typically held on a semiannual basis, although disbursements can be made if countries achieve the quarterly performance criteria and prior actions. Some countries, however, including those suffering a financial crisis or receiving funds from the Supplemental Reserve Facility (SRF), tend to have tighter monitoring because funding tends to be heavily front-loaded and disbursed within a year. In these cases, the program reviews can be held monthly or bimonthly. SRF funding is for countries with exceptional balance-of- payments problems owing to a large, short-term financing need resulting from a sudden and disruptive loss of market confidence. The IMF staff monitors the program continuously and the program is subject to periodic reviews by the IMF Executive Board in order to evaluate if the country’s progress in meeting the conditions under the program justifies the continuation of disbursements. 
In some cases, IMF disbursements are conditioned only on the determination by IMF staff that the country has met prenegotiated quantitative criteria. According to the IMF, for most programs, review by the IMF Executive Board is not required prior to each quarterly disbursement. For these programs, semiannual reviews by the IMF Executive Board are the more typical approach. In these cases, IMF staff review quarterly whether the country has met its performance criteria and, if so, a disbursement can follow without a full IMF Board review. Larger programs tend to have tighter monitoring, and all disbursements are subject to reviews by the IMF Executive Board. In these cases, through its monitoring, the IMF staff concludes either that the country has satisfactorily implemented the program or that it has not. In the first case, the review is “completed” and the borrower country is eligible to receive an additional disbursement. In the latter case, review completion is delayed and the country is not eligible to receive a disbursement at that time. Satisfactory progress can be judged in one of two ways. If the IMF staff believes that the country has met all of the performance criteria and considers the review “complete,” the staff presents the results of the review to the Executive Board. In addition, the IMF and the country may negotiate a new or revised set of criteria and benchmarks. Upon the Executive Board’s approval, the country is eligible to receive the next disbursement of IMF funds. In other instances, the IMF staff could conclude that the country did not meet all performance criteria but that most deviations were minor and did not affect the country’s overall performance. The staff would then generally recommend to the Executive Board that a waiver be granted and the review would be completed on time.
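The possible review outcomes just described can be summarized schematically. This is only a simplified sketch of the decision process as the text presents it; the function name, the boolean inputs, and the outcome labels are illustrative assumptions, not IMF terminology.

```python
# Schematic of the program-review outcomes described above (simplified).
def review_outcome(criteria_met: bool, deviations_minor: bool) -> str:
    """Classify a periodic review the way the text describes it.

    - All performance criteria met -> review completed; the country is
      eligible for the next disbursement.
    - Criteria missed, but deviations minor -> staff recommend a waiver;
      the review is completed on time.
    - Significant deviations -> completion is delayed; corrective steps
      are negotiated before disbursements resume.
    """
    if criteria_met:
        return "complete: eligible for next disbursement"
    if deviations_minor:
        return "complete with waiver: eligible for next disbursement"
    return "delayed: disbursement withheld pending corrective measures"

print(review_outcome(criteria_met=False, deviations_minor=True))
# prints "complete with waiver: eligible for next disbursement"
```

The middle branch corresponds to the waiver case discussed above, where missed criteria are judged not to detract from overall performance.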
A country’s inability to meet a performance criterion could be due to cyclical or seasonal problems that are self-correcting; the difficulty of making economic projections (for example, if key factors such as the money supply were underestimated); unanticipated events, such as a tumultuous political environment; or an incorrect assessment of the cause of, or solution to, the problem. After the IMF Executive Board grants the waiver, the country is eligible to receive IMF funds. The IMF staff considers that a country has not made satisfactory progress when key conditions are not met and deviations are significant. In these cases, “completion” of the review and disbursements are generally delayed and are not resumed unless and until, in the IMF’s judgment, satisfactory progress has been achieved. During the delay period, country officials and IMF staff negotiate the steps necessary to complete the review and make funds available. According to IMF staff, if the country did not meet the performance criteria because it is unwilling or unable to do so, the IMF will negotiate with the authorities to determine the nature of the problem and possible corrective measures. In these instances, the IMF may request that the country demonstrate its commitment to the program by undertaking a specific prior action before it recommends that the Executive Board grant waivers for nonobservance of the unmet criteria and “complete” the review. In other cases where the country has not met key performance criteria, the IMF staff may determine that deviations are so significant that it is not possible to negotiate steps to get the program back “on track.” When this happens, the IMF staff generally concludes that it is not in a position to complete the review and notifies IMF management. If management concurs with the recommendation, staff briefs the Executive Board on the situation. The review will not be completed at that time and disbursements would be delayed.
In these cases, the IMF staff and the country may negotiate ways to restart the existing program or initiate a new program. In some cases, for example, in Russia, some deviations from the program may be significant enough that the IMF delays or withholds further disbursements for a considerable length of time, and the program lapses. Apart from waivers and reviews, quantitative performance criteria and indicative targets can be changed by means of “adjusters” that are included in some country programs. Adjusters are prenegotiated to account for specific actions and assumptions about economic and financial movements. There are two types of adjusters: (1) adjusters related to unexpected external events and (2) adjusters due to in-country policy changes. The first type of adjuster automatically changes the level of a quantitative performance criterion when there are unexpected changes—generally outside of the country’s control—to one or more key variables. For example, in Uganda’s program, an adjuster was added to the quantitative performance criterion that set a minimum level for net international reserves in the event that creditors provided more (or less) debt relief than was expected. The second type of adjuster automatically changes the level of a quantitative performance criterion when policy makers choose to make changes in their monetary or fiscal policy instruments in a manner that would either directly or indirectly affect the target variables. It is intended to prevent policy changes from compromising the achievement of overall program objectives, such as price stability or low inflation.

Argentina has undergone radical changes since 1991, when it enacted the Convertibility Law, which established the currency board arrangement. Under this system, the central bank maintains a sufficient level of U.S. currency to guarantee the convertibility of all outstanding Argentine pesos at the official exchange rate (1 peso equals 1 U.S. dollar).
The currency regime is seen as having greatly helped to reduce Argentina’s inflation from over 1,000 percent in 1990 to less than 1 percent in 1998, instill fiscal and monetary discipline, build investor confidence, and contribute to economic growth. The government also undertook major structural reforms between 1992 and 1994, including substantial privatization, deregulation, trade liberalization, and pension reform. The Argentine government described 1991-98 as a period of sustained growth interrupted by external shocks, including the Mexican financial crisis in 1995 and the Asian, Russian, and Brazilian crises in 1998. Argentina has had successive IMF programs since 1983. The previous arrangement was an IMF Stand-by Arrangement of over $900 million from April 1996 to January 1998. According to IMF staff and the Argentine government, Argentina registered a strong macroeconomic performance in 1997. The economy grew very rapidly, the unemployment rate fell, and inflation was virtually zero. The fiscal position improved as programmed, and there were no major difficulties in financing a widening of the current account deficit. The prudent borrowing strategy (preborrowing at lower interest rates, stretching out maturities) followed by the public sector, and the strengthening of the banking system achieved in recent years, allowed Argentina to weather the turbulence that affected international capital markets in 1997 without major immediate consequences for the economy. Nonetheless, Argentina and the IMF decided that an IMF financial assistance program was necessary because of risks to the economy posed by events in international financial markets. Argentina and the IMF Executive Board reached agreement on the current 3-year EFF arrangement in February 1998. This arrangement is intended to be precautionary, meaning that Argentina will draw IMF resources only if external conditions make it necessary.
The government noted that the agreement is of great significance because the IMF’s review of Argentina’s accounts provides information to investors on the country’s economic progress. The arrangement of about $2.8 billion is intended to support the government’s medium-term economic reform program for 1998-2000 and to help maintain investor confidence. When Argentina negotiated this arrangement, the country did not have an actual balance-of-payments problem. The country’s current account deficit had been increasing primarily due to its widening trade imbalance, with rising imports outpacing exports, but was funded with external capital. Foreign direct investment covered over 50 percent of the deficit in 1997 and was estimated to cover about 40 percent of the deficit in 1999. The IMF expressed concern about the sizable current account deficits expected over the next few years—although these deficits reflect to a large extent the growth of productive investment—and the economy’s vulnerability to changes in external market conditions. The policies implemented to meet these targets were intended to promote sustained growth in production and employment, increase public saving, and reduce the vulnerability of the economy to disturbances on international financial markets. As of May 31, 1999, Argentina had not drawn resources under the current EFF arrangement. The current EFF arrangement includes quantitative conditions and structural benchmarks for the period 1998-2000. Consistent with the IMF’s approach, the government and the IMF negotiated the performance criteria and structural benchmarks for the first year of the EFF; criteria and benchmarks for subsequent years have been negotiated on an annual basis. As agreed to for 1998, Argentina’s program with the IMF contained quantitative performance criteria that limited the federal government budget deficit, central bank assets, and government debt. 
The structural benchmarks for Argentina included reforms in the labor market, tax system, public sector budgeting and operations, health system, and judicial system as well as the completion of the privatization program. The government and the IMF identified fiscal equilibrium and structural reform (particularly in tax and labor) as two of the most crucial elements of the program. The Argentine government is on record as strongly supporting the conditions under the IMF program because they reflect the government’s own priorities. According to the Argentine government, disagreements between the IMF staff and Argentina officials have been minor. One area of disagreement has been the significance of the current account deficit. While IMF staff is concerned about Argentina’s increasing current account deficit, some government economic officials are less so. They contend that the current account deficit should not be overemphasized since it is due, in part, to investment-led growth and since external investors have been willing to finance it, thus signaling their confidence in Argentina’s economy. As shown in table II.2, three of the four quantitative performance criteria focused on Argentina’s fiscal policy. The fourth—limits on central bank assets—targeted Argentina’s monetary policy. The goals of the fiscal deficit criteria were to reduce the overall federal government deficit while increasing spending in social areas, stimulate domestic saving, and strengthen confidence in the continued viability of the currency convertibility regime. The $3.5 billion deficit represents about 1 percent of GDP, which was estimated at about $340 billion for 1998. The monetary program was intended to strengthen confidence in the currency board and the banking system by maintaining a sound financial system and providing for an adequate cushion of liquidity that could compensate for the limited role of the central bank as a lender of last resort. 
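As a rough illustration of how a quantitative performance criterion works as a numeric ceiling, the sketch below uses the 1998 deficit ceiling ($3.5 billion) and the approximate GDP ($340 billion) given above. The compliance check and the sample observed value are hypothetical, not drawn from the program.

```python
# Sketch of a quantitative performance criterion as a numeric ceiling.
# The ceiling and GDP figures come from the text; the observed value
# used in the check below is a made-up illustration.
GDP_1998 = 340.0        # approximate 1998 GDP, billions of U.S. dollars
DEFICIT_CEILING = 3.5   # 1998 federal deficit ceiling, billions

def within_ceiling(observed: float, ceiling: float) -> bool:
    """A quantitative criterion is met when the observed value does not
    exceed its prenegotiated ceiling."""
    return observed <= ceiling

deficit_share = DEFICIT_CEILING / GDP_1998 * 100
print(f"deficit ceiling = {deficit_share:.1f}% of GDP")   # about 1 percent
print(within_ceiling(observed=3.2, ceiling=DEFICIT_CEILING))
```

The computed share, roughly 1 percent of GDP, matches the figure reported in the text.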
Under the current EFF, the Argentine government agreed to meet the following structural benchmarks by the end of 1998:

- Submit to the Argentine congress a tax reform program before mid-1998 for approval before the end of 1998. Tax reforms were intended to improve the efficiency and equity of the tax system and promote the competitiveness of the economy. The reforms were aimed at contributing to a reduction in labor costs by cutting employers’ payroll contributions, diminishing distortions in corporate and individual taxes, broadening the income tax base, applying the value-added tax to products not currently taxed, introducing a single tax to replace the value-added and income taxes due from small businesses, strengthening tax auditing procedures, and modifying customs codes in line with MERCOSUR (the Southern Common Market, or customs union) and World Trade Organization norms. The changes were generally focused on decreasing taxes on production and increasing taxes on consumption.
- Implement the first stages of a program to strengthen tax administration by revising penalties and interest on past due tax obligations to help normalize relations between taxpayers and tax authorities, privatizing collection of past due taxes, and introducing pre-shipment inspection of imports for the short term.
- Implement labor reforms before mid-1998—a precondition for the conclusion of the first review. Increased flexibility in the labor market was intended to decrease unemployment, strengthen economic competitiveness, and ultimately ensure the viability of the currency convertibility regime.
The reforms were to significantly reduce the costs of dismissing employees; eliminate statutes that impede the renegotiation of labor contracts (expired labor contracts remain legally binding if there is no agreement between employers and unions to renegotiate them) and that inhibit entry into certain professions; eliminate certain temporary labor contracts; decentralize labor negotiations; and promote increased competition among union-run health care organizations.

- Reform budgeting operations. The government was to submit a multiple-year budget for income, expenditure, and results covering a 3-year period, with the goal of providing transparency, efficiency, and control for budgetary administration.
- Take measures to promote efficiency in public spending, especially in education, public health services, and the social security and social assistance systems, and improve the quality of public sector administration. The measures were to include governance rules for public employees outlining obligations and increasing penalties for corruption.
- Conclude reforms to the public social security system to help increase the efficiency of expenditures.
- Continue reforms to the health insurance system for retirees and health care organizations (public and private), as agreed with the World Bank, in order to strengthen health care, contain the demand for high-cost hospital care, and promote efficiency in health services.
- Take steps to speed up rulings in court cases involving taxes and financial guarantees and collateral.
- Grant leases for airports, telecommunications frequencies, and power stations.
- Draft proposals to privatize Banco de la Nación, the country’s largest bank.
- Revise legislation to help financial institutions more quickly execute guarantees and collateral, and to develop a legal and supervisory framework for financial derivatives.
- Approve new antitrust laws.

The IMF Executive Board completed the first review of Argentina’s program in September 1998, as scheduled.
It found that all applicable quantitative performance criteria were met in March and June 1998 and that substantial progress had been made in the implementation of structural reforms, with the notable exception of labor market reforms. Argentina’s congress passed some of the intended labor market reforms; it passed legislation lowering dismissal costs but did not pass legislation intended to make the collective bargaining process more flexible. The IMF Board urged the Argentine authorities to take further steps in regard to labor market reform, noting that the reform recently approved by Argentina’s congress fell short of what would be necessary to enhance labor market flexibility and reduce labor costs adequately. The IMF Board also expressed concern over the possible adverse impact of the Russian debt crisis on Argentina’s access to external financing and urged the authorities to maintain firm macroeconomic policy to help promote a rapid improvement in market confidence. According to IMF and Argentine documents for the second review, completed as scheduled in March 1999, Argentina met all but one of its quantitative performance criteria (for which a waiver was granted) and made progress on structural reforms. The waiver was requested because the federal government deficit, estimated at $3.85 billion in 1998 (1.1 percent of GDP), exceeded its ceiling by about $350 million, or around 0.1 percent of GDP. However, IMF staff viewed the deviation as minor, primarily due to adverse external factors, and as not detracting from overall fiscal performance. The government noted that, significantly, the structural deficit for 1998 was smaller than that of 1997. The IMF Executive Board granted the waiver. According to the Argentine government, its efforts to contain expenditures could not compensate fully for the revenue shortfall. 
The shortfall mainly reflected the slowdown of economic activity in the second half of 1998 and its adverse effect on taxes, particularly the value-added tax. The government noted that debt limits were met in the context of tighter conditions in international capital markets. A larger than anticipated share of the deficit was financed using public sector deposits and receipts from asset sales. Argentina made progress in several areas of structural reform, according to IMF and country documents. The government implemented most of the tax reforms but was only able to pass some of the intended labor reforms. The government implemented tax reforms that, among other things, expanded the bases of the income and value-added taxes and improved tax administration by enhancing tax audit procedures and hastening the resolution of court cases involving tax enforcement. Regarding labor reforms, Argentina’s congress approved a law to reduce dismissal costs and eliminate most forms of temporary labor contracts with decreased social security contributions. Reforms regarding collective bargaining were not passed. While IMF staff stressed the importance of making Argentina’s labor market more flexible—particularly given the uncertainty about continued access to foreign financing and trade levels—they told us that they do not expect the government to complete the remaining labor reforms before the fall 1999 elections. As such, according to IMF staff, the emphasis on labor reforms is likely to be eased. Argentina continued making reforms to budgeting operations, public sector administration, and the public hospital system. Restructuring of the health-care system continued, as agreed with the World Bank. The government completed leasing arrangements for airports and continued working on leasing arrangements for telecommunications frequencies, which were delayed by judicial challenges, and power stations. It concluded reforms to the public social security system. 
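The waiver arithmetic reported for the second review can be verified from the figures given above: a $3.85 billion deficit against the $3.5 billion ceiling, measured against 1998 GDP of roughly $340 billion. The script below simply rechecks those numbers.

```python
# Rechecking the waiver arithmetic reported for Argentina's second review.
# All figures are taken from the text (billions of U.S. dollars).
ceiling = 3.5     # 1998 federal deficit ceiling
actual = 3.85     # estimated 1998 deficit
gdp = 340.0       # approximate 1998 GDP

excess = actual - ceiling              # overrun relative to the ceiling
excess_pct_gdp = excess / gdp * 100    # overrun as a share of GDP
deficit_pct_gdp = actual / gdp * 100   # deficit as a share of GDP

print(f"overrun: ${excess:.2f} billion = {excess_pct_gdp:.1f}% of GDP")
print(f"deficit: {deficit_pct_gdp:.1f}% of GDP")
# prints an overrun of $0.35 billion (0.1% of GDP) and a deficit of 1.1% of GDP
```

The results match the report: an overrun of about $350 million, around 0.1 percent of GDP, against a total deficit of 1.1 percent of GDP.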
In January 1999, the government outlined its proposed objectives, criteria, and benchmarks for the second year of the arrangement. The government intends to continue to focus its economic policies on promoting sustainable growth in output and employment, addressing priority social needs, and maintaining low inflation and a viable external position. The government noted that in light of the presidential election scheduled for October 1999 and the uncertainty of the adverse international environment, it recognized the critical importance of maintaining disciplined and restrained macroeconomic policies, further improving public finances, strengthening the financial system, enhancing competitiveness, and deepening structural reforms. In March 1999, Argentina and the IMF reached agreement on the quantitative performance criteria and structural benchmarks for monitoring the country’s progress during 1999, as shown in table II.3. The estimated cumulative federal government deficit between January 1999 and December 1999 was increased from $2.65 billion to $2.95 billion (0.8 percent of GDP) to reflect the criterion missed in the previous quarter. The ceiling on the noninterest expenditures of the federal government was changed from an indicative target to a quantitative performance criterion because, according to IMF staff, there was concern about the sufficiency of tax revenues. Many of the new structural benchmarks continue ongoing reforms. By the third review (August 1999), the Argentine government is to present a proposal to reform the system of tax-revenue sharing with the provinces. In light of the fiscal deficit, IMF staff stressed the importance of achieving this reform. The reform of the tax-sharing arrangement between the government and the provinces is intended to strengthen the provinces’ own revenue-raising capacity and design a more equitable, transparent, and flexible system of intergovernmental transfers. The government is also to lease telecommunication frequencies.
In addition, by the third review, the government is to:

- implement new monitoring systems for the external debt and the finances of provincial administrations;
- implement the enabling regulations for the labor statute for small- and medium-size firms;
- submit to the Argentine congress a proposal to transform the Banco de la Nación into a state-owned corporation (this benchmark represents a change from the government’s original intention to privatize the bank; when it appeared that congress would not approve the privatization, the authorities decided to propose transforming the bank into a state-owned corporation that could include private capital and management, be listed on the stock exchange, and thus be subject to increased public disclosure requirements);
- submit to the Argentine congress a proposal to further reform social security; and
- complete the sale of the first package of shares of the National Mortgage Bank.

Also, by August 1999, the Argentine congress is to approve the proposed changes to the central bank charter and the financial entities law, which are intended to improve banking supervision and risk assessment of financial institutions, and the fiscal responsibility law, which sets limits on government indebtedness, constrains the growth of public expenditure, and establishes a fiscal stabilization fund to smooth out the impact of cyclical fluctuations or external shocks on tax revenue. The government intends to improve the efficiency of social spending in education and social protection programs. By the fourth review (Feb. 2000), the Argentine government is to implement the tax administration program aimed at, among other things, shifting to a new electronic tax filing and collection system; strengthening auditing procedures; and amending the customs code, after congressional approval, to incorporate MERCOSUR (the Southern Common Market, or customs union) norms and new World Trade Organization valuation rules. It is also to eliminate the 3 percent import surcharge to the common external tariff.
Also by this time Argentina’s congress is to approve the social security reform and new law for Banco de la Nación. The key factors affecting Argentina’s short-term macroeconomic outlook were the need for improvements in trade and the continued availability of private-sector capital. Argentina recorded a satisfactory macroeconomic performance in 1998, in a relatively difficult international macroeconomic environment. However, the economy slowed considerably in the second half of 1998, in response to the tightening of external financing conditions in the wake of Russia’s and Brazil’s financial crises and the slowdown in export earnings. For 1998, GDP growth was estimated at about 4.2 percent, down from 7 ¼ percent in the first half of the year. Since mid-January 1999, the external macroeconomic environment (trade and investment) has deteriorated because of adverse events in Brazil. The program agreed to in March 1999 (including quantitative performance criteria for 1999) was negotiated in December 1998, consistent with the external environment at that time. Argentina and IMF officials noted that the country had weathered the turbulence in external markets well; however, given the uncertain environment, the government and the IMF agreed to reexamine the program and modify it, if needed. The third review was conducted 3 months ahead of schedule in order to reevaluate the assumptions underlying the 1999 program and modify the performance criterion in light of the deterioration in the external environment since the program was negotiated. Despite the decline in Argentina’s economic activity and current account balance, preliminary information indicated that the country made progress on the structural reforms and met the quantitative performance criteria for end-March 1999. 
However, GDP in 1999 is expected to decline by 1.5 percent (from the previously projected gain of 2.5 percent), which is expected to significantly reduce federal government revenues from the previous estimate by about $2.5 billion. Argentine government officials and IMF staff noted that while the government was able to compensate for the revenue shortfall in the first quarter of 1999, fully compensating for the total estimated shortfall through additional spending cuts would seriously impair the quality of public services and aggravate the economic downturn. The government therefore requested an increase in the 1999 federal deficit performance criterion from $2.95 billion (0.8 percent of GDP) to $5.1 billion (1.5 percent of GDP), an increase of $2.15 billion, or about 70 percent, from the amount agreed to in March 1999. The increase reflects about 85 percent of the expected shortfall of $2.5 billion, with the government expected to absorb the remainder. Attaining the new level will require cuts in government expenditure, including spending for social programs. The deficit level was increased to help ensure that additional government borrowing to finance the deficit does not crowd out private-sector borrowing or raise uncertainty about the government’s commitment to fiscal discipline. To help achieve the new target, the ceiling on the noninterest expenditures of the federal government is to be lowered by $450 million. The debt ceiling was raised in line with the increase in the deficit in order to accommodate additional borrowing. The modified performance criteria are shown in table II.4. The Argentine government recognized the importance of reinvigorating the structural reforms to improve economic efficiency and strengthen market confidence. Many of the new structural benchmarks continue or accelerate ongoing reforms. By the third review (May 1999) the Argentine government is to present a proposal to reform the system of tax-revenue sharing with the provinces. 
By the third review (May 1999), the Argentine government also is to

- implement new monitoring systems for the level and composition of the financing to the provincial administrations; and
- submit to the Argentine congress a proposal to transform the Banco de la Nación into a state-owned corporation.

By the fourth review (November 1999), the Argentine government is to

- submit to the Argentine congress a proposal to reform social security;
- implement a new monitoring system for conditions of access by commercial banks to external credit lines;
- submit to the Argentine congress a proposal to reform the tax code; and
- lease telecommunication frequencies.

Also by November 1999, the Argentine congress is to approve the fiscal convertibility law and the changes to the central bank charter and the financial entities law.

In August 1998, Brazil’s capital account came under serious pressure in the wake of the Russian crisis. The Brazilian authorities responded with a sharp increase in interest rates; significant fiscal measures, including substantial spending cuts; and strengthening of institutional mechanisms to monitor developments in public finances and take further timely corrective actions, if needed. The IMF Managing Director said he was encouraged by the determination of Brazil’s president to give high priority to further fiscal reforms. Brazil also began a dialogue with the IMF to ensure that adequate financial support could be arranged quickly, if needed. The government of Brazil saw the nature of the IMF program as preventive—to assist the country in facing a period of deep uncertainty in international financial markets and to enable the government to continue gradual depreciation of the exchange rate without having to move to a floating currency system. A 3-year IMF program was announced in November and approved by the IMF Executive Board on December 2, 1998.
The IMF program represented one portion of a larger support package totaling about $41.5 billion made up of commitments from the World Bank; the Inter-American Development Bank; and bilateral financing from 20 countries, in most cases to guarantee credits extended to Brazil by the Bank for International Settlements. When the program was announced in November, the IMF stated, in its press release, that the program first and foremost addresses the chief source of Brazil’s external vulnerability—namely its chronic public sector deficit (5-7 percent of GDP). The reduced savings of the public sector necessitated a growing resort to external savings to finance the rise in domestic investment, leading to an increase in the current account deficit of the balance of payments from under 0.5 percent of GDP in 1994 to over 4 percent of GDP in 1997. The IMF program is supported by a 3-year SBA, augmented in the first year by the SRF, for a total amount equivalent to about $18 billion. Around 70 percent of the funds were to be under the SRF. Brazil received its first disbursement of $4.6 billion in early December. The second disbursement was scheduled for February 1999 after completion of the first and second reviews; however, due to the events in January, it was delayed until after the revamped program was agreed upon by the IMF Executive Board on March 30, 1999. The November 1998 IMF program had four program objectives: a frontloaded fiscal adjustment effort (with most of the fiscal adjustment expected to occur in the first half of 1999) aimed at arresting quickly the rapid growth of public sector debt; maintenance of the exchange rate regime that existed at the time; a tightly controlled monetary policy, aimed at supporting the exchange rate regime that existed at the time, while safeguarding net international reserves; and wide-ranging structural reforms. The economic program was centered on fiscal adjustment and structural reform. 
The macroeconomic scenario underlying the fiscal program assumed that confidence would be rebuilt gradually as measures were implemented and began to improve Brazil’s fiscal accounts and as access to foreign financing improved. The initial program had fiscal, external sector, and monetary targets. These were a mixture of quantitative performance criteria and indicative targets. The fiscal targets were a performance criterion for the “public sector borrowing requirement,” which set ceilings on the “cumulative borrowing requirement” of the consolidated public sector through June 30, 1999; an indicative target that set a floor on the primary balance of the federal government; and an indicative target that set a floor on the recognition of nonregistered public sector debt net of privatization proceeds. The fiscal quantitative performance criteria were intended to stabilize the ratio of the net public debt to GDP by the year 2000 and then reduce it gradually thereafter. Under these assumptions, the public sector borrowing requirement would decline to about 4.7 percent of GDP in 1999, to about 3 percent in the year 2000, and to 2 percent in 2001. The bulk of this adjustment was planned at the federal level; however, the states and municipalities were expected to shift their consolidated primary balance from an estimated deficit equivalent to 0.4 percent of GDP in 1998 to a surplus of 0.4 percent of GDP in 1999, rising to 0.5 percent in the years 2000 and 2001. The main elements behind the assumption of the state and local governments’ primary balance improvement were the implementation of the administrative reform laws and the firm enforcement of their debt restructuring agreements with the federal government. The fiscal adjustment program had both revenue-raising and expenditure-reducing measures designed to yield overall budget savings of 3.4 percent of GDP in 1999.
Revenue measures to achieve the indicative target on the primary balance of the federal government included increases in the financial transactions tax rate from 0.2 percent to 0.3 percent with a temporary surcharge of 0.08 percent for 1999; an increase in the rate of the tax on corporate turnover from 2 to 3 percent, one-third of which is to be creditable against the corporate income tax; an increase of 9 percentage points in the contribution to the public sector pension plan by civil servants earning more than R$1,200/month; the extension of this contribution to public sector pensioners (at the rate of 11 percent for those with pensions of R$1,200/month or less and of 20 percent for the others); and a number of other measures aimed mainly at widening the bases of existing taxes and contributions and eliminating distortions. Expenditure measures included substantial cuts in discretionary current and capital spending and savings expected from implementation of already approved constitutional reforms of the civil service and social security. The external sector targets were

- a performance criterion on external debt of the nonfinancial public sector, which set a ceiling on the stock of this debt;
- a performance criterion that set a ceiling on new publicly guaranteed debt;
- an indicative ceiling on total short-term external debt disbursed; and
- a floor on net international reserves in Brazil’s Central Bank (BCB).

The monetary target was a performance criterion that set a ceiling on net domestic assets in the BCB. The goal of monetary policy was continued low inflation. The BCB intended to continue to apply a flexible interest rate policy as appropriate while safeguarding foreign exchange reserves, and to rely on indirect policy instruments to guide short-term interest rates.
The government, with the support of the IMF, intended to maintain the pegged exchange rate regime with a gradual widening of the exchange rate band and to keep the increase in public sector external debt within prudent limits, around US$10 billion in 1999. While Brazil’s program does not contain structural performance criteria, it did include a variety of structural benchmarks and measures to address long-standing weaknesses in the budget process; the tax system and tax administration; public administration; social security; and the efficiency of public expenditure, especially in the social area. Table III.2 outlines the various structural reforms contained in Brazil’s November 1998 IMF program. The reforms include the following:

- Reforms aimed at strengthening budget discipline at all levels of government; a Fiscal Responsibility Act is to be submitted to the Brazilian Congress by December 1998.
- A set of new legislative initiatives, based on the principle of actuarial balance, to be presented to the Brazilian Congress in the first quarter of 1999.
- Legislation to be presented to the Brazilian Congress before the end of 1998 to address weaknesses in Brazil’s current indirect tax system, which is viewed as inefficient and unduly complex.
- Passage of enabling legislation already submitted to the Brazilian Congress to ensure that administrative reform already passed begins to produce effects in 1999.
- A proposal for constitutional reform, sent to the Brazilian Congress by the government, that reduces restrictions on unions and creates incentives for public collective bargaining.
- Programs focused on public utilities (the electrical sector and some water, gas, and sewage public utilities) and state banks.
- Priority in the allocation of social expenditures for primary education and basic health care, promotion of the more efficient use and financing of health and education, and better targeting of social expenditures to the poor.
- Reduction in the share of total deposits of the Brazilian financial system held by state banks to about 7 percent by end-1999. All remaining state banks are to be subject to the same regulatory and supervisory scrutiny as private banks.
- Strengthening of the legislative and supervisory framework; considerable strides have been made in implementing the 25 basic principles of the Basle Committee, and the government believes that Brazil can be fully compliant by the year 2000.
- Addition of a stand-by facility to the deposit insurance fund to improve its finances.
- Measures to speed up the resolution of failed banks and to increase asset recovery rates.
- Subscription to the Special Data Dissemination Standards as soon as technically feasible.

The government committed to continue the policy of trade liberalization by promoting the integration of the Brazilian economy with those of its MERCOSUL (the Southern Common Market, or customs union) and other regional trading partners; increasing trade with countries outside the region; and not imposing trade restrictions, including restrictions for balance of payments purposes. The government also said it would continue to promote the competitiveness of Brazil’s exports through steps aimed at leveling the playing field for Brazilian exporters, thus facilitating access to financing and to export credit insurance. The following prior actions were included in the November 1998 IMF agreement: by end-November 1998, a measure to increase the rate of the financial transactions tax to 0.38 percent for 1999 is to be under consideration by the Brazilian Congress.
For completion of the first review (which was scheduled by month-end February 1999, but could have been advanced to December 15, 1998), enact revenue and expenditure measures sufficient to give confidence that the fiscal program targets for 1999 are likely to be met, and enact the constitutional amendment for social security reform, for both the private sector social security system and the federal public sector social security system. The government of Brazil was initially successful in implementing many of the elements of the fiscal package that were the core of its program. Prior to the approval of the Stand-by Arrangement by the IMF Executive Board on December 2, 1998, it had successfully guided through the Brazilian Congress the constitutional amendment on social security reform and an increase in the tax on corporate turnover. However, the proposed measure to increase the social security contribution on active civil servants and extend it to retired ones was not approved in early December, and the government’s efforts to pass the financial transactions tax were delayed. Both were requirements under the November IMF program. In response to delays in getting an increase in the financial transactions tax, the government increased taxes on corporate profits and financial operations by executive decree. In early January 1999, a few Brazilian state governors demanded better terms on their debt payments to the federal government, and one declared a moratorium on these payments. (Twenty-four of Brazil’s 27 state governors have agreements with the federal government whereby, in exchange for fiscal adjustment, the federal government has assumed their debt, rescheduled it over the long term, and agreed to charge preferential interest rates.) This action precipitated the most recent crisis and put pressure once again on Brazil’s exchange rate, with major outflows of international reserves. In early January 1999, the president of the central bank resigned.
On January 13, his successor widened the real’s trading band, effectively devaluing the currency by 8 percent. Massive currency outflows followed, and 2 days later Brazil gave up defending its currency and let the real float; this, in turn, resulted in an immediate devaluation of another 12 percent. Progress continued on implementation of the fiscal program in January. After the real was allowed to float and new negotiations began with the IMF, Brazil’s Congress passed a law increasing the pension contribution of civil servants, which had been rejected previously. Brazil also approved a bill to increase the financial transactions tax, which had been delayed before. Both of these measures were requirements of the November IMF program. The BCB raised interest rates even further to try to encourage investors to keep their money in Brazil. Under Brazil’s arrangement with the IMF, completion of the first and second reviews was scheduled to take place no later than the end of February 1999; however, due to the change in the exchange rate regime that had been pegged to the U.S. dollar and the currency devaluation, Brazil and the IMF delayed completion of the reviews until March. As a result, Brazil did not receive an additional disbursement as scheduled in February. In addition to negotiating revisions to the economic program with the IMF, Brazilian officials also negotiated voluntary support commitments with their creditor banks. According to the IMF’s Managing Director, this effort was integral to the success of the program and was seen as a key factor in the IMF Executive Board’s consideration of the program in late March. Brazilian officials reached the necessary agreement in mid-March. In the voluntary agreement, banks agreed to keep trade and interbank credit lines at end-of-February levels until the end of August.
On March 8, 1999, the IMF’s Managing Director announced his intention to recommend to the IMF’s Executive Board the approval of the revised economic program for 1999-2001 proposed by the Brazilian government. The amount of support to be provided by the IMF, and the total package including support from multilateral banks and bilateral financing, remained the same. The key elements of the revised program are strengthened fiscal adjustment and, in light of the floating exchange rate, the adoption of a new nominal anchor for monetary policy. The additional fiscal improvement and a firm monetary policy are expected to limit the impact of the currency depreciation on prices in the first half of 1999 and to facilitate a decline in the annualized monthly inflation rate to single digits by the end of the year. Brazil’s balance of payments is expected to improve as capital inflows recover and Brazil capitalizes on its improved competitiveness. The IMF’s Executive Board approved the revised program on March 30, 1999, thereby opening the way for Brazil’s next disbursement. Brazil requested and was granted a waiver of nonobservance of one performance criterion—the ceiling on net domestic assets in the BCB. According to IMF officials, the nonobservance of the performance criterion was the result of a premature easing of monetary policy. Like the initial program, the revised program contains fiscal, external sector, and monetary targets, some of which are the same as previous criteria or indicative targets and others of which are different. According to the IMF, the changes were the result of two factors: (1) understandings that were formulated in an informal way under the original program were made into performance criteria, and (2) the reformulation of the program required different performance criteria on technical grounds. The two fiscal targets are different from those in the initial program.
They consist of: a performance criterion that set a floor on the cumulative primary balance of the consolidated public sector and an indicative target that set a ceiling on the total net debt outstanding of the consolidated public sector. The government intends to steadily reduce the ratio of public debt to GDP to below 50 percent by end-1999, and to below the value initially projected in the November 1998 program for the end of 2001 (46.5 percent). The government expects to accomplish this through higher than originally targeted primary surpluses of the consolidated public sector in the next 3 years. The government intends to increase the targeted primary surplus to at least 3.1 percent of GDP in 1999, 3.25 percent of GDP in the year 2000, and 3.35 percent of GDP in 2001. According to the IMF, the need for higher primary surpluses comes from the higher interest bill that resulted from the currency being devalued. Hence, to achieve the same debt-GDP ratio, primary surpluses needed to be higher. As in the initial program, the additional fiscal adjustment is to be achieved through a range of revenue-raising measures and expenditure cuts. This effort will be concentrated at the federal level, but the state and local governments are expected to contribute through the implementation of their debt restructuring agreements with the federal government and by complying with the requirements of the administrative reform laws. 
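The logic described above, whereby a higher interest bill forces higher primary surpluses to reach the same debt-to-GDP path, follows the standard one-period debt-dynamics identity. The sketch below illustrates that identity; the debt ratio, interest rates, and growth rate used are illustrative assumptions, not figures from the IMF program.

```python
def debt_ratio_next(debt_ratio, r, g, primary_surplus):
    """One-period debt dynamics: d' = d * (1 + r) / (1 + g) - s, where d is
    debt/GDP, r is the effective nominal interest rate on the debt, g is
    nominal GDP growth, and s is the primary surplus as a share of GDP."""
    return debt_ratio * (1 + r) / (1 + g) - primary_surplus

# Illustrative assumptions (not program figures): debt near 50 percent of
# GDP and nominal GDP growth of 5 percent.
d, g = 0.50, 0.05

# The primary surplus that holds the debt ratio constant rises with the
# effective interest rate, which is what a devaluation-driven jump in the
# interest bill implies.
for r in (0.08, 0.12):
    required_s = d * (1 + r) / (1 + g) - d
    print(f"r = {r:.0%}: required primary surplus = {required_s:.2%} of GDP")
```

Under these assumed rates, raising the effective interest rate from 8 to 12 percent more than doubles the primary surplus needed just to hold the debt ratio steady, which mirrors the program’s move to higher surplus targets.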
The external sector targets now consist of six items; the first two were the same as in the initial program, while four more performance criteria were added:

- a performance criterion that set a ceiling on the total external debt of the nonfinancial public sector;
- a performance criterion that set a ceiling on new publicly guaranteed debt;
- a performance criterion that set a ceiling on total short-term external debt of the nonfinancial public sector;
- a performance criterion that set a limit on net sales of foreign exchange by the BCB;
- a performance criterion on the BCB’s exposure in foreign exchange futures markets; and
- a performance criterion on the BCB’s exposure in foreign exchange forward markets.

The monetary target is the same—a performance criterion that sets a ceiling on net domestic assets in the BCB; however, in the view of Brazil’s government, monetary policy became a more important component in the revised program. The overriding objective of monetary policy is securing low inflation. The BCB intends to put in place as quickly as feasible a formal inflation-targeting framework. This is expected to take some time, and in the meantime it intends to rely on a quantity-based framework under which it will target its net domestic assets. According to IMF documents, the Brazilian government has reaffirmed its commitment to the wide-ranging program of structural reforms included in the November program in such areas as social security, taxation, fiscal transparency, and the financial sector. In most of these areas the government believes it has already made significant progress. Accelerating and broadening the scope of the privatization program is also a goal of the revised program. In addition, the government remains committed to the policy of trade liberalization (summarized in the November 1998 program) adopted by Brazil’s President. Table III.3 shows the structural benchmarks contained in the revised program. These benchmarks include the following:

- Submission to the Brazilian Congress of a law on the complementary private pension system.
- Submission to the Brazilian Congress of an ordinary law on the pension system for private sector workers.
- Presentation to the Brazilian Congress of the Fiscal Responsibility Law.
- Issuance of new regulation on the foreign exchange exposure of banks, in conformity with international standards in this area.
- Acceptance of the obligations under Article VIII, sections 2, 3, and 4 of the IMF’s Articles of Agreement, with a definite timetable for removing any remaining restrictions (if any).
- Proposal of an action plan for statistical improvements that will permit Brazil’s subscription to the Special Data Dissemination Standards.
- Submission to the Brazilian Congress of the multi-year plan that incorporates improvements in the budgetary process along the lines described in the November 1998 program.
- Implementation of the remaining administrative improvements in the social security system, as described in the November 1998 program.
- Submission to the Brazilian Congress of an ordinary law on the pension system for public sector workers.
- Privatization of a number of state-owned banks.
- Implementation of a regulation for the institution of a capital charge related to market risks, based on the Basle Committee (in line with technical assistance from the World Bank).
- Implementation of a forward-looking loan classification system that takes into account the capacity of borrowers to repay (in accordance with technical assistance from the World Bank).

Until its recent financial crisis—starting in mid-1997—Indonesia had 30 years of real economic growth, averaging 7 percent annually, with annual inflation held continuously below 10 percent in the previous 2 decades. Over the past 2 decades, the incidence of poverty was greatly reduced, assisted by improvements in primary education, effective health care, and family planning. The number of people living in poverty declined from 70 million in 1970 to 22.5 million in 1996. Universal primary school education was achieved in the 1980s.
Indonesia’s economic performance over the past several decades ranked among the best in the developing world. Per capita income was rising toward the level of middle-income countries. The economic structure had become diversified, as dependency on the oil sector had declined. An export-oriented manufacturing sector had emerged, led by a dynamic private sector and fueled by high domestic savings and large inflows of foreign direct investment. Prior to the regional market turbulence in 1997, Indonesia’s macroeconomic situation appeared by many measures reasonably sound: the budget was in balance, inflation had been contained to single-digit levels, current account deficits were low, and international currency reserves were at a comfortable level. This strong economic performance helped attract large capital inflows. These achievements masked persistent underlying structural weaknesses in the economy, however, that made Indonesia vulnerable to adverse developments. Extensive domestic trade regulations and import monopolies impeded economic efficiency and competitiveness. Indonesia had many commodities with restrictive marketing arrangements and many state enterprises. A government agency—the State Logistics Agency—had a monopoly over the importation of essential food items, a domestic market monopoly, and the ability to restrict prices on these food items. A lack of transparency in decisions affecting the business environment and data deficiencies increased uncertainty and adversely affected investor confidence. Indonesia had a banking system that had expanded too rapidly and was not prepared to withstand the financial turmoil that affected Southeast Asia in the latter half of 1997. The system had too many weak banks, with higher than normal levels of nonperforming loans, foreign exchange risk, concentrated bank ownership, large exposures to risks in the property sector, and connected lending—lending to related companies.
Furthermore, Indonesia had a large, unhedged, private, short-term foreign currency debt prompted by large differentials between domestic and foreign interest rates. Indonesian corporations were heavily exposed to such debt and thus were vulnerable to the adverse effects of a currency depreciation. Growth in short-term foreign liabilities outpaced growth in available international currency reserves. Also, a severe drought in 1997, the year leading up to the crisis, created a need for large food imports. Following the widening of the intervention band on July 11, 1997, the rupiah was allowed to float on August 14. By October 1997, the rupiah had depreciated significantly as the regional financial crisis deepened. The sudden rise in the rupiah value of the foreign-currency-denominated loans and the increased interest rates that ensued placed the banking and corporate sectors under enormous stress. At the time, Indonesia faced the loss of confidence of financial markets, demonstrated by a sharp currency depreciation, a decline in foreign currency reserves, and a substantial fall in its capital account. On October 31, 1997, Indonesian authorities requested and on November 1, 1997, the IMF granted a 3-year SBA equivalent to $10.1 billion (SDR 7,338 million). The typical SBA is designed to provide short-term, balance-of-payments assistance for deficits of a temporary or cyclical nature. The IMF granted Indonesia the right to draw the funds provided Indonesia met the conditions of the program. Drawings were scheduled in 13 disbursements but were to be substantially front-loaded, with a $3.0 billion (SDR 2,201 million) disbursement on November 5, 1997, and an equivalent amount to be released on March 15, 1998. Interest charges were levied on a quarterly basis—at a rate slightly above the SDR interest rate. Repayments of principal under this arrangement were to be made in eight quarterly installments beginning 39 months after disbursement and ending 60 months after disbursement.
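The repayment terms just described imply a simple quarterly schedule. The short sketch below merely enumerates the installment months implied by the stated 39-to-60-month window; it is a reading of the terms as described, not an official IMF schedule.

```python
# Quarterly principal repayments under the November 1997 SBA, as described:
# installments begin 39 months after a disbursement and end 60 months after.
installment_months = list(range(39, 61, 3))

print(installment_months)       # [39, 42, 45, 48, 51, 54, 57, 60]
print(len(installment_months))  # 8, matching the eight quarterly installments
```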
The principal justification for such large access was that the availability of sizable external financing would catalyze a speedy return to confidence and the resumption of normal capital market financing. Subsequent releases of $785.4 million (SDR 579 million) were to be available on June 15, September 15, and December 15, 1998. Amounts of $206.8 million (SDR 149.8 million) were to be released eight times during 1999 and the year 2000, according to the IMF. A series of letters of intent issued by the government of Indonesia and program reviews by the IMF of the SBA followed. The SBA had four distinct letters of intent that documented program changes taking account of changing economic and social factors in Indonesia. The IMF reviewed the SBA twice during the November 1997-August 1998 time period. Fund disbursements were delayed twice over the course of the SBA: Indonesia’s access to funds associated with the completion of the first and second reviews was withheld. At the end of the first review, funding was rephased so that amounts available for disbursement were reduced, and reviews were changed from quarterly to monthly. The initial Stand-by program was not successful in restoring confidence in the economy. By August 25, 1998, the SBA had been replaced by an EFF. According to IMF documents, the first IMF Stand-by program with Indonesia met with some initial success, as confidence appeared to be boosted by the tightening of liquidity and exchange market intervention. But financial market sentiment soon began to sour. This deterioration of market sentiment reflected the government’s failure to follow through quickly on the policy measures. The closing of 16 banks while other weak banks continued operation also contributed to a loss of confidence. Indonesia’s promise to carry out a tight monetary policy was derailed by a strong liquidity expansion to deal with runs on banks.
There was also political uncertainty triggered by concerns about the health of the President. Foreign creditors refused to roll over maturing credit lines, and pressure on the exchange rate intensified. By early January 1998, the rupiah had undergone a cumulative depreciation of some 75 percent from pre-crisis levels, creating severe tension in both the corporate and banking sectors. On January 15, 1998, the Indonesian authorities released a new letter of intent, which included major revisions to their economic program and addressed new conditions. The new measures were designed to reverse the decline of the rupiah before it triggered a surge in inflation and a wave of corporate bankruptcies. Key changes from the previous program included a commitment to implement a tight monetary program and to accelerate deregulation and trade reform. In late January, the program was strengthened with the introduction of a comprehensive bank restructuring program, to be implemented by a new agency called the Indonesian Bank Restructuring Agency (IBRA), and the announcement of a voluntary scheme to restructure private corporate debt. Market reaction to the January 15 letter of intent was swift and negative. Shortly after the announcement of the new letter of intent, the rupiah was depreciating rapidly and had lost a cumulative 85 percent of its value compared to 7 months earlier. Owing to difficulties in implementing required policy changes following the announcement of the second letter of intent under the SBA, continuing uncertainty about the government’s commitment to elements of the program, and other developments, the rupiah failed to stabilize, inflation picked up sharply, and economic conditions deteriorated. Base money grew rapidly, fueled by Bank Indonesia’s liquidity support for financial institutions.
Moreover, program implementation was sidetracked by a February announcement that the government was considering the introduction of a currency board as a means of stabilizing the rupiah. There was widespread international concern that Indonesia’s financial and credibility crisis would make such a measure extremely risky. IMF officials viewed a currency board as inappropriate for Indonesia at this time because they were concerned about the rupiah’s credibility and sustainability—especially at an exchange rate far above the prevailing market rate—in light of ongoing capital outflows. Decisive policy action was also inhibited by preparations for the change in government after a March presidential election. The economic downturn deepened, while inflation accelerated sharply. Against this background, as well as the need to await the appointment of a new cabinet in the wake of the reelection of the President, the first IMF quarterly review was delayed. The first quarterly review was scheduled to be completed on March 15, 1998, and was to be tied to targets for December 1997 according to the IMF. However, the review was not completed—and hence additional funds were not available to Indonesia—until May 4, 1998. During February and March 1998, only limited progress was made in implementing the revised program. There had been a precipitous depreciation of the exchange rate and a large-scale outflow of capital. The banking sector and the private corporate sector were basically insolvent. Consumer prices increased 39 percent in the first quarter of 1998. In addition, Indonesia’s overall external payments position deteriorated sharply, especially the capital account, because of a decline in new inflows, the reluctance of foreign creditors to roll over bank and corporate external debt, and the repatriation of portfolio investment. IMF officials were concerned that, without a strong adjustment effort, Indonesia would encounter an even more severe crisis and a deepening recession. 
Bank Indonesia had lost control over monetary policy in the first quarter of 1998. Monetary policy was dominated by the crisis in the banking system, with liquidity support provided to the banks reflecting the drawdown in foreign currency deposits, the reduction of credit lines by foreign banks, a shift into foreign currency from rupiah deposits, losses on forward contracts, and higher nonperforming loans. Moreover, Bank Indonesia had been hurt by the complete turnover of staff in the most senior positions. To deal with the crisis, foreign experts were appointed to a monetary panel to help strengthen implementation of monetary policy. The budget, too, was adversely affected by the deterioration in the economic environment, experiencing substantial revenue losses and increased outlays. Furthermore, government decrees designed to dismantle cartels and open up markets were delayed and circumvented in several sectors, which raised concern about the government’s commitment to the IMF program. None of the five quantitative performance criteria required for completion of the first review were met, and only one of four structural performance criteria was implemented. Quantitative performance criteria were not observed on base money and public sector short-term debt outstanding at end-December 1997 and end-March 1998. Quantitative performance criteria were also not observed on the government balance and net international reserves at end-March 1998. One structural performance criterion was completed on schedule—that Indonesia issue implementation regulations on procurement. Two structural performance criteria were superseded by the creation of IBRA— the closure of banks under intensified supervision and the establishment of performance criteria for state-owned banks. Two performance criteria were pending and were expected to be implemented by end-June 1998— increases in petroleum prices and increases in electricity prices. 
The Indonesian government requested waivers for the nonobservance of the performance criteria. IMF staff supported granting these waivers in view of actions undertaken prior to the proposed completion of the review and the proposed actions of Indonesian authorities included in the revised program. Originally $3 billion (SDR 2,201.5 million) was to be available for Indonesia, but this amount was restructured so that equal amounts of $1 billion (SDR 733.8 million) were to be available each month over the next 3 months. On May 4, 1998, the IMF Executive Board granted the waivers and Indonesia received a $995.4 million (SDR 733.8 million) disbursement. At that point the IMF moved from scheduling quarterly to monthly reviews of the arrangement. On April 10, 1998, the IMF and the government of Indonesia issued a third letter of intent to address the far-reaching changes that had occurred in political, social, and external circumstances. The new program complemented and modified the program outlined in the previous letter of intent. According to IMF documents, the economic situation had deteriorated since the beginning of 1998: prices had increased, the government’s budget was under severe pressure as a result of the decline in economic activity, subsidies were needed to protect low-income groups from the rise in prices of staples and essentials due to the depreciation of the rupiah, restructuring the banking system was costly, and international oil prices had declined. In addition, the financial position of the domestic banking system had dramatically deteriorated and Bank Indonesia had granted very large-scale liquidity support. Furthermore, foreign banks had cut trade and other credit lines to Indonesian banks. The revised program built on the program specified in the previous letter of intent but placed more emphasis on debt strategy, banking system restructuring, privatization, and bankruptcy procedures. 
The revised program comprised 117 structural policy commitments covering fiscal issues, monetary and banking issues, bank restructuring, foreign trade, investment and deregulation, a social safety net, the environment, and other issues. The program required sharply raising interest rates to secure a sustained appreciation of the rupiah and strict control over the net domestic assets of Bank Indonesia. Liquidity support to banks was to be brought firmly under control. The program included an accelerated strategy for restructuring the banking system—including the takeover of seven banks that accounted for most of the liquidity support and raising the capital levels of healthier banks. The cost of bank restructuring was estimated to be 15 percent of GDP. The revised program also sought reform of bankruptcy procedures. It required a revised budgetary framework, with higher subsidies for some food and other items to soften the impact of the currency depreciation on the poor, as well as funds to cover the costs of bank restructuring. The revised program outlined a framework for restructuring private corporate debt with limited government support. This letter of intent shifted one quantitative performance criterion—the monetary policy target—from base money to net domestic assets because the net domestic assets of Bank Indonesia had been the source of monetary instability. The change was made because of the necessity of bringing under control the rapid expansion of central bank credit to banks with liquidity problems, according to Indonesian government documentation. Other quantitative performance criteria remained, with targets changed. 
New structural performance criteria were to merge Bank Bumi Daya and Bank BAPINDO and transfer problem loans to the asset management unit of IBRA by end-June 1998; initiate sales of additional shares in listed state enterprises including, at a minimum, the domestic and international telecommunications corporations by end-September 1998; and reduce export taxes on logs and sawn timber to 20 percent by end-December 1998. New or strengthened structural policy commitments since January 15, 1998, included raising profit transfers to the budget from state enterprises including Pertamina (the state oil company), publishing key monetary data on a weekly basis, appointing high-level foreign advisors to Bank Indonesia to assist in the conduct of monetary policy, setting minimum capital requirements for banks of rupiah 250 billion, providing external guarantees to all depositors and creditors of all locally incorporated banks, establishing IBRA, transferring 54 weak banks to IBRA, transferring claims resulting from past liquidity support from Bank Indonesia to IBRA, announcing 7 enterprises to be privatized, submitting to Parliament a draft law on competition policy, and establishing a monitoring system for structural reforms. The second review of the SBA was scheduled to be completed on June 15, 1998, but the review and the subsequent disbursement of $995.4 million (SDR 733.8 million) were delayed by about a month. Social unrest boiled over in mid-May and culminated in the resignation of the president. There were runs on Indonesia's largest private bank, and unemployment and inflation started to rise dramatically. The country was seen as facing an extremely severe and rapidly deepening systemic economic crisis. As a result, the review was completed on July 15, 1998. Indonesia did not have access to additional IMF funds during the delay period. According to IMF documents, the April 1998 program had gotten off to a good start.
Monetary performance was kept within program targets specified in the April letter of intent, even though liquidity support to banks was higher than expected. Banks requiring most of Bank Indonesia's liquidity support were put under the control of IBRA. New bankruptcy procedures were enacted, and restrictions on foreign investment in wholesale trade were lifted. However, IMF documentation shows that the social disturbances and political change in May 1998 derailed the April program despite generally good policy implementation. Arson, rioting, and looting in Indonesia undermined business confidence and damaged the distribution system. Business confidence was shaken, capital flight resumed, and the rupiah depreciated sharply, pushing many corporations and banks further into insolvency. GDP fell by 8.5 percent in the first quarter of 1998 and by 7 to 8 percent in the second quarter of 1998. The banking system was paralyzed—unable or unwilling to lend to corporations—and the corporate sector was deeply insolvent. According to IMF documents, at this time, the Indonesian economy faced the risk of falling into an even deeper systemic crisis, with normal financial market mechanisms breaking down completely, banks unwilling to lend to insolvent corporations, and access to international markets denied. Despite this situation, Indonesian officials reported that Indonesia had met three of the four IMF quantitative performance criteria. They also judged the structural performance criterion to increase petroleum prices and eliminate subsidies to have been met because petroleum prices had been raised on average by 38 percent, although the increase in kerosene prices was subsequently rescinded to assist poor households. Data on two quantitative performance criteria were not available—the end-June performance criteria on the contracting or guaranteeing of new external debt and the stock of public sector short-term debt outstanding.
Indonesia met the end-June 1998 structural performance criteria to raise fuel and electricity prices according to an agreed schedule. One of the structural performance criteria was not met—the end-June 1998 merging of two banks and the transfer of problem assets to the asset management unit of IBRA were delayed. The Indonesian government requested a waiver for its nonobservance. The IMF staff supported this request because the preparatory work took longer than anticipated, and the merger was expected to take place by end-July 1998. The IMF staff also supported Indonesia's request to waive the applicability of the other quantitative and structural performance criteria that were not met. On July 15, 1998, the IMF Executive Board granted the waivers and Indonesia received a $995.4 million (SDR 733.8 million) disbursement. At this time, the government of Indonesia requested and the IMF's Board approved a $1.4 billion (SDR 1 billion) augmentation of the SBA. On June 24, 1998, the government of Indonesia issued a fourth letter of intent to address the prevailing economic conditions. Although the overall objectives and policy content of the revised program remained the same as in previous letters of intent, the new program was to be substantially revised to reflect the deterioration in the economic situation, and the emphasis placed on some IMF conditions changed to some extent. The economy faced a serious crisis as a result of the social and political upheavals in May. Tight monetary policy was thought necessary to prevent hyperinflation. The new monetary program envisaged no increase in base money or net domestic assets. The budget was the area where major changes were made to the IMF program, including requirements for a substantially increased subsidy bill for basic foodstuffs, petroleum products, and electricity; greater expenditures for health and education; and expansion of employment-creating projects.
Deficit spending was expected to amount to more than 8 percent of GDP—with the recognition that this deficit was not sustainable and would need to be reduced as the economy recovered. The bank-restructuring strategy—focused on putting in place as quickly as possible a core functioning banking system—envisioned an increased role for foreign advisors. A revised strategy was added to assist the resolution of the problems of the corporate sector through the establishment of the Indonesian Debt Restructuring Agency (INDRA), which was designed to provide exchange rate protection for restructured debts. A strengthened social safety net to cushion the escalating effects of the crisis on the poor was now required. As a result of the reduction in real incomes, the number of households below the poverty line was growing rapidly. The food distribution system was to be repaired to ensure adequate supplies of food and other essential items to all parts of the country. Nevertheless, it was thought that the revised program was likely to encounter great risk from unsettled political conditions and growing social strains. Quantitative performance criteria were the same as in the prior letters of intent, but targets were changed. New structural performance criteria were as follows: Initiate sales of additional shares in listed state enterprises including, at a minimum, the domestic and international telecommunications corporation by end-September 1998. Submit to parliament a draft law to institutionalize Bank Indonesia's autonomy by end-September 1998. Reduce export taxes on logs and sawn timber to 20 percent by end-December 1998. Complete audits of the State Oil Company, the State Logistics Agency, the State Electric Company, and the Reforestation Fund by end-December 1998. New or strengthened structural policy commitments included the following: Issue presidential decree to provide appropriate legal powers to IBRA, including its asset management unit.
Reduce the minimum capital requirements for existing banks. Take action to freeze, merge, recapitalize, or liquidate the six banks for which audits have already been completed. Conduct portfolio, systems, and financial reviews of all other banks by internationally recognized audit firms. Introduce community-based work programs to sustain purchasing power of the poor in both rural and urban areas. Increase the subsidy for food and essential items. Introduce a microcredit scheme to assist small businesses. On July 29, 1998, Indonesia requested that the SBA be canceled and the existing policy program be supported instead by an EFF. Several IMF Board members had previously suggested that such an arrangement might be more appropriate than an SBA due to the deep-seated nature of Indonesia's structural and balance-of-payments problems. The EFF was established to provide assistance to meet balance-of-payments deficits over longer periods of time. By this time, Indonesia had received $4.96 billion (SDR 3.66 billion) in disbursements under the SBA. The EFF was to cover the remaining period of the SBA—26 months—and access under the new arrangement was to be the same as the amount remaining to be drawn under the SBA—$6.33 billion (SDR 4.67 billion). An EFF allows a country more time to repay the IMF, according to an IMF official. Repayment of principal under an EFF was to be made in 12 semiannual installments beginning 4½ years after disbursement and ending 10 years after the date of each disbursement, whereas repayments under an SBA are scheduled 3¼ to 5 years after each disbursement. The deep-seated nature of the structural and balance-of-payments problems facing the economy had become increasingly apparent. A thorough restructuring of the banking and corporate sectors was needed for the economy to recover from the crisis, even if this restructuring would take some time to complete. IMF staff supported the Indonesian government's request that the SBA be replaced by an EFF.
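The repayment pattern described above can be laid out with a short sketch. This is an illustrative calculation only, not an actual IMF amortization schedule; the helper function and the choice of the August 25, 1998, approval date as an example disbursement date are assumptions for demonstration.

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date forward by a whole number of months (day-of-month preserved;
    assumes the day exists in the target month, which holds for the dates used here)."""
    y, m = divmod(d.month - 1 + months, 12)
    return date(d.year + y, m + 1, d.day)

def eff_schedule(disbursement: date) -> list[date]:
    """Illustrative EFF repayment pattern per the terms described in the report:
    12 semiannual installments, beginning 4-1/2 years (54 months) after a
    disbursement and ending 10 years after it."""
    return [add_months(disbursement, 54 + 6 * i) for i in range(12)]

# Example: a disbursement dated August 25, 1998 (hypothetical anchor date).
schedule = eff_schedule(date(1998, 8, 25))
print(schedule[0])   # first installment, 4-1/2 years later: 2003-02-25
print(schedule[-1])  # twelfth installment, 10 years later: 2008-08-25
```

The twelfth installment lands exactly 120 months (10 years) after the disbursement, consistent with the report's description; an SBA's 3¼-to-5-year window would compress the same principal into a much shorter span.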
On August 25, 1998, the IMF Board approved the request for an EFF, and Indonesia received a $995.4 million (SDR 733.8 million) disbursement. A series of five letters of intent and four reviews followed the switch to an EFF. The five letters were an elaboration of the elements of the reform program, according to the IMF. Monetary policy requirements continued to be tight and focused on getting the exchange rate into an acceptable range. Fiscal policy requirements pinpointed deficit spending. Structural policies focused on reforming the financial sector, eliminating anticompetitive structures in the Indonesian economy, and providing social safety measures. Disbursements were on time twice and delayed twice when IMF officials judged that Indonesian officials were not satisfactorily implementing the set conditions. On July 29, 1998, at the time the government of Indonesia requested an EFF, Indonesia issued a new letter of intent to address the prevailing conditions. Program modifications were introduced in budgetary management, corporate debt restructuring, and bank restructuring. There was progress in implementing the Frankfurt agreement with foreign commercial banks and the introduction of auctions for central bank instruments. Progress was also being made on elaborating the details of the plan for bank restructuring. IBRA and its asset management unit were fully operational, and foreign investment banks and a leading foreign commercial bank were assisting the bank restructuring process. These developments had a beneficial impact on market confidence. At the time of the request for the EFF, the economic situation remained precarious. Output had declined 10 percent and was likely to decline as much as 15 percent for 1998/1999, according to the IMF. Inflation was projected to be 80 percent for 1998. Food security was a continuing concern—food prices had risen dramatically since the beginning of May 1998. 
Severe problems in the banking system and corporate sector were still not adequately addressed. Actions were needed to resolve the six private banks that had been taken over. Actions were also needed on the recapitalization of sounder banks and the restructuring of state banks. Progress on corporate debt workouts was very slow, and IMF staff judged that the Indonesian government needed to be involved in facilitating such workouts. The outlook for the program was vulnerable to changes in the political and social climate. The June program had slippages in monetary policy—concerns about further bank closures led to renewed withdrawals of deposits from troubled banks, and the move by Bank Indonesia to reabsorb liquidity led to a rise in interest rates. Strenuous efforts were necessary to bring base money in line with program targets. New measures were added to repair and strengthen the distribution system, to mitigate the humanitarian effects of the crisis by expanding social safety net programs and improving the targeting of subsidies, to remove obstacles to corporate sector restructuring through the adoption of regulatory and administrative reforms, and to restructure insolvent banks. The distribution and subsidy systems were improved to ensure that essential goods were available at affordable prices. In addition, a new program was created to provide rice at highly subsidized prices to the poorest families. Components of this strategy were the following: The State Logistics Agency was to release large quantities of rice of all qualities into the market. The rice was to be released into the market at less than the market price. The State Logistics Agency was to increase direct deliveries of medium-quality rice to retailers and cooperatives. To put further downward pressure on prices, the value-added tax on rice was to be suspended.
The program for delivering rice at prices well below market prices to poor families was to be expanded as quickly as possible, with the help of provincial governors. The State Logistics Agency was to actively seek new imports of rice to ensure that stocks remained adequate. Private traders were to be freely allowed to import rice. Quantitative performance criteria were the same as those in effect in the final letter of intent of the SBA except for changes in targets. Structural performance criteria were to: initiate sales of additional shares in listed state enterprises including, at a minimum, the domestic and international telecommunications corporations by end-September 1998; submit to parliament a draft law to institutionalize Bank Indonesia's autonomy by end-September 1998; reduce export taxes on logs and sawn timber to 20 percent by end-December 1998; and complete audits of the State Oil Company, the State Logistics Agency, the State Electric Company, and the Reforestation Fund by end-December 1998. New or strengthened structural policy commitments included an IMF review of public expenditure management, the transfer of assets of the seven frozen banks to the asset management unit, the transfer of the responsibility for six state banks from the Ministry of State Enterprises to the Ministry of Finance, the launch of the Indonesian Debt Restructuring Agency, the institution of tax neutrality for mergers, the submission to the Indonesian parliament of a new arbitration law consistent with international standards, the completion of a review of accounting and auditing standards to make them consistent with international standards, and the establishment of a voluntary framework to facilitate corporate restructuring. On September 17, 1998, IMF staff presented its first review of the EFF to the IMF's Executive Board. Completion of the review was to be based on indicative fiscal and monetary targets, as well as external targets for end-July and end-August 1998.
IMF staff recommended that the review be completed and that Indonesia continue to have access to IMF assistance. The policy discussions with the government of Indonesia were conducted in close collaboration with the World Bank and the Asian Development Bank. According to IMF documents, program implementation was generally good and the program was broadly on track. Market sentiment had improved as a result of good implementation and increased financing for the program. Steps were being taken in key areas where problems had occurred, especially in regard to food security, or where progress needed to be accelerated, such as corporate restructuring. A cautious easing of monetary policy was seen as possible once inflation had been brought down from its high levels. The challenge for policy at that time was to proceed with structural reforms—chiefly banking system and corporate restructuring. Improving the food situation was crucial for ensuring social stability. Real GDP was estimated to have declined by 12 percent in the first half of 1998, while cumulative inflation for the first 8 months of the year was 69 percent. Although the political situation had stabilized to some degree by September, it remained fragile, as indicated by street protests. The privatization program was behind schedule, and a shortfall from the target for privatization revenues was believed to be likely. The budget was running well within program targets, in part because of delays in increasing spending on social programs. IMF staff believed that the government's adoption of a strategy for addressing the urgent problems created by the recent rapid increase in rice prices helped limit risks to the program from social unrest. On the other hand, bank restructuring had been subject to delays. The transfer of assets to the asset management unit was being delayed pending passage of amendments to the banking law. In addition, little progress had been made in corporate restructuring.
To address some of these issues, a package of measures to address bank restructuring was announced on August 21, 1998. The package included the recapitalization of core banks, the closure of six large private banks, the merger of four state banks, and other items. An important development with respect to corporate restructuring was the announcement of the Jakarta Initiative—a voluntary framework to guide and streamline out-of-court restructuring of corporate debt. This initiative was announced in early September 1998 and used approaches that were proven successful in other countries. The approach covered all foreign and domestic debt and applied equally to all creditors. To promote financing to distressed companies, the principles encouraged creditors to subordinate their existing claims to lenders that were willing to provide interim financing. Several benchmarks were implemented during the course of this review. The end-June 1998 measure to allow transferability of forest concessions and to de-link their ownership from processing of new concessions was done by end-August. The end-July measure to issue a presidential decree to provide appropriate legal powers to IBRA, including its asset management unit, was done on schedule. The end-August measure to submit to parliament a draft amendment to the banking law, incorporating procedures for the privatization of state banks, and the removal of the limits on private ownership of banks was done on August 24, 1998. On September 25, 1998, the IMF Board completed the review and Indonesia received a $928.3 million (SDR 684.3 million) disbursement. On September 11, 1998, at about the time of the first IMF review of the EFF, the government of Indonesia announced a revised program to address the new conditions. The letter of intent established indicative targets for monetary and fiscal variables and for international reserves. 
The letter of intent indicated that the program intended to continue to implement a firm monetary policy. As inflation declined, the government of Indonesia expected interest rates to decline, easing pressure on the corporate and banking sectors. Development expenditures, particularly those for the social safety net, which were running below the programmed levels, were to be stepped up. Rice was to be provided at highly subsidized prices to poor families. For the first time in 30 years, the government was to allow private traders to import rice. This letter of intent included commitments related to an August 21, 1998, announcement by the government of Indonesia of a major bank-restructuring package that covered banks with almost half the assets of the banking system. The end-September targets for net domestic assets, the overall central government balance, and net international reserves were quantitative performance criteria. The letter of intent contained an updated matrix of structural policy commitments with the following new or strengthened commitments: Eliminate subsidies on imports of sugar, wheat, wheat flour, corn, soybeans, soybean meal, and fishmeal. Strengthen public expenditure management. Prepare a final plan for restructuring three banks. Complete the legal requirements for the merger of four state banks. Prepare a plan for the operational merger and restructuring of four state banks. On October 23, 1998, IMF staff submitted a second IMF staff review of Indonesia's program. IMF staff reported that further progress had been made with stabilization since the last review and that policy implementation under the IMF program continued to be generally good. The priority for policy at this juncture was to foster recovery in output, consolidate gains in stabilization, and strengthen programs to protect the poor.
IMF staff recommended that waivers for nonobservance be granted for two missed structural performance criteria provided that there was a satisfactory arrangement for the repayment of liquidity support by private banks. The situation remained fragile and the economy extremely weak. Unemployment and poverty were on the rise. Although the political situation had stabilized, the outlook remained uncertain and, in the IMF staff's view, further turbulence in coming months could not be ruled out. There had been slippages in some areas, notably privatization. Privatization of several mining companies and the domestic telecommunications concern had been postponed until market conditions improved. The inability of most corporations to pay high rates on loans had resulted in a negative spread between commercial bank deposit and lending rates, contributing to continuing decapitalization of the banking system. At this time there was no satisfactory agreement on the repayment of liquidity support by private banks. By the third week in October 1998, the rupiah had strengthened beyond expectations, inflation had moderated, and prices for many staple food items had declined. Key elements of bank restructuring were moving ahead. Indonesia then announced a government-assisted recapitalization program for viable banks. The merger of four state banks had been initiated, and plans had been announced for resolving the debt situation of six major private banks. Progress was being made in establishing the appropriate legal and regulatory framework for the Jakarta Initiative. Completion of the second review under the EFF was to be based on indicative and performance targets for end-August and end-September 1998. The government of Indonesia had complied with performance criteria for end-September 1998 on net domestic assets and net international reserves.
However, Indonesia requested a waiver for the following end-September performance criteria due to the lack of available data: the central government balance, the contracting or guaranteeing of new external debt, and the short-term external debt outstanding. One benchmark for the end of September 1998 was done on schedule, while the completion of another benchmark was delayed. The benchmark to complete action plans for all 164 state enterprises was done on schedule. The benchmark to complete divestiture of two state enterprises that were unlisted was delayed because of weak market conditions. The government also requested waivers for structural performance criteria that were not met. These criteria dealt with share sales of domestic and international telecommunications companies and submission to parliament of a draft law to institutionalize Bank Indonesia's autonomy. Although share sales of one company had been completed, other shares had not been sold due to weak market conditions. The draft law was nearing completion, and submission to parliament was expected by mid-November. On October 30, 1998, the IMF Board granted the requested waivers. On November 6, 1998, Indonesia received a $928.3 million (SDR 684.3 million) disbursement. On October 19, 1998, the government of Indonesia announced a new letter of intent incorporating adjustments to prevailing conditions. This was the third letter of intent to be announced under the EFF. This revision contained several measures to further strengthen the IMF program, especially in the areas of banking and corporate debt restructuring. The letter of intent called for lowering interest rates as long as the rupiah remained strong and inflation was falling. Development spending was to be accelerated. Monitoring of development spending was to be strengthened to protect against leakage and corruption.
The preparation of the master plan for privatization was completed—all but a few selected enterprises were to be privatized within the next decade. The program included requirements to streamline the food distribution procedures and make adequate food supplies available to the most vulnerable groups. On September 28, 1998, the government announced the formal merger of four state banks into the newly established Bank Mandiri. The next day, Bank Indonesia announced key elements of a bank recapitalization program for potentially viable private banks—including higher capital adequacy ratios, injections of new capital, lower levels of nonperforming loans in accordance with new prudential requirements, and preparation of business plans demonstrating achievement of medium-term viability and compliance with prudential regulations. Indonesia’s parliament approved amendments to the banking law on October 16, 1998, which facilitated the restructuring process by strengthening the legal powers of IBRA and its asset management unit. The Jakarta Initiative on corporate debt restructuring was expected to be fully operational by end-October. The decrees necessary to give effect to the Initiative were signed and a chairman appointed. At this time about a dozen companies, with a combined debt exposure in excess of $3 billion, were entering the process. On October 23, 1998, a draft government regulation was to be signed to provide for tax neutrality for mergers and removal of other tax disincentives for restructuring. Quantitative performance criteria were as specified in the first EFF, with targets changed. 
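The capital adequacy ratios referenced in the recapitalization program are conventionally computed as qualifying capital divided by risk-weighted assets. A minimal sketch of that calculation, using entirely hypothetical figures and risk weights (the program's actual thresholds and weights are not stated in this report):

```python
# Illustrative capital adequacy ratio (CAR) calculation:
# CAR = qualifying capital / risk-weighted assets.
# All figures and weights below are hypothetical, not the program's values.

def risk_weighted_assets(exposures):
    """Sum of exposure amounts multiplied by their risk weights."""
    return sum(amount * weight for amount, weight in exposures)

def capital_adequacy_ratio(capital, exposures):
    """CAR expressed as a fraction of risk-weighted assets."""
    return capital / risk_weighted_assets(exposures)

# Hypothetical bank: cash (0% weight), mortgages (50%), corporate loans (100%).
exposures = [(200.0, 0.0), (400.0, 0.5), (300.0, 1.0)]
car = capital_adequacy_ratio(40.0, exposures)
print(round(car, 2))  # 0.08, i.e., an 8 percent ratio
```

A recapitalization requirement of the kind described would raise the numerator (injecting new capital) or shrink the denominator (removing nonperforming loans) until the ratio clears the mandated threshold.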
New and strengthened structural policy commitments were to complete a review by Bank Indonesia of the business plans of relatively strong banks and recapitalize those banks whose business plans were accepted by Indonesia; transfer to IBRA banks determined to be insolvent and ineligible for the recapitalization plan; resolve 26 banks currently subject to IBRA control, for which audits were expected to be completed by mid-November; establish centralized control of lending decisions and treasury management in the four state banks being merged into Bank Mandiri; reach final settlement with former owners of two private banks for repayment of Bank Indonesia liquidity support; encourage the initiation of negotiations between debtors and creditors under the Jakarta Initiative; and expand the subsidized rice scheme to 17 million poor families. On November 13, 1998, the government of Indonesia issued a letter of intent and supplementary memorandum of economic and financial policies that detailed revised conditions under the EFF. The new letter of intent set out a number of additional steps to implement reforms in the key areas of corporate and financial restructuring. The letter of intent reaffirmed the government’s commitment to keep base money under control so as to stabilize prices and accommodate further appreciation of the rupiah. Progress continued to be made on lengthening the maturity structure of monetary instruments. Development expenditure was targeted to rise. The revised program sought collaboration at all levels in stepping up internal government oversight mechanisms to help identify leakages and ensure accountability. The letter of intent contained a commitment to sell majority interests in the Jakarta container port and minority interests in the Jakarta airport operations, the largest palm oil plantation in Indonesia, and the international telecommunications enterprise.
The letter of intent contained a commitment to take steps to release detailed financial information about the state logistics agency, the state oil company, and the state electric company. Banking sector reforms included requirements for recapitalization of private sector banks, resolution of debt in certain frozen banks, and other actions. There was to be a renewed effort to implement the Jakarta Initiative. A foreign exchange monitoring system was to be developed to allow Bank Indonesia to oversee foreign currency flows on a more timely basis. As of April 30, 1999, the system had been approved by the government of Indonesia but had not begun operations. The letter of intent had only one structural performance criterion: reduce export taxes on logs and sawn timber to 20 percent by end-December 1998. New and strengthened structural policy commitments included the following:
Raise aviation fuel prices to international levels.
Complete terms and conditions of the bank recapitalization bond.
Reach agreement with former owners of six banks for repayment of Bank Indonesia liquidity support and connected lending.
Issue three new prudential regulations on connected lending, the capital adequacy ratio, and the semi-annual publication of financial statements.
Establish a mechanism for the appointment of ad hoc judges to the Commercial Court.
Expand the subsidized rice scheme and increase monthly allocations to 20 kilograms per family.
Eliminate exchange rate subsidies for rice imports by the National Logistics Agency and replace them with explicit budgetary subsidies.
On December 15, 1998, IMF staff presented their third review under the EFF to the IMF Board. In its view, macroeconomic policies were on track, financial sector reform was proceeding, progress was being made on corporate restructuring, and slippages and delays in some areas were being addressed. The rupiah had strengthened, allowing money market rates to begin falling. Inflation had abruptly slowed.
Fiscal policy had been less stimulative than envisaged but development spending was accelerating. Moreover, the rice program was being broadened beyond the initial target of 7.5 million families. The privatization agenda was narrowed to 4 or 5 enterprises from the original list of 12 enterprises. Financial sector and corporate restructuring was moving forward on several fronts with the aim of restoring the soundness of the banking system. On November 7, 1998, final agreement was reached with the previous owners of four banks to repay the equivalent of 9 percent of GDP in obligations stemming from loans obtained by their enterprises from these four banks. An increasing number of companies were seeking assistance in initiating negotiations with creditors. The review noted slippages in some areas of the program, including privatization and some risk that political unrest could again derail the program. Government authorities remained reluctant to finance the restructuring costs because of the political implications. There had only been limited progress in corporate debt restructuring—further steps were needed in expediting regulatory approvals for restructuring, establishing a public registry to facilitate interim financing, and streamlining the Commercial Court. According to IMF documents, Indonesia met the indicative targets on net domestic assets and net international reserves. Data were not available for the indicative target for the central government balance, but the IMF believed that the target had been met. IMF staff recommended completion of the third review and supported the introduction of three bimonthly reviews during the first half of 1999 before moving to quarterly reviews. On December 15, 1998, the IMF Board approved completion of the review. Indonesia received a $928.3 million (SDR 684.3 million) disbursement. On March 25, 1999, the IMF completed its fourth review of the EFF and the request for augmentation of funds. 
The review was scheduled to have been completed on February 15, 1999. This was the first bimonthly review. Although progress was reported in implementing the IMF program, delays had occurred in implementing key banking and corporate restructuring measures. Nevertheless, the IMF staff was satisfied that policies and developments were continuing to evolve as well as could be expected under difficult and unsettled domestic conditions. Progress toward achieving macroeconomic stability had been helped by a firmer and more consistent monetary policy. The external current account kept its solid surplus of almost 5 percent of GDP in 1998-1999, offsetting a weaker capital account. A trade surplus of $17 billion accounted for the bulk of the improvement in the current account. In mid-March 1999, net international reserves of $15 billion remained above the program targets. Opposition political parties supported the IMF program. Indonesia continued to pose exceptional risks for the IMF, particularly until the political transition was further advanced, according to IMF staff. The economy had not yet bottomed out. Export volumes had declined sharply, and domestic banks were reluctant or unable to extend credit to exporters. Although the judiciary had not implemented the bankruptcy law in a manner consistent with international practice, Indonesian authorities were believed to be cooperating fully in carrying out a corrective strategy. This corrective strategy included proposed legislation aimed at improving governance of the judiciary and expectations that state banks and IBRA would aggressively pursue their largest borrowers. Corporate debt restructuring under the Jakarta Initiative had yet to gain momentum; only 15 companies, involving about $2 billion in foreign currency debt, had concluded debt restructurings with creditors. Several benchmarks were completed on schedule.
For example, the measure to finalize a decision on the resolution of all banks that fail the criteria for eligibility to the recapitalization program was implemented in that all these banks were closed or intervened on March 13, 1999. The IMF granted a waiver for nonobservance of the structural performance criterion—to reduce the export tax on logs and sawn timber to 20 percent at end-December 1998. The measure was adopted in February 1999. The result of the review was that on March 25, 1999, Indonesia received a $465.3 million (SDR 337 million) disbursement, and the total amount available to Indonesia was increased by $985.8 million (SDR 714 million). The government of Indonesia issued a fifth letter of intent under the EFF on March 16, 1999. This letter of intent included a number of new steps to strengthen the program—especially the banking system and corporate restructuring. Banking reform requirements included state bank resolution; private bank recapitalization; resolution of debt in banks under IBRA control; and improvement of the legal, regulatory, and supervisory framework. Steps to strengthen the corporate restructuring framework included the following: A regulation became effective that removed company law limitations on debt-to-equity conversions. The Ministry of Finance passed a decree providing more favorable tax treatment of cancellation of indebtedness income in restructurings. Legislation was to be submitted for the registration of security interests that would give certainty concerning the priority rights of lenders. Actions related to the rice situation included elimination of the state trading agency’s exchange rate subsidy for imports of rice, a public procurement floor price policy that was aimed at keeping domestic rice prices broadly in line with world prices, and the unhindered import of rice by the private sector.
To supplement the People’s Economy Initiative for development of small- and medium-sized enterprises and cooperatives, Indonesia was to review commercial lending practices toward, and the financing needs of, small- and medium-sized enterprises and cooperatives; transform the BRI state bank into a specialized bank with a mandate to lend only on commercial terms; and simplify directed credit schemes to cooperatives and small- and medium-sized enterprises and ensure that lending rates are positive in real terms, adjusting them periodically to reflect market conditions. The letter of intent set end-March and end-May quantitative performance criteria and indicative targets for the rest of 1999 and the year 2000 as well as structural performance criteria and benchmarks through September 1999. There were no new structural performance criteria in this letter of intent. Policy actions were to continue to be guided by the matrix from the November 1998 letter of intent.

Prior to Korea’s 1997 financial crisis, Korea had experienced about 30 years of economic growth and was considered to have had broadly favorable macroeconomic performance. Korea had recorded real GDP growth of about 6 percent in the first 3 quarters of 1997, and inflation was around 4 percent. Korea’s external financing crisis stemmed from fundamental weaknesses in its corporate and financial sectors. Korea had experienced a mild recession in 1993. In response, Korea’s elected officials promised growth and encouraged Korea’s conglomerates (called “chaebols”) to invest heavily in new factories. In turn, Korean firms made substantial investments, leaving Korea with excess production capacity and its firms with large debt burdens. This overcapacity led to falling prices for its main exports—computer memory chips, cars, ships, steel, and petrochemicals—and weakened profitability. The large amount of short-term borrowing compounded these other problems.
Most of the corporate debt was either short-term borrowing from domestic financial institutions or the issuance of promissory notes. At the end of December 1997, the 30 largest conglomerates owed approximately 111.3 trillion won (the Korean currency) in loans and payments to Korean banks, according to Korea’s Office of Bank Supervision. The conglomerates’ current liabilities (less than 1 year) accounted for 60 percent of total liabilities and roughly half of nominal GDP in 1996. These factors resulted in an increase in bankruptcies beginning in 1997, including those of a large Korean steel company and a car manufacturer. These bankruptcies weakened the financial system, since bank loans were not being paid off, and nonperforming loans rose sharply, causing strains in the banking system. Korean government estimates of nonperforming loans at the end of 1997 were 34.9 trillion won. Weaknesses in the banking system were thought to be based on a lack of commercial orientation (that is, a focus on increasing market share over improving profitability) and limited experience in managing risk, combined with lax prudential supervision. These factors, as well as the large-scale, external short-term borrowing of the Korean banks, made Korea vulnerable to the contagion effects of financial problems in Southeast Asia. The weak state of the banking sector led to successive downgrades by international credit rating agencies and a sharp tightening in the availability of external financing. External creditors began to reduce their debt exposure to Korean banks in the latter part of 1997, causing a sharp decline in usable reserves. A large amount of these reserves were being used to finance the repayment of the short-term debt of Korean commercial banks’ offshore branches. Historically, Korean authorities had a policy of not letting private banks go into default.
Consequently, the Bank of Korea was providing foreign exchange support to commercial banks as foreign creditors reduced their exposure on short-term lines of credit. The Bank of Korea, Korea’s central bank, held a total of $20.4 billion in foreign currency reserves at the end of December 1997, of which $8.9 billion was usable. As of December 31, 1997, the total amount of Korea’s private and governmental external liabilities was $154.4 billion, calculated under IMF standards. The Korean government estimated that at the end of December 1997, approximately $27.3 billion was due by the end of the first quarter of 1998. The ability of Korea to repay its short-term foreign debts was dependent on the willingness of foreign lenders to extend the terms of existing loans and/or to offer new financing. Korea had made earlier attempts to reform the financial sector and had taken steps to liberalize its capital account. Korea permitted short-term foreign borrowing but had not allowed domestic banks access to longer-term foreign borrowing, which added to Korea’s financing problems. Korea was faced with depleted foreign reserves and a rapidly depreciating currency when it asked for IMF assistance in late November 1997. It had been 10 years since Korea had had an IMF program, and Korea did not have any outstanding IMF credit. Korea had made its last repayment of prior borrowings to the IMF in 1988. Table V.1 presents a history of Korea’s recent financial problems.
Hanbo Steel, a large Korean conglomerate, collapses under $6 billion in debts, the first bankruptcy of a Korean conglomerate in a decade.
President’s Committee on Financial Sector Reform recommends short-term reform measures.
Thailand devalues its currency, the baht.
Kia, Korea’s third largest carmaker, requests emergency loans.
Korean government announces plan for providing special financing for certain commercial and merchant banks.
Announced government guarantee for overseas foreign currency borrowings by Korean commercial banks.
IMF mission goes to Seoul for an Article IV consultation.
Credit rating agencies begin to downgrade the ratings of Korea and Korean companies to below investment grade.
Kia Motors Corp. announces bankruptcy.
Bank of Korea intervenes to attempt to halt the decreasing value of the won.
IMF announces it is ready to provide assistance if needed.
Bank of Korea loosens band on currency, won begins to drop sharply.
Korean government requests IMF assistance.
Korea bank asset workout program announced.
Korea Asset Management Corporation reorganized to acquire and dispose of nonperforming loans.
$21 billion IMF package announced, which was part of a larger financing package totaling about $58 billion.
Korea eliminated its daily currency exchange rate band.
IMF staff conduct first biweekly review of Korea’s program.
South Korea elected opposition party candidate Kim Dae-jung to serve a 5-year presidential term.
Moody’s rating service announces that it lowered Korea’s foreign currency ratings.
Won drops to its low of 1,963 won to the dollar.
Standard & Poor’s announces that it lowered Korea’s long-term foreign currency credit ratings.
IMF funding accelerated, debt restructuring talks begin.
IMF and 12 country lenders agree to advance Korea $10 billion to prevent default.
Korea issues second letter of intent with accelerated and strengthened reforms.
Korea’s National Assembly passes 13 financial reform bills designed to facilitate financial sector restructuring, accelerate capital market liberalization, and improve prudential regulation.
IMF conducts second biweekly review of Korea’s program.
$22 billion in Korean foreign debt restructured.
Tripartite accord (among labor, management, and the government) reached on Korea’s restructuring program and sharing the burden of reform.
IMF conducts first quarterly review of Korea’s program.
President Kim and the new administration take office.
Korea issues global bond offering of $4 billion to add to its official reserves.
Financial Supervisory Commission formed.
IMF conducts second quarterly review of Korea’s program and completes its Article IV consultation.
Korea signs memorandum of understanding with the World Bank for implementing corporate sector reforms.
IMF conducts third quarterly review of Korea’s program. Review completed and disbursement made.
IMF conducts fourth quarterly review of Korea’s program. Korea requests waiver for obtaining bids for the sale of Korea First Bank and Seoul Bank. IMF Board approves waiver, review is completed, and disbursement made.
Korea First Bank signs memorandum of understanding with Newbridge Capital for sale of Korea First Bank.
Seoul Bank signs memorandum of understanding with HSBC for sale of Seoul Bank.
IMF conducts fifth quarterly review of Korea’s program. IMF recommends waivers for completion of an audit of Korea Asset Management Corporation and delivery of recommendations based on a financial supervisory review of Korea Development Bank. The financial supervisory review was conducted within the timetable under the review, and the remaining actions were subsequently completed. IMF completes review and disbursement was made.
On December 4, 1997, the IMF approved a 3-year stand-by arrangement with Korea for an amount equivalent to 15.5 billion special drawing rights (SDR), or about $21 billion. This program was formulated under emergency procedures and later drew on the IMF’s newly established Supplemental Reserve Facility. The World Bank and the Asian Development Bank committed $14 billion to the Korean government. In addition, interested countries pledged $22 billion as a second line of defense for a total package of $58.4 billion. At the time of the announcement, the IMF staff team continued to work with Korean officials to develop more fully the policy measures for the program.
The full program was to be reviewed by the IMF’s Executive Board in January 1998. It was planned that the review would expand the scope of the performance criteria and set performance measures and benchmarks for 1998. Customary clauses were also included as conditions for Korea’s IMF program. Each subsequent review adjusted and expanded the performance criteria for the next reviews; that is, they were set as “rolling” performance criteria. The IMF’s monitoring of Korea’s program started with two biweekly reviews in 1997 and quarterly reviews for 1998 and the first quarter of 1999. After the fifth quarterly review in March 1999, the IMF plans to conduct reviews every 6 months, and Article IV consultation discussions are planned for June or July 1999. The IMF program for Korea included a combination of macroeconomic policies—changes to monetary and fiscal policies—and structural reforms. The IMF-directed response was to tighten monetary policy (including raising interest rates to stabilize the currency) and to reduce government spending, along with an ambitious reform program for financial sector and corporate restructuring. Macroeconomic policies were an essential part of Korea’s program. The large official financing package was assembled to help break the cycle of capital outflows, exchange rate depreciation, and financial sector weakness. However, compared with other countries’ IMF programs, the structural reforms in Korea, as well as in Indonesia and Thailand, were central to dealing with the underlying causes of the financial crisis, restoring market confidence, and setting the stage for resuming and sustaining growth in Korea.
According to Korea’s December 3, 1997, IMF letter of intent, Korea’s IMF program was “built around (1) a strong macroeconomic framework designed to continue the orderly adjustment in the external current account and contain inflationary pressures, involving a tighter monetary stance and substantial fiscal adjustment; (2) a comprehensive strategy to restructure and recapitalize the financial sector, and make it more transparent, market-oriented, better supervised and free from political interference in business decisions; (3) measures to improve corporate governance; (4) accelerated liberalization of capital account transactions; (5) further liberalization of trade; and (6) improvement in the transparency and timely reporting of economic data.” The broad policy goals of restoring investor confidence and building international reserves have remained throughout the program, although the emphasis has changed and adjustments have been made in specific targets as Korea’s reforms progressed. Korea’s macroeconomic program included monetary and fiscal policy measures. The initial letter of intent did not fully specify Korea’s reform program but did provide a framework of reforms that Korea intended to pursue. IMF staff continued to work with Korean officials to develop more detailed policy measures to be taken. To monitor Korea’s progress under the program, the initial agreement detailed the following quarterly quantitative performance criteria: a ceiling on net domestic assets of the Bank of Korea; a floor on net international reserves of the Bank of Korea; and a floor on the interest rate charged by the Bank of Korea on foreign exchange injections to Korean commercial banks or their overseas branches, set at 400 basis points above LIBOR. These quantitative macroeconomic performance criteria, in addition to other indicative targets, structural performance criteria, and structural measures, were used to monitor Korea’s progress.
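The three quantitative criteria amount to simple threshold tests: net domestic assets at or below a ceiling, net international reserves at or above a floor, and the foreign exchange injection rate at or above LIBOR plus 400 basis points (4 percentage points). A minimal sketch, using hypothetical figures rather than the actual program targets:

```python
# Hypothetical sketch of checking the three quarterly quantitative
# performance criteria. Ceiling, floor, and rate values are illustrative,
# not the actual targets negotiated under the program.

def meets_criteria(nda, nda_ceiling, nir, nir_floor,
                   injection_rate_pct, libor_pct, spread_bp=400):
    """Return pass/fail results for the three quarterly criteria."""
    rate_floor_pct = libor_pct + spread_bp / 100.0  # 400 bp = 4 percentage points
    return {
        "nda_ceiling": nda <= nda_ceiling,   # ceiling on net domestic assets
        "nir_floor": nir >= nir_floor,       # floor on net international reserves
        "rate_floor": injection_rate_pct >= rate_floor_pct,
    }

# Hypothetical quarter: NDA of 14.0 against a 15.0 ceiling, NIR of 9.0
# against an 8.0 floor, and a 10.0 percent rate with LIBOR at 5.5 percent
# (so the rate floor is 9.5 percent).
results = meets_criteria(14.0, 15.0, 9.0, 8.0, 10.0, 5.5)
print(results)  # all three criteria met
```

Missing any one of these thresholds in a test date would, as the surrounding reviews illustrate, require either corrective action or a waiver before the next disbursement.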
The IMF and Korea also agreed to indicative targets to monitor Korea’s economic progress, including a floor on the consolidated central government balance, reserve money, and broad money (M3). The principal macroeconomic objectives of Korea’s IMF program, as detailed in the initial December 3, 1997, letter of intent, included building the conditions for an early return of confidence so as to limit the deceleration of real GDP growth to about 3 percent in 1998, followed by a potential recovery in 1999; containing inflation at or below 5 percent; and building international reserves to more than 2 months of imports by end-1998. The main objective of the monetary policy was to contain inflation to 5 percent in 1998 and limit depreciation of the won. To demonstrate to markets the government’s resolve to confront the crisis, monetary policy was tightened immediately—interest rates were raised—to restore and sustain calm in the markets and contain the inflationary impact of the won depreciation. The government of Korea reversed its policy of providing liquidity to Korean banks and allowed money market rates to rise to a level sufficient to stabilize markets. The day-to-day conduct of monetary policy was guided by movements in the exchange rate and short-term interest rates, which were used as indicators of how tight monetary conditions were. A flexible exchange rate policy was maintained, with monetary and exchange rate policy being implemented in close coordination with IMF staff. Fiscal policy in Korea had traditionally been formulated prudently, according to the IMF. In recent years, the Korean government’s budget was in broad balance, with government savings of around 8 percent of GDP and a low level of public debt. Unlike economic problems in Latin America (large public debts), the Korean crisis was centered in the private sector.
For 1998, Korea was to maintain a tight fiscal policy—by cutting government spending and raising certain taxes—to limit upward pressure on interest rates and to provide for the still uncertain costs of restructuring the financial sector. The quantitative performance criteria were adjusted at subsequent reviews to reflect changes in economic assumptions, discussed more fully below. The first quarterly review of the full program was completed in February 1998, which expanded the scope of performance criteria and set performance criteria and benchmarks for 1998. Two biweekly reviews were conducted in the interim period after announcement of the Korea program and before the first quarterly review in February 1998. The IMF used numerous structural performance criteria to monitor Korea’s progress in making structural reforms. Korea’s structural reforms focused on financial sector reforms, capital account liberalization, strengthening corporate governance and corporate structure, labor market reforms, trade liberalization, and information provisions and program monitoring. After the first IMF quarterly review, measures to increase spending for Korea’s social safety net, including unemployment insurance, were added to the program. The third quarterly review added a World Bank component on corporate sector reforms. For monitoring Korea’s reforms, the IMF set benchmarks in the initial letter of intent for the first and second biweekly reviews. As Korea implemented its reforms, the structural performance criteria used to monitor progress changed to reflect the reforms undertaken (see table V.2 and discussion that follows). 
The IMF set Korea’s benchmark for the first biweekly review “to comply with the understandings between the Korean government and the Fund staff regarding the implementation of interest rate policy.” For the second biweekly review, to be completed on January 8, 1998, Korea was “to call a special session of its National Assembly, shortly following its presidential elections in December 1997 to pass reform bills on financial sector reforms, capital account liberalization, and trade liberalization.” Korea was also “to publicize its foreign reserve data.” Also, “the Bank of Korea was not to increase its deposits with nonresident branches and affiliates of domestic financial institutions after December 1997.” At the first quarterly review, and at each quarterly review throughout 1998, the IMF and Korea agreed to additional specific structural performance criteria to monitor Korea’s reform efforts. For example, at the third quarterly review, Korea was to obtain bids for the sale of Korea First Bank and Seoul Bank by November 15, 1998. Korea was monitored against this performance criterion at its fourth quarterly review in December 1998. Table V.2 details Korea’s reported progress and changes in its structural performance criteria from the initial IMF program in December 1997 through the fifth IMF quarterly review in March 1999. For each review date, the table lists the structural benchmarks and performance criteria to be met and the status reported.
Table V.2: Structural Performance Criteria for Korea’s IMF Program
First biweekly review, 12/17/1997. Structural benchmark set: Compliance with understandings between the Korean authorities and the IMF regarding the implementation of interest rate policy.
Second biweekly review, 1/8/1998.
Structural benchmarks set: Call a special session of the National Assembly after elections to pass reform bills that (1) revise Bank of Korea Act to provide central bank independence; (2) consolidate bank supervision; and (3) require corporate financial statements to be prepared on a consolidated basis and certified by external auditors. Submit legislation to harmonize the Korean regime on equity purchases with the Organization for Economic Cooperation and Development’s practices. Call rate rose to about 30 percent on Dec. 24, 1997. Increase in interest rate cap from 25 percent to 40 percent was approved by cabinet on Dec. 16 and became effective Dec. 22, 1997. Passed the three financial reform bills by the National Assembly on Dec. 29, 1997. The Financial Supervision Board will be under the Prime Minister’s office. Submit legislation concerning hostile takeovers to harmonize Korean legislation on abuse of dominant positions in line with industrial countries’ standards. Publication of foreign reserve data. The Bank of Korea’s deposits with nonresident branches and affiliates of domestic institutions will not be increased after end-Dec. 1997. Eliminate interest rate ceiling. Korea was to submit legislation to National Assembly to remove interest rate ceiling as soon as necessary procedures are completed, but not later than Feb. 28, 1998. Raised ceiling on aggregate foreign ownership of listed Korean shares from 26 to 50 percent and the individual ceiling from 7 to 50 percent on Dec. 11, 1997. Raised the aggregate ceiling on foreign investment in Korean equities to 55 percent on Dec. 30, 1997. Under Korea’s foreign direct investment law, Korea already allowed foreign investors to buy equity in the stock market (as well as over the counter) for the purpose of friendly mergers and acquisitions, without limits. Legislation submitted to allow greater foreign ownership of banks. It was announced that foreign participation in merchant banks would be allowed without limit.
Publishing data on Korea’s foreign reserves began Dec. 17, 1997. Data on usable reserves of the BOK is published twice monthly (for the 15th and the last day of each month) within 5 business days. Data on the net forward position of the Bank of Korea is being published monthly. All of these data were placed on the Bank of Korea’s web site, starting May 15, 1998. Began Dec. 24, 1997. The Bank of Korea was to limit its funding of financial institutions to short-term liquidity support, which the BOK offered to commercial banks through its liquidity support program. Increase in interest rate cap from 25 percent to 40 percent was approved by cabinet on Dec. 16, 1997, and became effective on Dec. 22, 1997. Assume government control of Korea First Bank and Seoul Bank and request the management of these banks to write down the equity of existing shareholders. These banks came under intensive supervision beginning Dec. 24, 1997. The equity capital was written down, and the government recapitalized these banks and took effective control of the banks by Jan. 31, 1998. By March 31, 1998: Complete second round evaluation of the remaining 20 merchant banks and suspend operations of those banks that fail to pass the evaluation. Allow foreign banks and brokerage houses to establish subsidiaries. Completed Feb. 26, 1998. Completed June 29, 1998. Legislation was enacted to allow the writedown of existing shareholders’ equity in insolvent financial institutions. By June 30, 1998: Complete an assessment of the recapitalization plans of commercial banks. Introduce legislation to allow a full writedown of existing shareholder equity, eliminating the current minimum bank capital floor for this purpose. Establish a unit for bank restructuring under the Financial Supervisory Board with adequate powers and resources to coordinate and monitor bank restructuring and provision of public funds. In addition to the end-June performance criteria, IMF added the following for end-Sept. 1998: Submit legislation to allow for the creation of mutual funds (by Aug. 31, 1998). Require listed companies to publish half-yearly financial statements prepared and reviewed by external auditors in accordance with international standards (by Aug. 31, 1998). For end-Dec. 1998: Obtain bids for Korea First Bank and Seoul Bank (by Nov. 15, 1998). Unit established on Apr. 1, 1998. Legislation submitted to the National Assembly on Aug. 8, 1998; related legislation put into effect in Sept. 1998. Completed. At the fourth quarterly review, the IMF staff recommended a waiver to extend the date for obtaining bids for Korea First Bank and Seoul Bank from Nov. 15, 1998, to end-Jan. 1999. Korea First Bank: memorandum of understanding signed with Newbridge Capital, Dec. 31, 1998; Seoul Bank: memorandum of understanding signed with HSBC on Feb. 22, 1999. Completed July 1998. Introduce consolidated foreign currency exposure limits for banks, including their offshore branches (by Nov. 15, 1998). In addition to end-Dec. 1998 performance criteria, additional criteria were set for end-March 1999: To complete an audit of Korea Asset Management Corporation to international standards by a firm with international experience in auditing this type of agency and to reflect any losses identified in the Korea Asset Management Corporation’s financial statement. The Financial Supervisory Commission to complete supervisory examination of the Korea Development Bank and make recommendations to Ministry of Finance and Economy, as needed, as to any remedial actions required. IMF staff recommended a waiver for this action at the fifth quarterly review but it has since been completed. External audit report completed March 12, 1999. Losses identified in external audit report were reflected in the Korea Asset Management Corporation’s financial statement as of April 30, 1999. IMF staff recommended a waiver for this action at the fifth quarterly review but it has since been completed.
The Financial Supervisory Commission completed its examination of the Korea Development Bank March 20, 1999. Recommendations coming from the examination were submitted to the Ministry on April 26, 1999.

Structural benchmarks and performance criteria to be met for the period of April 1-August 31, 1999:
(1) Issue a regulation by April 1, 1999, requiring insurance companies that fail to meet the mandatory solvency margin thresholds (specified in the Memorandum of Economic Policies for the fifth review of the stand-by arrangement) to submit recapitalization plans by July 31, 1999.
(2) By June 1, 1999, begin publishing data on revenue, expenditure, and financing of the consolidated central government on a monthly basis with no more than a 4-week lag.
(3) By June 30, 1999, issue new loan classification guidelines that fully reflect capacity to repay. These guidelines would also cover the treatment of restructured loans and the valuation of equity and convertible debt acquired as part of corporate restructuring.
(4) For merchant banks, implement prudential rules for foreign exchange liquidity and exposures based on a maturity ladder approach by July 1, 1999.
(5) Issue instructions, effective July 1, 1999, that at least 20 percent of the new guarantees issued by the Korea Credit Guarantee Fund and the Korea Technology Guarantee Fund will cover only 80-90 percent of the value of guaranteed obligations, depending on the credit rating of the firm.
Status: Ongoing. (1) The regulation was issued on March 26, 1999.

The centerpiece of Korea's structural reform package was financial sector restructuring. Korea's goals were to have a sound, transparent (with financial reporting improved according to international accounting standards), and more efficient financial system. Korea had already begun efforts to reform its financial sector before seeking IMF assistance but had not been successful in passing reform legislation.
Korea’s initial IMF letter of intent detailed the government’s plans for addressing the financial restructuring of the banks. The Korean government, in consultation with the IMF, prepared a comprehensive action program to strengthen supervision and regulation in accordance with international best practices. The IMF agreement built upon the framework for financial sector reforms that the Korean government had published in November 1997. In its original letter of intent, Korea specified the need for a credible and clearly defined method for closing troubled banking institutions. The strategy required that troubled institutions present viable rehabilitation plans and close those insolvent financial institutions that failed to carry out their rehabilitation plans within specified periods. Korea also planned to set a timetable for all banks to meet or exceed Basle capital standards. The disposal of nonperforming loans was to be accelerated. All forms of assistance to banks, including financing from the Korean Asset Management Corporation and the deposit insurance funds, would be provided only as part of viable rehabilitation plans. All support to financial institutions, other than Bank of Korea liquidity credits, were to be recorded transparently in the fiscal accounts. In addition, blanket guarantees were to be phased out and replaced by a limited deposit insurance scheme. In its first IMF agreement, Korea stated its intentions to restructure and recapitalize troubled financial institutions. Timeframes and rules for doing this were detailed in later agreements that accelerated and strengthened Korea’s plans for addressing these problems. For example, the Koreans were successful in passing financial reform legislation and established a high-level team to negotiate with foreign creditors by the end of December 1997. 
The Korean government (1) appointed a high-level task force to develop and implement a strategy to address the financial crisis, (2) assumed control of Korea First Bank and Seoul Bank and hired outside experts to develop a privatization plan, and (3) hired experts to conduct due diligence with respect to the balance sheets of merchant banks and to assess the rehabilitation plans. Other measures included in Korea's initial IMF agreement were reforms in capital account liberalization, corporate governance and corporate structure, labor market reforms, and information provisions and program monitoring. (The Basle capital standards are formula based and apply risk weights to reflect different gradations of risk. The rules have been amended since 1992; one of the most notable changes is the establishment of risk-based capital requirements to cover market risk in bank securities and derivatives trading portfolios.) Reforms of the capital account were aimed at increasing competition and efficiency in the financial system. The schedule for allowing foreign entry into the domestic financial sector was to be accelerated. The United States supported these reforms and sought to move them forward quickly. Treasury officials told us that these were conditions they considered necessary to address underlying structural problems. More details were added in later agreements about the other structural reforms. For example, details about support for Korea's social safety net were added after the first quarterly review in February 1998. As part of monitoring Korea's progress in meeting IMF conditions, the IMF conducted quarterly reviews. After these quarterly reviews, monetary and fiscal targets were revised for the conditions outlined in the original IMF agreement. From the initial review to Korea's present program, the IMF added details and conditions to structural reforms that address underlying problems in the financial and corporate sectors.
According to IMF, Treasury, and State Department officials, changes in conditions for Korea's program reflected the progress made under the IMF's program. Korea's initial program was intended to restore market confidence and limit private capital outflows through the large, heavily front-loaded financing package, together with sound economic policies. However, according to program documents and our discussions with IMF officials, the program was not initially successful in restoring investor confidence, and private capital outflows far exceeded program projections. According to IMF officials, the changes made to Korea's macroeconomic targets reflected worsening conditions in the external environment (for example, the weakening of the Japanese yen, which affected Korea's export competitiveness), and the targets were adjusted to match actual economic data. Nevertheless, the IMF was criticized because the policies taken in Korea to stabilize the economy caused monetary conditions to become too tight. IMF and Treasury officials told us the IMF projections were overly optimistic at the beginning of the program, based on Korea's past positive growth, and emphasized that the IMF did not accurately project the "rolling financial crisis" throughout Asia. According to IMF officials and program documents, Korea's response to the program was slow at first because of its national presidential election on December 18, 1997. The positive impact of the announcement of the IMF program on exchange and stock markets was small and short-lived. In the 2 weeks from the announcement until the first biweekly review, the won dropped to its low of 1963 won per dollar on December 23, 1997. Before the crisis, the value of the won was 915 to the dollar on September 30, 1997. Investor confidence was further undermined by doubts about Korea's commitment to the IMF program, as the leading candidates for the presidential election hesitated to endorse it publicly.
Moreover, new information became available about the state of Korea’s financial institutions, the level of its usable reserves, and short-term obligations falling due, raising concerns among investors about Korea’s widening financing gap. Part of Korea’s agreement was to improve transparency in its financial reporting because the levels of usable international reserves, corporate debt, or banks’ nonperforming loans had not been readily apparent from published data. A temporary agreement was reached with the private, foreign bank creditors on December 24, 1997, to continue lending to Korean borrowers (to roll over short-term loans), and discussions on voluntary rescheduling of short-term debt were initiated. At the same time, Korea issued another letter of intent requesting the IMF to accelerate its funding, which the IMF agreed to do. Specifically, on December 24, 1997, Korea asked the IMF to modify the disbursement date under the stand-by agreement to December 30 from the original date of January 8, 1998, to permit an advancement of its IMF drawings. In negotiating the advancement of funds, Korea agreed to strengthen its structural reform agenda to accelerate financial sector restructuring and facilitate capital inflows into the domestic economy and bond market. Interest rates were raised significantly to about 30 percent at end-December 1997 from rates of about 12 percent in September 1997. Conditions for the Bank of Korea to provide foreign currency liquidity support to banks were tightened. One condition (quantitative performance criterion) of the IMF agreement was to raise the interest rate on Bank of Korea foreign exchange loans to commercial banks. These actions were considered a signal of a clear commitment by the incoming administration to support reforms under the IMF program. According to IMF documents, signs that Korea’s economy was stabilizing emerged by the time of the second biweekly review on January 8, 1998. 
Korea met the end-December 1997 quantitative performance criteria for the net domestic assets and net international reserves. The other conditions for the review were met, and efforts to liberalize Korea's capital account were accelerated substantially. For example, Korea lifted the restriction on foreign borrowing of over 3-year maturity on December 16, 1997. To address Korea's vulnerability to its short-term debt and improve its rollover rates, on January 28, 1998, Korea reached an agreement-in-principle with private bank creditors. IMF and Treasury documents note that this agreement was a voluntary rescheduling of Korean banks' short-term debt into loans with longer-term maturities. The agreement covered interbank deposits and short-term loans maturing during 1998, equivalent to about $22 billion. The IMF completed its first full quarterly review of Korea's program in February 1998. According to IMF documents, Korea's exchange market situation was improving, but there were growing signs of a decline in economic activity. According to IMF, Treasury, and Korean officials, the agreement with bank creditors had helped to improve Korea's financing conditions. Korea's usable reserves had increased, and the won had appreciated by nearly 20 percent from the low in late December 1997. In terms of fiscal policy, the IMF said it had proved difficult to adjust government spending rapidly. With the large currency depreciation occurring and domestic demand contracting, the IMF made adjustments in Korea's program. The revised program was based on lower (but still marginally positive) growth projections. The fiscal target for 1998 was lowered from a surplus of 0.2 percent of GDP in the original program (including bank restructuring costs) to a deficit of 0.8 percent of GDP. The IMF and Korea agreed that Korea would maintain a tight monetary policy as long as the exchange market situation continued to be fragile.
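The exchange-rate percentages cited in this discussion (the won at 915 per dollar before the crisis, its low of 1963 per dollar on December 23, 1997, and the subsequent appreciation of nearly 20 percent) can be sanity-checked with simple arithmetic. The sketch below is illustrative only and assumes one common convention: measuring the won's change in terms of its dollar value, the reciprocal of the won-per-dollar rate.

```python
# Illustrative arithmetic (not from the report): relating the quoted
# won-per-dollar rates to percentage changes in the won's value.
pre_crisis = 915.0   # won per dollar, Sept. 30, 1997
low = 1963.0         # won per dollar, Dec. 23, 1997 (the low)

# The won's dollar value is the reciprocal of the won-per-dollar rate,
# so the loss in value from Sept. 30 to the Dec. 23 low is:
loss = 1 - pre_crisis / low          # roughly 0.53, i.e., the won lost about half its dollar value

# A "nearly 20 percent" appreciation from the low implies a rate of roughly:
implied_rate = low / 1.20            # about 1,636 won per dollar

print(round(loss, 3), round(implied_rate))
```

Note that an alternative convention, measuring the change in the won-per-dollar rate itself, would give somewhat different percentages; the report does not specify which convention it uses.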
While Korea had already taken a number of steps to implement the program’s comprehensive structural reform agenda, the revised program specified additional commitments in financial sector restructuring and capital account and trade liberalization. For example, Korea was to establish a unit for bank restructuring under the Financial Supervisory Board with adequate powers and resources to coordinate and monitor bank restructuring and the provision of public funds. Korea established this unit in April 1998. After the new government took office in late February 1998, business, labor, and the government reached a tripartite accord. Based on this accord, the reform agenda was broadened to include measures to strengthen the social safety net, increase labor market flexibility, promote corporate restructuring, and enhance corporate governance. According to IMF documents and announcements, Korea’s program remained on track, and market confidence in the new government’s commitment strengthened. Growth projections were marked down further during the second quarterly review, which was completed May 29, 1998. Korea had successfully launched a global sovereign bond issue, significant capital inflows into the domestic stock and bond market had been registered, and usable reserves now exceeded $30 billion. According to IMF documents, Korea’s sharp decline in economic activity, however, was weighing heavily on corporations, necessitating an acceleration of structural reforms in the financial and corporate sectors. Korea had lowered interest rates, but monetary policy continued to focus on maintaining exchange market stability. In view of the weaker outlook for growth, the fiscal target was eased further to permit automatic stabilizers (that is, adjustments in tax and government spending) to take effect. In Korea’s July 1998 letter of intent, Korea reported that it had made substantial progress in overcoming its external crisis. 
However, market sentiment weakened somewhat in June in view of growing concerns about the domestic recession and the impact of economic conditions in the region. Nevertheless, the won remained broadly stable and appreciated vis-à-vis the U.S. dollar in July, permitting Korea to further lower interest rates to pre-crisis levels. The Korean government prepared a supplementary budget to support economic activity and strengthen the social safety net. Output was now projected to decline by 4 percent in 1998, inflation had decelerated and was expected to average 9 percent during the year, and the current account surplus was expected to reach nearly $35 billion (over 10 percent of GDP). The IMF's third quarterly review, completed on August 28, 1998, focused on a further easing of Korea's macroeconomic policies to mitigate the severity of the recession and on strengthening Korea's structural reform agenda. For example, Korea broadened its corporate restructuring efforts significantly, supported by the World Bank. In a July 23, 1998, memorandum of understanding between the government of Korea and the World Bank, Korea agreed to develop a framework and capacity to do voluntary corporate workouts and to provide policy support for corporate restructuring, in addition to taking other actions to reform the corporate sector. By the end of October 1998, Korea had drawn $27.2 billion of the total financing package, including $18.2 billion from the IMF and $9 billion from the World Bank and the Asian Development Bank. Output was projected to contract by 5 percent in 1998, inflation had decelerated further and was expected to average 8.5 percent during the year, and the current account surplus was still expected to reach nearly $35 billion. Exchange market conditions permitting, interest rates were to be lowered again. According to Korean officials, they reluctantly agreed with the IMF to raise Korea's fiscal deficit target to 4 percent of GDP.
Korea introduced a supplementary budget to increase government spending, including additional spending for social programs for those most affected by Korea's recession. The IMF completed its fourth review of Korea in December 1998. The IMF staff recommended, and the Executive Board granted, a waiver for the structural performance criterion to obtain bids for the sale of two Korean banks. According to IMF staff, Korea's implementation of policies had been good, and all the quantitative criteria had been observed. It was apparent that Korea would not obtain bids for selling the two Korean banks by the November 15, 1998, deadline, although the bidding process had begun. Since the World Bank was assisting Korea with this process, according to IMF staff, completing this action was a matter of timing, and it was necessary to allow a sufficient period for Korea to complete these negotiations. This action has since been completed. The IMF Executive Board met on April 7, 1999, for Korea's fifth quarterly review. According to IMF documents, the Korean authorities met all their quantitative performance criteria for end-December 1998 and fulfilled their policy commitments under the program. However, the IMF staff recommended waivers for (1) completing an audit of the Korea Asset Management Corporation to reflect any losses identified during the audit in its financial statement and (2) delivering recommendations based on a financial supervisory review of the Korea Development Bank. According to IMF and Treasury officials, Korea has since completed these actions. Korea completed its audit of the Korea Asset Management Corporation on March 12, 1999, and the losses identified during the audit were reflected in the corporation's financial statement as of April 30, 1999.
Also, Korea’s Financial Supervisory Commission finished its supervisory examinations of the Korea Development Bank on March 20, 1999, (within the timetable of the review) and made recommendations to the Ministry of Finance and Economy on April 26, 1999. IMF, Korean, U.S. Treasury, and State Department officials we spoke with were consistent in their views that Korea’s reform efforts remain strong, but difficult reforms still need to be made in Korea’s corporate sector. As noted earlier, Korea’s program began slowly due in part to a presidential election. But to date, Korea has made substantial progress in its financial sector reforms. The U.S. Department of the Treasury reported to Congress that Korea had complied with its IMF program. The Treasury reported that Korea’s external financing crisis has been alleviated—the Bank of Korea’s usable foreign exchange reserves recently surpassed $50 billion, reflecting a current account surplus in 1998 of nearly 12 percent of GDP and strong net inflows of portfolio capital. According to the Treasury’s report, Korea’s short-term external liabilities declined by nearly half, from $63.2 billion at the end of 1997 to an estimated $32.5 billion at the end of 1998. The Treasury also reported that Korea’s continued adherence to the restructuring program set forth by the IMF and World Bank will be crucial to Korea’s sustained recovery. Korea has already begun to repay its IMF borrowings for a total of about $6.1 billion, as of April 30, 1999. According to Korean government documents, Korea’s domestic economy remains weak, although stable. While Korea’s economy still is vulnerable to external shocks, the government is projecting growth for 1999. IMF officials have changed its growth projections for 1999 from a negative 1 percent to a positive 2 percent GDP growth rate. As of April 1999, other private sector projections for Korea were also more optimistic. 
Some officials we spoke with noted that Korea still faced difficult reforms in its corporate sector and emphasized that it would take time for Korea to complete the reforms it has begun.

The early IMF programs in Russia faced unsettled conditions, systemic problems, and large macroeconomic imbalances. During 1992-94, the initial period of market reform, Russia received financial assistance from the IMF in the form of a first credit tranche Stand-By Arrangement (SBA) and two purchases under the Systemic Transformation Facility (STF). From the outset, the Russian economic programs focused on reducing macroeconomic imbalances and moving toward a market-based economy. The IMF, along with the World Bank and other bilateral and multilateral agencies, also began providing a broad range of technical assistance to develop the supporting macroeconomic management capability. These early programs were implemented under unsettled political and constitutional conditions that severely complicated the already daunting task of stabilizing the economy while transforming its basic features. While significant reductions in the fiscal deficit and curtailed credit expansion aided a decline in consumer price inflation from 2,500 percent at end-1992 to around 200 percent at end-1994, none of the programs was successfully carried through: stabilization remained elusive, reforms fell short of the goals, and inflation remained excessive. The 1995 SBA was negotiated over several months against the backdrop of policy failures and worsening economic performance. For example, in January 1995, midway through program negotiations, the monthly inflation rate accelerated to 18 percent and there was a further $1 billion reserve loss. The SBA was approved in April 1995, despite a large measure of uncertainty regarding the Russian government's ability and determination to implement the program.
The program itself was characterized by what the IMF considered to be a large reliance on expenditure restraint. The SBA program focused on Russia's achieving a substantial and sustained reduction in inflation, seen as essential for economic recovery. This was to be effected by imposing an even tighter monetary policy and a reduction of the deficit from 5 percent of GDP in 1996 to 2 percent of GDP in 1998. Although inflation in Russia declined significantly in 1995 (consumer price inflation was 134 percent at the end of 1995), it nonetheless remained significantly above the level targeted in the SBA program. The focus of the 1996 arrangement was on reducing fiscal and monetary imbalances while transitioning to a market-based economy. The primary problems were the fiscal deficit, weak tax collection, and excessive government spending. The recently terminated $10-billion, 3-year EFF arrangement, approved by the IMF in March 1996, was negotiated on the heels of the 1995 12-month SBA arrangement, under increasingly adverse political circumstances. The program's broad objectives were to achieve financial stabilization while transitioning to a market-based economy and to lay the basis for sustained growth. This was to be accomplished by reducing the budget deficit from around 5 percent in 1995 to 4 percent in 1996 and 2 percent in 1998, lowering the inflation rate from around a 7-percent monthly average in 1995 to 1.9 percent per month in 1996, and implementing key structural reforms. In addition to improving tax administration and limiting government expenditures, the fiscal strategy was to reduce the deficit by improving revenue collections, raising the revenue-to-GDP ratio from around 10 percent in 1995 to 11 percent in 1996 and to 15 percent by 1999. The monetary strategy was to continue to lower inflation and strengthen the banking system by resolving the problem of weak and insolvent banks.
At the time the IMF and Russia were negotiating the 1996 arrangement, the critical problems facing Russia were, and continue to be, fiscal and monetary imbalances, combined with very slow progress toward a functioning, market-based economy. At the heart of the fiscal deficit problem lay weakness in tax revenue collection and government spending in excess of what was affordable. To address the revenue problem, the program focused on improved tax administration, collecting outstanding tax arrears (especially from the energy sector), and eradicating the culture of nonpayment. The Russian government also agreed to resist strong spending pressures and to make cuts in noninterest spending to achieve the deficit reduction target. A restrained credit stance was intended to lower inflation further toward a single-digit annual rate and to serve as the first line of defense against depreciation pressures on the ruble. The 3-year EFF program also continued to press for implementation of the structural reforms key to a market-based economic system, including improving the structure of government spending and treasury functions, strengthening the banking system, reaccelerating the privatization process, and completing the process of trade policy liberalization. Key to evaluating Russia's progress in the program, and to the decision to release the next quarter's loan tranche, were the quarterly performance criteria. These quantitative quarterly performance criteria included the following fiscal, monetary, and international reserve targets:
- a floor on federal and enlarged (including regional and extrabudgetary funds) federal government cash revenue;
- a limit on the stock of net domestic assets of the monetary authority (that is, currency in circulation and bank deposits at the Central Bank of Russia);
- a limit on the monetary authority's claims on the federal and enlarged governments; and
- floors on both gross and net international reserves.
The 1996 plan was based on an ambitious structural reform program aimed at improving the functioning of markets. The following are some of the 20 structural benchmarks proposed under the 1996 program:
- By March 31, 1996, Russia was to complete an evaluation of the financial condition of the 10 largest banks.
- By June 30, 1996, Russia was to establish procedures for gas prices to reflect variation in transmission costs, launch audits of 5 major fully or majority-owned state-owned enterprises, and submit legislation for a move to an accruals-based system for the profit and value added tax.
- By September 30, 1996, Russia was to ensure that all remaining import duty rates above 30 percent are replaced, submit specific legislation to improve the fiscal relations between the federal and subnational governments, and conclude an evaluation of the financial situation of the 200 largest banks.
- By December 31, 1996, Russia was to complete an annual audit of the Pension and Employment Funds according to international standards, prepare a list and launch an audit of an additional 5 major enterprises in which the state has full or majority ownership, and initiate an implementation procedure to deal with problem banks.

Russia also had to undertake certain prior macroeconomic actions (for example, introduce additional revenue measures) and structural policy actions (for example, revoke import restrictions on alcoholic beverages) before the IMF Executive Board would approve the 1996 EFF program. Both structural performance benchmarks and prior actions for IMF Board reviews of the program were altered frequently throughout the program to reflect changing conditions. Overall, the IMF determined that Russia's efforts during 1996 fell short of the targets. There were seven program reviews during the program's first year.
These seven reviews included four instances of program modification, three occurrences of waivers for nonobservance of performance criteria, and three delays in disbursements. While Russia had success in moderating inflation (the monthly average inflation rate for 1996 was 1.7 percent), there was less success in achieving fiscal goals. For 1996, the federal deficit registered 6.3 percent of GDP instead of the planned 4 percent, and federal revenues fell from 10.5 percent of GDP in 1995 to only 9.5 percent in 1996, in contrast to the targeted increase of nearly 1 percent in 1996. Moreover, exchange rate stability was bought at the expense of a significant loss of reserves. Additionally, progress in pursuing structural reforms was disappointing, according to the IMF. In the first half of 1996, uncertainties related to the election outcome influenced fiscal performance and revealed the fragility of the 1996 fiscal framework; in the second half of 1996, concerns about the health of the Russian president contributed to heightened uncertainty. More fundamentally, however, fiscal slippages were attributable to a lack of sufficient political commitment to insist on the payment of tax liabilities, especially by large taxpayers, as well as the weak capacity of tax administration and deficiencies in the tax system. Nonetheless, while the IMF delayed the completion of a number of reviews for failure to meet program conditions, its staff continued to recommend approval of the program, despite uncertainties about the government's capacity to implement it, because, as the staff said, the new government demonstrated strong leadership, which could lead to a successful program if backed at the highest level. Russian presidential election concerns dominated the first half of 1996.
During this period, the IMF reviewed the program four times (three monthly reviews and one quarterly review), modifying the deficit limits in the first two reviews and making broad performance modifications in the fourth monthly review. Inflation continued to decline as the monetary authority adhered to a tight credit stance, and the central bank was able to maintain a stable exchange rate corridor despite the political uncertainty and pressure toward ruble depreciation. There were other positive developments: Russia had (1) achieved some structural reforms in banking and tax-related fiscal measures and (2) satisfied the quantitative targets in the first two reviews, aided by modification of the deficit limits to accommodate the clearance of accumulated wage arrears and the jump in treasury bill rates. However, the fiscal situation remained quite vulnerable, owing to both internal and external factors. The continuing weakness in revenue collection reflected the lack of will to enforce existing law, deficiencies in the tax system, rising tax arrears, and strong spending pressures with the approaching June presidential elections. The higher treasury bill rates, which raised interest payments to higher levels than assumed under the program, were in large part due to the highly charged political environment. On this basis, Russia and the IMF agreed to an upward adjustment in the deficit ceiling, while securing the government's commitment to focus on collecting tax arrears. Two other areas of ongoing concern were the sustained depreciation pressures on the ruble, which put the international reserve targets at risk, and the sluggish progress on structural reforms. The staff also attributed the pressure against the ruble, and the consequent loss in reserves, to the market sensitivity generated by this historic, election-dominated situation.
Throughout this period, the IMF staff commented on the determination of key senior officials to abide by the program, noting their commitment and determined efforts. However, the completion of the fourth review and the disbursement of the July tranche were postponed until late August because the program had gotten too far off track. Russia had missed its monetary targets and had barely complied with the deficit target. The main concern was the progressive weakening of the federal government's cash tax revenues, reflecting an environment in which paying taxes appeared to be more a matter of choice than an obligation. The upcoming heavy interest payment schedule and accumulation of wage and pension arrears made the deficit target virtually out of reach. Hence there was a broad reassessment of policy requirements for the remainder of the year to bring the program back on track. The completion of the fourth review was made conditional upon Russia's meeting end-July targets as modified and significantly increasing tax revenues. Russia also received a waiver for nonobservance of end-June targets. In the end, the IMF staff's support for the program reflected their assessment that immense pressures had led to Russia's missing the targets, that the Russian authorities were taking actions to bring the program back on track, and that the Russians' efforts "deserve the benefit of doubt and warrant continued Fund support." Three reviews were completed from August through December 1996 (following the completion of the fourth review). The first review focused on progress in structural policies and found the results disappointing, though structural reform efforts had been recently stepped up. Forty-four modifications were proposed for the structural program, and the Russian authorities agreed to a revised set of 10 new structural benchmarks for the remainder of the year. 
During this period, the IMF continued to encourage the government to open the treasury bill market to nonresidents so that Russia could have better access to private capital markets. The CBR officials agreed in principle but expressed concerns regarding the volatility of foreign capital inflows that could easily be converted into dollars rather than rolled over into new debt. Meanwhile, by September, the dominant concern was continuing pressure on the ruble and international reserves, despite the favorable inflation trend and the cautious macroeconomic policy. The IMF staff believed that noneconomic, temporary, and reversible factors such as concerns about President Yeltsin's health, the postponement of the completion of the fourth review, changes in the rules governing nonresident access to the treasury bill market, and concern about the health of the banking system contributed to the exchange market pressure. While continuing to note the major risks and difficulties in the Russian situation, the IMF maintained a cautious optimism that the authorities would address these problems and continue to achieve program objectives. However, by the third quarterly review, originally scheduled for completion in October 1996, Russia had gotten too far off track, and the review was delayed until December. Consequently, both October and November disbursements to Russia were delayed. Russia had missed the September international reserve and deficit targets – the deficit of the federal government amounted to 6.7 percent of GDP. There was a marked deterioration in revenue performance because of a tax code change that gave priority to wage payments over meeting tax payments: revenues had declined to 9 percent of GDP by November 1996. The nonobservance of the deficit target was due, in part, to the need to make large interest payments on treasury bills that had been issued at high interest rates in the second quarter. 
But more fundamentally, the deficit continued to originate from a weakness in revenue collection due to a lack of government resolve to enforce tax laws. As a result of weaker-than-anticipated revenue and higher-than-anticipated interest payments, the IMF and Russia agreed to modifications to the fiscal and monetary performance criteria for end-December 1996. These modifications were to serve as the first step of the 1997 program. Also, understandings were reached on a comprehensive action plan that sought to improve revenue collection by creating a tax-paying culture in Russia rather than just proposing tax measures. The IMF staff noted that, in hindsight, the structural work plan might have been too ambitious for Russia to manage, given its limited institutional capacity. Even though program revisions had just been introduced in August/September to reflect the slower pace of implementation of structural reforms in the first half of 1996, progress in the structural policy agenda was still lagging at this time. With the important exception of banking reform – where actions were in line with the program – structural reforms fell short of the objectives in all areas in 1996. Only two of the seven structural benchmarks that were the subject of this review had been met, and immediate action was required before the staff could recommend completion of the third quarter review. At the end of 1996, the situation in Russia remained fragile, and the fiscal situation was difficult. However, the staff determined that the authorities continued to demonstrate their firm intention to maintain a restrained credit stance to forestall inflation and to reduce pressure on international reserves. The staff also acknowledged the authorities' good faith efforts and exemplary cooperation with the IMF. In the end, the IMF granted Russia a waiver for its nonobservance of end-September performance criteria. 
With the completion of the May 1997 Article IV staff report, the Executive Board also gave its approval of the 1997 EFF program. The report followed the April negotiations and the setting of program targets. The approval came after Russia implemented a series of prior actions, including submission of the tax code and a new 1997 spending plan to the Duma, a crackdown on large tax debtors, and announcement of transparent privatization procedures. The 1997 program included a revised schedule of disbursements for the 1997 program year (Russia had received no program disbursements since the one following the completed eighth monthly review in mid-February 1997). As envisaged under the program, performance was to be monitored quarterly on the basis of quarterly performance criteria. However, because of the significant risks that Russia still faced, the IMF continued to closely monitor developments throughout the period of the extended arrangement. A major focus of the fiscal program in 1997 was the reversal of the declining trend in federal cash revenues in relation to GDP and the elimination of the use of noncash revenue sources. Cash revenues were targeted to increase, on average, to 8.3 percent of GDP in 1997, compared with 7 percent of GDP in 1996. To improve revenue collection, the Russian authorities agreed to major tax reform and the full implementation of the comprehensive November 1996 action plan. The annual limit on the federal deficit in 1997 was set at 5.5 percent of GDP, higher than the original EFF target of 3 percent of GDP for 1997, but lower than the 6.3 percent deficit at year-end 1996. A further reduction in inflation to a monthly rate of 1 percent in 1997 was one of the program's main economic goals. In addition to implementing the November 1996 action plan in full, the structural program for 1997 was designed to accelerate the process of building the institutional and legal framework to support a market economy. 
Table VI.2 shows Russia's performance in some critical areas.

Table VI.2: Russian Federation: Federal Budget Aggregates and Inflation, 1993-1997 (in percent of GDP; inflation in percent)

Year   Revenue   Expenditure   Noninterest expenditure   Interest   Deficit   Inflation
1993   13.7      20.2          18.2                      2.0        -6.5      874.5
1994   11.4      23.2          21.2                      2.0        -11.4     307.4
1995   9.1       15.4          12.5                      2.9        -4.8      197.4
1996   7.0       15.8          11.3                      4.5        -6.3      47.6
1997   8.3       15.0          10.8                      4.2        -5.5      14.2 (actual: 14.6)

Preliminary data at this time were showing that the economy had begun to turn around since the third quarter of 1996. Output appeared to have stabilized in 1997 after years of decline; inflation continued to decelerate – the monthly percent change for the last quarter of 1996 had declined to 1.7 percent; and the exchange rate was stable. Structural reforms had gained momentum in the areas of natural monopolies and public utilities, and the government had eased restrictions on access to capital markets by nonresidents. While the authorities had used a sizable reserve cushion to defend the ruble during 1996, there was a reversal of exchange market pressure in the first half of 1997 attended by large capital inflows. The easier monetary conditions due to the capital inflows and the clearing up of arrears brought with them the associated risk of renewed inflation, and the IMF monetary program was revised for the second half of 1997. Compared to the severe difficulties experienced in 1996, the developments during the first half of 1997 were encouraging. The IMF staff noted, however, that there were still considerable uncertainties in Russia, and that the IMF assumed a potentially large exposure to risk in providing support to the country. Given Russia's substantial reliance on energy exports, there was also a risk of external shocks, for example, due to a decline in the price of oil or gas. Amid uncertainties about the government's capacity to implement the program, the IMF approved the 1997 program based on the strong leadership demonstrated by the new government as well as the completion of the prior actions. 
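The fiscal aggregates in Table VI.2 can be cross-checked with a short script. The numeric values are taken directly from the table; the column interpretation (revenue, total expenditure split into noninterest spending and interest, deficit, annual inflation) is an inference from context, since the original column headings did not survive in the source text:

```python
# Figures from Table VI.2, in percent of GDP (inflation in percent).
# NOTE: the column interpretation below is inferred from context,
# not stated explicitly in the source.
rows = {
    # year: (revenue, expenditure, noninterest, interest, deficit, inflation)
    1993: (13.7, 20.2, 18.2, 2.0, -6.5, 874.5),
    1994: (11.4, 23.2, 21.2, 2.0, -11.4, 307.4),
    1995: (9.1, 15.4, 12.5, 2.9, -4.8, 197.4),
    1996: (7.0, 15.8, 11.3, 4.5, -6.3, 47.6),
    1997: (8.3, 15.0, 10.8, 4.2, -5.5, 14.2),
}

for year, (rev, exp, nonint, interest, deficit, infl) in rows.items():
    # Total expenditure decomposes into noninterest spending plus
    # interest payments in every year, which supports the inferred layout.
    assert abs(exp - (nonint + interest)) < 0.05, year
```

The decomposition holds exactly for all five years, and the 1996 and 1997 revenue and deficit entries (7.0 and 8.3 percent; -6.3 and -5.5 percent) match the figures quoted in the surrounding narrative.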
In mid-1997, the economic crisis that started in Thailand quickly spread to other Asian countries and to Russia, aborting the nascent economic recovery that had just begun after 8 years of deep output decline. From October 1997 on, Russia continued to experience recurrent financial crises. The government and the CBR attempted to protect the main economic policy achievements of the recent years – low inflation, a fixed ruble, and the living standards of the people – through foreign exchange market interventions and interest rate hikes, both seen as needed to defend the ruble. The spillover from the Asian financial turbulence in the fall of 1997 spread to Russia's financial markets and further undermined investor confidence, already adversely impacted by Russia's ongoing fiscal problems. Federal cash revenue collections were not improving, and the government was able to achieve the deficit target only by holding down cash expenditures, thus creating new expenditure arrears. Substantial foreign exchange outflows accompanied the financial turbulence. The CBR's response was to sell foreign exchange and, later, to raise interest rates. Consequently, Russia was unable to meet its international reserve target. Originally intended to be an assessment of end-September performance, the IMF's sixth quarterly review and the corresponding quarterly disbursement were delayed until January 1998. The delay was due to the serious underlying weakness and slow progress in addressing the fiscal problems, as indicated by the nonobservance of the government revenue performance criterion from January to September 1997. The review also indicated that the September performance criteria, which Russia did not meet, were no longer operationally relevant. The December criteria were being modified, as they were no longer attainable either, and thus could not be applied against Russia's performance yet. 
Thus, the review requested a waiver of the applicability of December performance criteria. During this period, structural reforms proceeded generally as envisaged under the 1997 program, particularly in the areas of natural monopolies (gas) and privatization, and there was continued progress in closing and restructuring smaller banks. Overall, however, the IMF staff recognized that the program continued to face serious risks. In late 1997, the IMF and Russia created a credible fiscal action plan and developed monetary policy actions and targets to reestablish monetary policy restraint, which had deviated considerably from the program. On the fiscal side, the discussions emphasized the difficulties in controlling budget expenditures, as well as ineffective efforts to collect taxes from large debtors, as the source of fiscal problems. For example, the inability of the government to pay its own bills, combined with extensive use of monetary offsets and noncash mechanisms to settle budgetary arrears against tax debtors' arrears, undermined incentives for paying taxes in cash. The Russian government agreed to take steps (prior actions) based on the newly developed strategy to bring the fiscal program back on track, including the abolition of all types of noncash tax arrangements on January 1, 1998. The monetary policy discussions were concerned with the CBR's response to sizable foreign exchange outflows and how to ensure that these outflows would not become a source of inflationary pressure. Informal and flexible understandings were reached on a revised monetary program for end-December 1997 that permitted some room for expansion of base money but also emphasized keeping inflation on a downward trend and protecting international reserves. To complete the review, the government had to undertake fiscal measures, agree upon targets for the 1998 federal budget, revise monetary performance criteria for end-December 1997, and complete actions on the structural side. 
The IMF staff conceded that little had been accomplished on the fiscal side by end-December 1997, particularly in the collection of tax revenues, owing to a lack of "forceful and focused implementation," along with slow progress in improving tax administration, and that the credibility of the Russian authorities was at stake. However, they recommended the completion of the sixth review based on the newly adopted fiscal action plan that brought a new approach to tackling the fiscal problem and the expectation that the authorities would make a concerted effort to follow through this time. During February 1998, amid the ongoing pressures on Russia's financial markets, an IMF mission team visited Moscow to hold discussions for the seventh quarterly review and to complete the talks begun earlier on the 1998 program. The subsequent dismissal of Prime Minister Viktor Chernomyrdin in March and the Duma's approval of Sergei Kiriyenko in April, together with weak oil prices, delayed the review and implementation of the program, as well as the disbursement of the $700 million credit tranche. Follow-up staff visits took place in April and May to revise the fiscal targets and policies for 1998. In mid-May, following the formulation of the 1998 program, a severe financial crisis hit Russia, coinciding with renewed financial instability in Asia (Indonesia) and labor unrest in Russia. The CBR's interventions in the foreign exchange market led to a large decline in reserves, and sharp increases in interest rates and financial volatility underscored Russia's vulnerability to changes in market sentiment. The IMF staff again recognized that the program might have to be revisited unless confidence returned. The completion of the review and approval of the 1998 program occurred in June 1998, following Russia's completion of, or satisfactory progress on, 27 fiscal, financial, and structural measures (many were from the November 1997 Fiscal Action Plan) and observance of the March targets. 
Some measures included (1) collecting taxes from large tax debtors, (2) taking steps to improve tax collection, (3) establishing better monitoring and control over expenditure commitment, and (4) identifying additional expenditure cuts. Although Russia missed the deficit and cash revenue targets for end-March, no waiver was requested, though a waiver was granted for nonobservance of one December 1997 performance criterion. The staff also supported Russia’s request for the extension of the EFF arrangement for a fourth year in light of the delayed purchases during 1996-97 and the need to catch up with the original program objectives. The Russian government favored achieving the deficit target through spending cuts, as officials did not think that they could collect the required amount of cash tax revenues or that the Duma would agree to the required tax measures. However, the IMF staff’s opinion was that expenditure cuts often translated into new expenditure arrears, hence they emphasized strengthening collections from large, delinquent tax debtors. In the end, the program relied on both approaches. For example, the Emergency Tax Commission met in May and made a decision to collect arrears from a number of large tax debtors, and the Expenditure Reduction plan was adopted by presidential decree that month as well. Eliminating mutual offsets, which undermined the incentives to pay taxes in cash, was also critical to resolving the fiscal problem. No new offset operations had been approved since January 1, 1998, and federal government abstention from any offset operations was to be a performance criterion under the 1998 program. The 1998 structural reform program was front loaded with a wide range of measures taken as prior actions ahead of the IMF Executive Board’s consideration. Structural reforms that would have important macroeconomic impact over the medium term were designated benchmarks for each quarter. 
Some areas of focus for the structural reform agenda included making improvements in corporate governance through ensuring more transparent accounting by public utility and transport monopolies, engaging in an open and competitive privatization process, liberalizing the trade regime, and strengthening the prudential and supervisory framework of the banking sector. Some of the fiscal prior actions Russia had to undertake for the completion of the seventh quarterly review were based on elements from the November 1997 Fiscal Action Plan, for example, collecting taxes from large tax debtors, establishing better monitoring and control over expenditure commitments, and identifying additional expenditure cuts needed to observe the program targets. Progress in structural reforms continued to be based on an overall assessment, but with a particular emphasis on the structural benchmarks. While the IMF's projections for 1998 and beyond indicated a strengthening of Russia's balance of payments over the medium term that would permit Russia to service its obligation to the IMF, the IMF staff was cognizant of substantial risks to the program, such as variability in capital flows and foreign exchange outflows, magnified by Russia's dependence on nonresidents' participation in the treasury bill market (as illustrated by the May 1998 events); vulnerability to external shocks, given Russia's reliance on energy exports; a sluggish pace in transitioning to a market economy; and the upcoming elections that could undermine the government's will and ability to implement tough measures. Nevertheless, the IMF staff indicated that the program was deserving of continued IMF support because of the government's strong commitment to the program and the important steps it took to stabilize and reform the economy during the first 2 years of the EFF. 
Further, the staff noted, the Russian authorities were taking additional prior actions before the IMF Board meeting, were implementing many of the fiscal measures, and were committed to an ambitious structural reform agenda. The Russian government had financed its high, and ultimately unsustainable, budget deficits by selling ruble-denominated, short-term debt to both foreign and domestic investors. By May 1998, nonresident investors were holding about one-third ($20 billion) of domestic treasury securities. The government borrowed in capital markets and issued treasury bills and bonds at high yields to attract capital. This added a heavy debt service burden to the Russian budget. Further, the short-term maturity of the debt meant that Russia constantly had to roll over the debt. This made the economy highly vulnerable to changing investor sentiments in the capital market. As long as foreign and domestic investors were willing to renew short-term debt, this practice could continue, but Asia’s financial problems intensified the instability in global financial markets. The combination of high yields, deteriorating investor sentiment, and the short-term maturity of the treasury bills raised investor concerns that the Russian government would not be able to meet around $1.5 billion in debt service that fell due each week in the remainder of 1998. By June 1998, domestic borrowing to finance the federal budget came to a virtual halt. The Russian government had been in a race between its need to collect more taxes and to pay the rising interest bill on its growing debt – the government had to roll over more than $1 billion per week of treasury bills. This became impossible, as export revenues declined with falling oil and commodity prices and interest rates sharply increased when capital fled the country. The persistent weaknesses in tax collection and government spending in excess of what was affordable exacerbated the situation. 
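The rollover arithmetic described above can be illustrated with a rough sketch. The dollar figures come from the text; treating the one-third nonresident share at $20 billion as implying a roughly $60 billion total stock is my extrapolation, not a source figure:

```python
# Figures from the text, in $ billions.
weekly_maturities = 1.5        # treasury bill debt service falling due each week
nonresident_holdings = 20.0    # about one-third of domestic treasury securities

# Implied total stock if nonresidents held about one-third (extrapolation).
total_stock = nonresident_holdings * 3   # roughly $60 billion

# At $1.5 billion maturing weekly, the entire stock cycles through in well
# under a year -- the constant-rollover exposure the text describes.
implied_turnover_weeks = total_stock / weekly_maturities
assert implied_turnover_weeks < 52
```

On these assumptions the whole stock turns over in about 40 weeks, which is why any pause in investor willingness to renew the debt translated almost immediately into a financing crisis.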
Russia was forced to request international assistance to replenish international reserves, to overcome liquidity problems arising from foreign investors redeeming their short-term ruble-denominated debt, and to provide the government with reserves of dollars and other foreign currencies to keep the ruble at its current value in foreign exchange markets. The government needed more dollars to attempt to prevent the ruble from losing too much of its value against the dollar. A depreciated ruble could create serious problems for the Russian banks and industries that had to buy dollars with rubles to repay their loans from foreign banks. It could also reignite the ruinous inflation that had plagued Russia in the early 1990s by raising the price of imports. Recognizing that it was a calculated risk, and to try to help Russia avoid devaluation, the IMF made a decision to provide $11.2 billion in extra funding under an augmented EFF arrangement on July 20, 1998. The financing consisted of an increase in the EFF arrangement of about $8.3 billion and about $2.9 billion under the Compensatory and Contingency Financing Facility (CCFF) to compensate for a shortfall in export earnings, mainly due to lower oil prices. Of the augmented amount to be provided under the extended arrangement, about $5.3 billion was to be made available under the Supplemental Reserve Facility (SRF), and the remainder was new EFF funding. The augmentation of the extended arrangement came from borrowing the equivalent of about $8.3 billion under the IMF's rarely used General Arrangements to Borrow. As June 1998 data were not available to assess Russia's performance under the 1998 program, this requirement was waived in the proposed decision, and the IMF approved the first disbursement under the CCFF. The remainder of the disbursements were to be in three additional installments phased through February 1999. 
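The components of the July 1998 financing cited above can be cross-checked with simple arithmetic (all amounts from the text, in $ billions):

```python
# Amounts from the text, in $ billions.
eff_augmentation = 8.3   # increase in the EFF arrangement
ccff = 2.9               # Compensatory and Contingency Financing Facility
package_total = 11.2     # announced extra funding

srf_portion = 5.3        # of the augmentation, made available under the SRF
new_eff = eff_augmentation - srf_portion  # remainder as new EFF funding

# The two components sum to the announced package (within rounding),
# and the non-SRF remainder works out to roughly $3 billion.
assert abs((eff_augmentation + ccff) - package_total) < 0.05
assert abs(new_eff - 3.0) < 0.05
```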
Because of Russia’s delays in implementing the personal income tax and pension measures, the amount being made available immediately was reduced from $5.6 billion to $4.8 billion. The difference was to be made available in September, assuming the measures were satisfactorily implemented. The new package included fiscal measures to aimed at reducing the fiscal deficit. These included: tax reforms, measures to increase tax revenues, and spending cuts; new structural reforms to address the arrears problem and promote private sector development; and steps to reduce the vulnerability of the government debt position (for instance, a voluntary restructuring of short-term treasury bills). The July 20, 1998 announcement of the IMF’s additional policy package had a positive, but very short-lived, effect on Russia’s financial markets. Ultimately, the Duma’s lack of support for the program in the areas of personal income tax and pension fund financing and the veto by the president of several measures led the IMF to reduce the initial amount of the disbursement from $5.6 billion to $4.8 billion. The program also faced opposition in the key energy sector, and the collection of overdue tax payments from a number of oil companies proved difficult. Finally, the government-owned Sberbank’s decision to not roll over its sizable treasury bill holdings falling due in the last 2 weeks in July culminated in cancelled bond auctions because of prohibitively high borrowing rates. With pressure growing against the ruble and spreading to the banking sector, the CBR was forced to intervene on a large scale. However, these actions were not enough to avert a serious crisis. Russia was facing a full-scale banking and currency crisis by mid-August. 
Russia’s persistently large fiscal imbalances, heavy reliance on short-term foreign borrowing financed at high interest rates, the impact of the declining price of oil on Russia’s external balance, and delays in structural reform led to Russia’s replacing Asia in August 1998 as the center of the financial crisis afflicting emerging markets, thus potentially erasing many of the gains of prior years. In August 1998, the Russian government abandoned its defense of a stable ruble exchange rate – one of the major accomplishments of the previous years – essentially devaluing the ruble, forced a restructuring of government domestic debt, and placed a 90-day moratorium on commercial external debt payments. The financial crisis intensified following the dissolution of the Kiriyenko government and the approval of Yvegeny Primakov as Prime Minister on September 11, 1998. On that day as well, the German government acknowledged that Russia missed virtually all of a DM800 million interest payment due on August 20 on sovereign debt dating from the Soviet era. Russia’s decision to unilaterally restructure its ruble-denominated sovereign debt and impose a moratorium on private external debt payments had significant repercussion in the financial markets, effectively destroyed Russia’s external creditworthiness, and cut Russia off from international capital markets. Currently, Russia’s debt service exceeds Russia’s ability to pay. The IMF’s second tranche was scheduled to be delivered on September 15, 1998, but the IMF has made no further payments following the initial $4.8 billion disbursement because of the Russian government’s failure to meet its loan conditions. According to the IMF, the immediate cause of the Russian economic crisis was the growing loss of financial market confidence in the country’s fiscal and international payments situation, leading to a loss of reserves and an inability to roll over treasury bills as they matured. 
However, fundamental problems having to do with Russian economic policy and economic structure lay behind Russia's vulnerability. According to the IMF and the Congressional Research Service, deeper problems involving the incomplete restructuring of Russia's economy caused Russia's vulnerability. Russia's fiscal problem originated in Russia's failure to reform its huge and inefficient tax system, resulting in inadequate tax collection. Further, the culture of nonpayment and the widespread use of barter have made it difficult to resolve the fiscal imbalances. According to one estimate by Russia scholars, more than 50 percent of payments are conducted by barter and 40 percent of the tax revenues are paid in a nonmonetary form. Public spending has not been adequately controlled, and the government has not been able to cover its expenditures with revenues. Other structural problems include the lack of clarity in the administrative relationship between the federal government in Moscow and the regional and local governments. This situation produces confusion and conflict over control of assets and tax authority. The vagueness of relationships is further complicated by problems in dealing with the oligarchs, a group of individuals who have amassed a great deal of wealth and who control the major banks and enterprises. There has also been slow progress in making key structural reforms such as introducing accountability and transparency at all levels of government operations, establishing a federal treasury system, and restructuring enterprises and the legal framework, which adversely affects the economy's performance more broadly.

Years of war and civil strife in the 1970s and 1980s destroyed Uganda's infrastructure, public services, and agricultural production and impoverished the population. Per capita GDP in 1986 was 60 percent below its level of 1970, annual inflation had risen to 240 percent, and external debt service was more than 50 percent of exports. 
Exports other than coffee had all but ceased by 1987. The country had annual declines in terms of trade each year from 1986 to 1992. However, the country has been undergoing successful macroeconomic adjustment and structural reform with IMF and other donor support since 1987. Economic growth has averaged over 5 percent per annum since 1987, but a European Union representative in Uganda told us in April 1998 that much of the country’s growth has been “recovery growth” and that the country was only reaching levels in 1998 that it was at in 1972. He also said that, after 25 years of war and chaos, with the society surviving largely at the subsistence level, the country was vulnerable to corruption. Uganda has had 10 IMF arrangements since 1987. The current 3-year ESAF arrangement approved by the IMF Executive Board in November 1997 totals about $138 million and is to support the Ugandan government’s 1997/98-1999/2000 economic plan. The first semiannual installment of $27.6 million of the first annual arrangement was made in November 1997. In April 1998, Uganda was the first country to complete an international initiative aimed at reducing the debt burden of some heavily indebted poor countries. The IMF has had a resident representative in Uganda since July 1982. U.S. Treasury officials said that, over the past few years, problems have become apparent in (1) government privatization of state-owned enterprises, (2) corruption within government, and (3) government military spending. IMF and U.S. Treasury officials said that, unlike many governments, the Ugandan government is committed to addressing the corruption problem. There appears to be increased emphasis by the IMF and other donors on reducing corruption within the government and holding down military expenditures to ensure that funds are available for needed social spending. 
The IMF resident representative also told us in April 1998 that the rule of law needs to be strengthened since laws, regulations, and procedures are weak throughout the system. According to the external (independent) experts' 1998 evaluation of ESAF, the government's reform program benefited from intensive public education and consensus-building initiatives. The external evaluation also noted that the Ugandan president defended government policies in the face of public opposition and protests, rather than opting for political expediency as is done by "most presidents." IMF officials said the Ugandan parliament supports the ESAF program, although there are some questions among legislators about the speed at which it is implemented. While the Ministry of Finance is responsible for specific monitoring of program performance criteria and benchmarks, the parliament's Economy Committee monitors the program in a general way. IMF missions to Uganda meet with the president, the Ministry of Finance, and the Central Bank, and in recent years have also met with noneconomic ministries, parliamentary committees, nongovernmental organizations, and private-sector organizations. The IMF is providing technical assistance to the government to (1) implement changes in customs management and administration; (2) establish a large-taxpayer unit for the 100 largest taxpayers; (3) improve budget management through improved expenditure control and promote secondary markets in treasury bills; and (4) improve the statistical base through enhanced collection and reporting of national accounts, revenue, expenditures, balance-of-payments and debt statistics, and implementation of prior technical assistance missions' recommendations. The IMF reported that, during the annual arrangement in 1994/95-1996/97, annual real GDP growth averaged 8 percent and inflation was 5 percent. The fiscal deficit, excluding grants, was reduced from 11.2 percent of GDP in 1993/94 to 6.5 percent in 1996/97. 
The external current account deficit, excluding grants, declined to 6.1 percent in 1996/97, and an improved balance of payments increased international reserves to 4.6 months of imports of goods and nonfactor services. Government elimination of marketing boards, price controls, export taxes, and foreign exchange restrictions contributed to expansion and diversification of the export base. Uganda’s debt service ratio (annual payments on outstanding debt as a share of export earnings) fell from 53.7 percent in 1993/94 to 18 percent in 1996/97 following Paris Club debt reschedulings. The external experts’ 1998 evaluation reported that the 1994-97 ESAF arrangement did not need a stabilization component and consequently focused on a development agenda of structural reforms. The IMF-government policy dialogue therefore covered issues not traditionally within the IMF’s area of expertise. As part of Uganda’s structural adjustment, the following reforms were undertaken. The civil service was reduced in size by 25 percent, noncash benefits were monetized and salaries increased, and army demobilization was completed. Within tax policy reforms, the tax identification number system was expanded, a value-added tax (VAT) was introduced, most discriminatory tax exemptions were eliminated, and a new income tax bill was submitted to parliament. The Bank of Uganda was restructured and its recapitalization begun, two commercial banks were restructured, the Uganda Commercial Bank was recapitalized and steps to privatize it begun, and enforcement of adequate capital requirements in the banking sector was undertaken. Fifty-five public enterprises were privatized, actions were initiated to privatize telecommunications, and a communications act and amendments to remove the Uganda Electricity Board’s monopoly and regulatory powers were submitted to parliament.
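The debt service ratio cited above is a simple quotient. A minimal sketch follows; the function name is illustrative, and the payment and export figures are normalized values chosen only to reproduce the reported percentages, not actual Ugandan data:

```python
def debt_service_ratio(annual_debt_payments: float, export_earnings: float) -> float:
    """Annual payments on outstanding debt as a percentage of export earnings."""
    return annual_debt_payments / export_earnings * 100

# With exports normalized to 100, payments of 53.7 and 18.0 reproduce the
# 53.7 percent (1993/94) and 18 percent (1996/97) ratios reported in the text.
print(round(debt_service_ratio(53.7, 100.0), 1))  # 53.7
print(round(debt_service_ratio(18.0, 100.0), 1))  # 18.0
```

A falling ratio, as in Uganda's case, can reflect lower debt payments (e.g., after Paris Club reschedulings), higher export earnings, or both.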
Import tariffs and import duty exemptions were reduced, export taxes were eliminated, and an external debt-management and borrowing strategy that eliminates nonconcessional borrowing was implemented. IMF disbursements for the 3-year arrangement were $24.5 million in September 1994 and $26.3 million in April 1995; $29.8 million in December 1995 and $29 million in May 1996; and $33.7 million in December 1996 and $32.5 million in May 1997. On October 22, 1997, the government requested a new 3-year ESAF arrangement of about $138 million to support its economic plan for 1997/98-1999/2000. Uganda’s fragile external position left it vulnerable to external shocks: it faced deteriorating terms of trade, uncertainty over the effectiveness of revenue measures, and substantial expenditure pressures. The IMF approved the arrangement on November 10, 1997. IMF officials said that other donors wanted Uganda to have an IMF program as an anchor for their assistance. They also said that some IMF executive directors felt that Uganda needed assistance on structural issues such as financial sector reform, privatization, trade liberalization, and social spending. IMF and U.S. officials emphasized the Ugandan government’s commitment to reform. The IMF made its first disbursement to Uganda under the new arrangement in November 1997 for $27.6 million. The quantitative performance criteria for Uganda focus chiefly on bolstering Uganda’s liquidity and creditworthiness by improving its ability to reduce inflation, by garnering resources readily usable for the purpose of financing deficits in the balance of payments, and by stabilizing the foreign exchange value of the currency (the Ugandan shilling). The quantitative performance criteria for the first annual arrangement covered the following: Ceilings were set on net domestic assets of the banking system as a monetary policy measure intended to control the rate of inflation by limiting the amount of money in circulation.
Increases in the net domestic assets of the banking system are, in effect, increases in outstanding loans to the nonbanking sector that raise the amount of money in circulation and represent a potential source of inflation. Limits were set on the net claims of the banking system on the government, as a mechanism to restrict the growth rate of government borrowing. Net claims of the banking system on the government are loans to the government by the banking system. Bank loans to the government may either increase the amount of money in circulation and possibly raise the rate of inflation in the country or raise the interest rate by fostering competition with the private sector for loans. Moreover, by discouraging banks from lending to the government, limiting net claims may also serve as a fiscal restraint on the government. A prohibition was set on the issuance of promissory notes by the government to curb the rate of growth of government spending financed through issuance of negotiable instruments, such as bonds. This fiscal restraint prohibits government borrowing from the public to finance government expenditures. Arrears on outstanding external debt were prohibited. This prohibition enforces the Ugandan government’s agreement with the IMF and the World Bank to maintain an on-time payment history to remain eligible for past and future debt reduction benefits under the HIPC Initiative. The Bank of Uganda was prohibited from incurring debt with a maturity of less than 1 year. Short-term external debt of the Bank is loans from external sources contracted by the Bank when it is unable to provide sufficient foreign exchange to pay for expenses that are incurred for routine international transactions. This prohibition, therefore, ensures that the Bank maintains sufficient foreign exchange on hand to pay for each year’s imports of goods and services.
Consequently, short-term credit extended to Uganda to facilitate trade with international trading partners cannot be converted to long-term international debt. Limits were established on new public or publicly guaranteed nonconcessional debt. This was intended to reduce total external debt by restricting government borrowing from international sources, unless the debt contains a grant element of at least 35 percent. A minimum net international reserve level for the Bank of Uganda was set. Setting a minimum reserve level enhances the availability of foreign exchange for the purposes of stabilizing the value of the currency and maintaining adequate foreign exchange to pay for several months of imports of goods and services. Table VII.2 shows the specific criteria and timetable, or benchmarks, for the first annual arrangement. Notes to table VII.2: adjustments to be made for import support in excess of cumulative projections; adjustments to be made for debt service paid by the central government in excess of cumulative projections; excludes notes issued to regularize domestic payment arrears, not to exceed 24.1 billion; this criterion must be continuously observed; excludes debts contracted in the context of reschedulings; external debt with maturity of less than 1 year excluding normal import-related credit; concurrent adjustments to be made in case of adjustments in the ceilings on net domestic assets and net claims on government. The structural performance criterion for the first annual arrangement was to complete government auditing of at least 200 VAT payers, 50 of which would be from the top 400 VAT-registered taxpayers, and the rest of which would be based on revenue-risk criteria. Achievement of the criterion was to be completed by December 31, 1997. Three prior actions for the removal of import bans by March 31, 1998, were also established. Table VII.3 shows the structural performance benchmarks for the first annual arrangement.
Table VII.3: Structural Performance Benchmarks for the First Annual Arrangement
- Privatization: Relinquish government control of 80 public enterprises by December 31, 1997; of 89 public enterprises by March 31, 1998; and of 95 public enterprises by June 30, 1998.
- Divest 23 enterprises, including 7 with asset values of 5 billion Ugandan shillings or more, by June 30, 1998, with divestiture of at least 3 of these 7 large enterprises by December 31, 1997.
- Offer Uganda Telecommunications Ltd. for sale following its separation from the Uganda Posts and Telecommunications Corp. by December 31, 1997.
- Set the size of the number-limited civil service on the payroll, excluding primary school teachers, at 57,100 by December 31, 1997, and 55,600 by June 30, 1998.
- Gain Cabinet approval of agreed structures and establishments for 9 central ministries/departments by January 31, 1998.
- Reduce Uganda Electricity Board employment from 3,060 as of June 1997 to 2,800 by December 31, 1997, and 2,300 by
- Ensure minimum nonwage budgetary expenditures for the Priority Program Areas of health and education of $24.6 million by December 31, 1997, and $45.5 million by June 30, 1998.
- Audit 600 taxpayers based on revenue/risk criteria by June 30, 1998.
- Conduct annual on-site inspections of at least 40 percent of banks by June 30, 1998.
In its March 24, 1998, Article IV consultation and midterm review, the IMF staff reported that the government had met its quantitative and structural performance criteria for December 31, 1997, with the exception of the government’s net position vis-à-vis the banking system. This criterion was missed, according to IMF staff, because of the more rapid liquidation of domestic nonbank liabilities than expected (government checks cleared the banking system sooner than expected).
The IMF Executive Board granted a waiver because nonobservance was deemed to be technical in nature, as opposed to a policy violation. Performance was reported as satisfactory with respect to the structural benchmarks. However, some of the benchmarks were categorized by IMF staff as “observed with delay,” meaning that the benchmarks were met but not within the timeline envisioned. In addition, the removal of three import bans, a prior action with a completion date of March 31, 1998, was met, according to the IMF. In the October 28, 1998, IMF staff paper to the IMF Executive Board on Uganda’s request for a second annual arrangement, the staff stated that the government had removed the three import bans on time. The IMF disbursed $27 million in April 1998. On October 28, 1998, the Ugandan government requested the second annual ESAF arrangement. The IMF staff had reported in its October 28, 1998, ESAF policy framework paper that heavy rains in 1997/98 had adversely affected Ugandan food and coffee production, transportation, and exports; real GDP growth was 5.5 percent and inflation 5.8 percent. The current account deficit excluding grants as a share of GDP was 8.3 percent. Capital and official transfers financed the current account deficit and generated a balance-of-payments surplus, so that gross international reserves rose to 4.9 months of imports of goods and services. The IMF Executive Board approved the arrangement on November 11, 1998. The first disbursement of $23.1 million was made November 25, 1998. The quantitative performance terms and conditions for Uganda’s second annual arrangement added two criteria to those of the first annual arrangement: a minimum amount of total revenue to be collected, and a minimum amount of nonwage expenditures to be made in the priority program areas of education and health, so that the social sector would not be overlooked relative to other priorities, particularly military expenditures.
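The reserve-coverage measure used above, reserves expressed in months of imports, divides gross reserves by average monthly imports. A minimal sketch follows; the figures are illustrative values chosen to reproduce the 4.9 months reported, not actual Ugandan data:

```python
def months_of_import_cover(gross_reserves: float, annual_imports: float) -> float:
    """Gross international reserves expressed in months of imports of goods and services."""
    return gross_reserves / (annual_imports / 12)

# Reserves of 735 against annual imports of 1,800 (same currency units)
# give 4.9 months of import cover.
print(round(months_of_import_cover(735.0, 1800.0), 1))  # 4.9
```

The measure rises either when reserves grow or when imports shrink, which is why balance-of-payments surpluses and import trends are reported together.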
Table VII.4 shows the quantitative performance criteria and benchmarks for the second annual arrangement. Notes to table VII.4: adjustments to be made for import support in excess of cumulative projections; adjustments to be made for debt service paid by the central government in excess of cumulative projections; minimum expenditure would be increased by no less than 50 percent of the first 8.6 billion of import support in excess of cumulative projections; excludes notes issued to regularize domestic payment arrears, not to exceed U sh 24.1 billion; this criterion has to be continuously observed; excludes debts contracted in the context of rescheduling agreements; external debt with maturity of less than 1 year excluding normal import-related credit; concurrent adjustments to be made in case of adjustments in the ceilings on net domestic assets and net claims on government. The following structural performance criteria for the second annual arrangement (1998/99) were to be completed by December 31, 1998: verification by the Verification Subcommittee of the Ugandan government line ministries’ report on arrears outstanding at end-June 1998 and submission of its findings to the Arrears Monitoring and Reporting Unit, and completion of follow-up site examinations of the banks to which the Bank of Uganda sent a timetable of corrective actions. Prior actions that were to be completed by March 31, 1999, for the midterm review were the removal of the import ban on cigarettes, and approval of divestiture plans in 1998/99 by the Divestiture and Reform Implementation Committee and commencement of investment search (defined as issuance of an information memorandum, advertisement of sale, or placement of shares on a stock exchange) for 10 enterprises by March 15, 1999, of which 5 were to be high-priority enterprises. Table VII.5 shows the structural performance benchmarks for the second annual arrangement.
Table VII.5: Structural Performance Benchmarks for the Second Annual Arrangement
- Privatization: Approval of divestiture plans in 1998/99 by the Divestiture and Reform Implementation Committee and commencement of investment search (defined as issuance of an information memorandum, advertisement of sale, or placement of shares on a stock exchange) for 16 enterprises by June 30, 1999.
- Decision by the Cabinet on options for increasing private sector involvement in the operations of the Uganda Railways Corporation by December 31, 1998.
- Finalization by the Arrears Monitoring and Reporting Unit of a plan to clear verified outstanding arrears within 3 years
- Reduction in the size of the number-limited civil service on the payroll, excluding primary school teachers, to 53,190 by December 31, 1998, and 512,640 by June 30, 1999, with a margin of error of up to 99 for new pending cases.
- Limitation of the waiting period between the date of reporting to work and that of being put on the payroll to no more than 4 weeks, to be a continuous benchmark beginning October 1, 1998.
- Completion by the Large-Taxpayer Unit of 10 comprehensive on-site audits by December 31, 1998.
- Completion by the Large-Taxpayer Unit of an additional 40 comprehensive on-site audits by June 30, 1999.
- Completion of on-site audits of all retail and nonretail gasoline outlets by the Uganda Revenue Authority by June 30,
- Completion of on-site examination of four commercial banks that have been identified as showing less-than-full compliance with bank regulations or being in need of stronger management practices, and issuance of relevant examination reports by September 30, 1998.
In the IMF staff paper to the Executive Board on Uganda’s request for the second annual arrangement, the staff stated that the government had met its macroeconomic objectives for 1997/98 and that real growth was reviving and inflation was low.
The staff also said the end-June 1998 quantitative and structural benchmarks were largely met, with the exceptions of net claims on the government by the domestic banking system (which was exceeded by a very small margin) and the number of public enterprises privatized (which significantly set back the privatization program). The 1998 external experts’ evaluation of ESAF noted that the IMF’s traditional role is crisis management and that this has generally been the context for the extension of ESAF arrangements. The evaluation stated that Uganda had fully achieved stabilization and the major macroeconomic reforms had been implemented; consequently, the IMF had reached the point where it had to decide whether to (1) maintain its exclusive focus on crisis management and so withdraw from Uganda, or (2) extend its mandate and remain in Uganda. It noted that the case for withdrawal from Uganda is that the IMF’s work is done. The case for continued involvement was that (1) investors and donors still regard Uganda as high risk and want the reassurance that an IMF presence brings, (2) the Ugandan government still needs IMF expertise, and (3) ESAF resources are most productive in an already reformed policy environment such as Uganda’s. The evaluation favored continued IMF involvement in Uganda. U.S. Treasury officials felt that continued IMF involvement in Uganda is warranted because the reform program is still in a fragile state, due to (1) serious weaknesses in human and institutional capacity that the IMF is uniquely suited to help remedy, along with the recently identified problems with corruption that are in part related to these capacity deficiencies, and (2) the threats to fiscal and economic stability posed by the military security problems in the region. IMF staff conducted their midterm review of Uganda’s performance under the arrangement in February/March 1999 in conjunction with their annual Article IV consultations.
Staff found that the government had missed the December 1998 quantitative performance criteria on (1) net domestic assets, (2) net credit to the government by the banking sector, (3) issuance of promissory notes for current expenditures, (4) minimum nonwage expenditures in the social sectors of health and education, and (5) minimum net reserves. The structural performance criterion on the verification of arrears was also missed. The midterm review was consequently not completed, and the IMF delayed the second disbursement under the arrangement. An IMF official said the nonobservance was marginal and the country’s macroeconomic picture had not changed, with inflation remaining low and the real growth rate possibly exceeding the government’s target of 7 percent. The official said that Ugandan revenues were very good because (1) improved controls on corruption in customs, (2) improved tax administration, and (3) income tax reforms, such as a broadened tax net and elimination of tax exemptions, were paying off. However, the official said the government had used the unexpected revenues to increase military spending from 1.9 percent of GDP to 2.5 percent. Although the increased military spending does not violate IMF criteria, the official expressed concern that government officials might come to count on revenues exceeding expectations in order to pay for increasing military expenditures. The official said that IMF staff’s major concern was that Uganda’s privatization effort was completely off track due to political factors and corruption. The official said there is a loss in government credibility, and buyers are therefore reluctant to bid for enterprises in the privatization program. The parliament had suspended the program while it conducted an investigation.
The IMF staff set prior actions relating to the privatization program and the financial sector that the government must meet before the staff can complete the midterm review; the review resumed in May 1999 and is expected to be completed in June 1999. Despite these problems, the IMF official said the Ugandan government has been quick to react to IMF findings, is making efforts to meet IMF conditions, has fired corrupt officials, has promised to hold down military spending, and should still be classified as a good performer. The financial support that the IMF has provided to member countries, along with the conditions attached to that support, has long been a topic of debate. This issue recently received considerable prominence when the U.S. Congress considered an increased U.S. quota contribution to the IMF in 1998. While a full discussion of these issues is outside the scope of this report, several themes, including “moral hazard,” the appropriateness of IMF conditionality, and the effect of IMF programs on the poor, have been consistently raised and illustrate the complexity of this debate. The issue of moral hazard has two components: (1) the willingness and ability of an international financial institution, such as the IMF, to “rescue” a country from problems that may be of its own doing; and (2) the concern that the financing provided by these institutions is shielding private sector participants from the risks inherent in their investments. In the first instance, critics argue that the incentives for a country to avoid financial difficulties are diminished by its reliance on IMF assistance to lessen the impact of its policy mistakes. In response to this criticism, the IMF stresses that crises inevitably bring painful consequences and that, in exchange for receiving its financial assistance, countries have to agree to adopt a stringent conditionality program that is designed to address each country’s underlying problems.
The adjustments required in implementing such a program can be very costly and painful, and thus should provide a sufficient disincentive for countries to pursue questionable policies. Furthermore, countries are obligated to repay the IMF for the financial assistance provided. Under the second moral hazard issue, critics of the IMF contend that in providing financial support to countries, the IMF also “bails out” large international banks and other private lenders. When a member country receives financial assistance from the IMF, the funds can be used to pay off existing creditors, including those in the private sector. This activity has raised concerns about the efficiency of the international financial system by shielding private sector participants from the risks inherent in their investments. If some creditors are not fully assuming investment risk, and are lending under the assumption that the IMF and other official support will be forthcoming if necessary, distortions could be introduced into the international financial system. The IMF and the Group of Seven (G-7), in recent public announcements, have acknowledged the existence of this threat to the international financial system and are exploring strategies for reducing it. However, it has been argued that the danger of moral hazard should be balanced against the danger of the further spread of financial difficulties, or “contagion.” During a crisis, lenders and investors may try to limit their exposure to all developing countries, not just those in crisis. This can result in countries with sound economic policies experiencing a financial crisis, driven largely by external events out of their control. By providing assistance to nations facing such a crisis, the IMF may also slow or stop the exit of private-sector lending to other developing countries and thus help minimize this potential threat to the international financial system.
The appropriateness of IMF conditionality has also been subject to a considerable amount of debate. First, some critics believe that the IMF has overstepped its original mission by including conditions related to economic and social development strategies (“mission-creep”). Second, some critics have stressed that the imposition of an IMF conditionality program, under crisis conditions, that lacks a political consensus is unlikely to be successful and could in fact generate instability within the country. Third, during the Asian financial crises, several critics questioned the IMF’s underlying economic assumptions for these countries, believing the initial IMF programs in Korea, Thailand, and Indonesia represented the IMF’s standard approach to crises (macroeconomic austerity) that was inappropriate for these countries’ situations. According to those critics, the IMF’s “cookie-cutter approach” was doing those countries more harm than good. In response, the IMF has said that the flexibility of its approach to countries has allowed it to adapt to changing situations. In particular, its increasing emphasis on structural issues has reflected a growing understanding that balance-of-payments problems cannot be resolved if an economy suffers from deep-seated structural weaknesses. Moreover, the IMF has emphasized that its arrangements for individual countries constantly evolve, depending on developments, and that conditions are modified as necessary. The Thai, Indonesian, and Korean programs, for instance, were modified to take account of these countries’ unexpectedly severe recessions. The IMF has also striven in recent years to coordinate its efforts with other international financial institutions, including the World Bank. The IMF has also been criticized because of the belief that its programs impose undue hardships on the poor. 
These critics point out that IMF programs often require that governments cut expenditures and reduce budget deficits in order to meet the IMF’s macroeconomic goals. They argue that such cuts often result in reductions in spending on health, education, and other social programs vital to the poor. The IMF has acknowledged that, in certain cases in the past, spending on programs for the poor has been excessively reduced. To lessen this potential, the IMF says that it now pays considerable attention to social issues and to social safety nets, sometimes requiring that countries maintain minimum spending levels for social programs despite the need for a general reduction in government spending. This glossary is provided for reader convenience, not to provide authoritative or complete definitions for IMF funding arrangements, programs, and facilities. Arrangement: A decision by the IMF that gives a member the assurance that the institution stands ready to provide foreign exchange or special drawing rights (SDRs) in accordance with the terms of the decision during a specified period of time. An IMF arrangement—which is not a legal contract—is approved by the IMF Executive Board in support of an economic program under which the member undertakes a set of policy actions to reduce economic imbalances and achieve sustainable growth. Resources used under an arrangement carry with them the obligation to repay the IMF in accordance with the applicable schedule, and to pay charges on outstanding purchases (drawings). (See “purchases and repurchases.”) Article IV consultation: Under Article IV of the IMF’s Articles of Agreement, the IMF holds bilateral discussions with members, usually every year. A staff team visits the country, collects economic and financial information, and discusses with officials the country’s economic developments and policies. On return to headquarters, the staff prepares a report, which forms the basis for discussion by the Executive Board.
At the conclusion of the discussion, the Managing Director, as Chairman of the Board, summarizes the views of directors, and this summary is transmitted to the country’s authorities. Articles of Agreement: An international treaty that sets out the purposes, principles, and financial structure of the IMF. The Articles, which entered into force in December 1945, were drafted by representatives of 45 nations at a conference held in Bretton Woods, New Hampshire. The Articles have since been amended three times, in 1969, 1978, and 1992, as the IMF responded to changes in the world economic and financial structure. Balance-of-payments accounts: A country’s balance-of-payments accounts summarize its dealings with the outside world. Balance-of-payments accounts are usually divided into two main parts, the current account and the capital account. A country is said to have a surplus in its balance-of-payments if there is an increase in its net official assets (official reserves minus its liabilities to foreign official institutions). It is said to have a deficit (or external deficit) if there is a decrease in its net official assets. Basis point: The smallest unit in quoting yields on bonds, mortgages, and notes, equal to one one-hundredth of one percentage point. Basle Accord: Bank regulators from industrialized countries adopted standards for credit risk exposure for internationally active banks in 1988 under the auspices of the Bank for International Settlements. Known as the Basle Accord, the standards were fully implemented in 1992 by member countries. The standards are formula-based and apply risk-weights to reflect different gradations of risk to each asset category. Since 1992, the standards have been amended. The most notable amendment is the establishment of risk-based capital requirements to cover market risk in bank securities and derivatives trading portfolios. Basle Core Principles: A set of standards for effective bank supervision, issued by the Basle Committee on Banking Supervision in September 1997.
The core principles were developed in close collaboration with supervisors from around the world, the IMF, and the World Bank. The standards consist of 25 core principles that form a sound framework on which to build supervisory structures that meet the needs and conditions prevalent in individual countries. Benchmark: In the context of IMF programs, a point of reference against which progress may be monitored. Benchmarks are not necessarily quantitative and frequently relate to structural variables and policies. In Enhanced Structural Adjustment Facility arrangements, some benchmarks are designated as semiannual performance criteria and are required to be observed in order to qualify for phased (semiannual) borrowings. In addition, quantitative benchmarks are set for the quarters for which there are no performance criteria, and structural benchmarks are set for any date agreed upon under the arrangement. Capital account: The capital account of the balance-of-payments shows all flows that directly affect the national balance sheet. It includes (1) direct investment by foreign firms in domestic affiliates and by domestic firms in their foreign affiliates; (2) portfolio investment, which includes net purchases by foreigners of domestic securities and net purchases by domestic residents of foreign securities; (3) net lending to domestic residents by foreigners and net lending by domestic residents to foreigners; and (4) changes in cash balances, which include changes in cash balances held by banks and other foreign-exchange dealers, resulting from current and capital transactions. Compensatory and Contingency Financing Facility: A special IMF financing facility (window) that was established in 1988 to combine the long-standing Compensatory Financing Facility (retaining its essential features) with elements of contingency financing.
The compensatory element provides resources to members to cover shortfalls in export earnings and services receipts, as well as excesses in cereal import costs, that are temporary and arise from events beyond the members’ control. The contingency element may help members with IMF arrangements to maintain their economic programs when faced with a broad range of unforeseen adverse external shocks. Conditionality: As defined by the IMF, economic policies that members intend to follow as a condition for the use of IMF resources. These are often expressed as performance criteria (for example, monetary and budgetary targets) or benchmarks, and are intended to ensure that the use of IMF credit is temporary and consistent with the adjustment program designed to correct a member’s external payments imbalance. Current account: This is the broadest measure of a country’s international trade in goods and services. Its primary component is the balance of trade, which is the difference between merchandise exports and imports. The current account shows all the flows that directly affect the national-income accounts. It includes exports and imports of merchandise and services, inflows and outflows of investment income, and grants, remittances, and other transfers. Emergency Financing Mechanism: A set of exceptional procedures established by the IMF to facilitate rapid Executive Board approval of IMF financial support for a member while ensuring the conditionality necessary to warrant such support. These emergency measures are to be used only in circumstances representing, or threatening to give rise to, a crisis in a member’s external accounts that requires an immediate IMF response. Enhanced Structural Adjustment Facility (ESAF): An IMF facility established in December 1987 to provide assistance on concessional terms to low-income member countries facing protracted balance-of-payments problems. The ESAF’s operations are financed through borrowing by a trust administered by the IMF as trustee.
Exchange rate policy: A government’s policies concerning at what price (or whether) it will seek to stabilize or otherwise influence the rate of exchange between the domestic currency and other currencies.

Exchange Stabilization Fund: The currency reserve fund of the U.S. government employed to stabilize the dollar and foreign exchange markets. The ESF is managed by the Treasury; the Federal Reserve Bank of New York acts as fiscal agent for the Treasury. The ESF holds special drawing rights allocated to the United States by the IMF.

Extended arrangement: A decision of the IMF under the Extended Fund Facility that gives a member the assurance of being able to purchase (draw) resources from the General Resources Account, in accordance with the terms of the decision, during a specified period, usually three to four years, and up to a particular amount.

Extended Fund Facility: A financing facility (window) under which the IMF supports economic programs that generally run for three years and are aimed at overcoming balance-of-payments difficulties resulting from macroeconomic and structural problems. Typically, an economic program states the general objectives for the three-year period and the specific policies for the first year. Policies for subsequent years are spelled out in program reviews.

See “Balance of Payments Accounts.”

Fiscal policy: Taxation and government spending policies designed to achieve government goals, such as full employment, price stability, or growth in the economy.

Foreign direct investment: Foreign direct investment occurs when citizens of one nation purchase nonfinancial assets in some other nation. Distinguished from portfolio investment (below), foreign direct investment generally involves ownership of assets used in production (e.g., factories).

Foreign portfolio investment: The purchase by one country’s private citizens or their agents of marketable noncontrolling positions in equity and debt securities issued by another country’s private citizens, corporations, banks, and governments. Commonly, these marketable noncontrolling positions can be easily reversed.
Foreign exchange: Foreign exchange is the money issued by a foreign country. The foreign exchange market is an interbank or over-the-counter market in foreign exchange consisting of a network of commercial banks, central banks, brokers, and customers.

Foreign exchange reserves: The stock of liquid assets denominated in foreign currencies held by the monetary authorities (finance ministry or central bank). Reserves enable the monetary authorities to intervene in foreign exchange markets to affect the exchange value of their domestic currency in the market. Reserves are typically part of the balance sheet of the central bank and are invested in low-risk, liquid assets, often foreign government securities.

Front-loading: In an IMF arrangement, placing a more than proportional part of the disbursement of the financial resources available to a member near the beginning of the arrangement.

General Arrangements to Borrow: Long-standing arrangements under which 11 industrial countries stand ready to lend to the IMF to finance purchases (drawings) that aim at forestalling or coping with a situation that could impair the international monetary system. Since their establishment in 1962, these arrangements have been renewed every four to five years and have been invoked 10 times, according to IMF documents. Additional funds are also available to the IMF under an “associated agreement” with Saudi Arabia.

General resources: Assets, whether ordinary (owned) or borrowed, maintained within the IMF’s General Resources Account.

London Interbank Offered Rate (LIBOR): Key interest rates at which the major banks in the London interbank market are willing to lend funds to each other at various maturities and for different currencies. LIBOR has become the most important floating-rate pricing benchmark for loans and debt instruments in the global financial markets. These rates are published daily by the Bank of England and are based on a sampling from a group of reference banks that are active in the Eurocurrency market, but agreements that use LIBOR do not necessarily rely on the quotes published by the Bank of England.
Macroeconomic policy: Macroeconomic policy is governmental and central bank policy concerning a nation’s economy as a whole, including, among other things, price levels, unemployment, inflation, and industrial production. The macroeconomic analysis of open economies is concerned with the effects of international and domestic transactions on output, employment, and the price level, and with the effects of these, in turn, on the balance of payments and the exchange rate. It is also concerned with the implications of openness and of exchange-rate arrangements for the functioning of monetary and fiscal policies.

Monetary policy: Monetary policy is the central bank’s use of its control over the quantity of money and interest rates to influence the level of economic activity. The quantity of money can affect price levels and, for a given real income, the level of nominal income within a given system. The central bank often concentrates its policy actions, such as the interest rates it charges banks to borrow, on achieving a money stock target. In theory, the demand for money changes with changes in income and interest rates, in addition to other factors.

New Arrangements to Borrow: Arrangements under which 25 member countries or their financial institutions stand ready to lend to the IMF under circumstances similar to those covered by the General Arrangements to Borrow (see General Arrangements to Borrow). The New Arrangements to Borrow are not intended to replace the General Arrangements to Borrow, and the total amount of resources potentially available under the New Arrangements to Borrow and the General Arrangements to Borrow is about $46 billion. The New Arrangements to Borrow can be activated when participants representing 85 percent of the credit lines’ resources determine that there is a threat to the international financial system. The New Arrangements to Borrow became effective on November 17, 1998, and were activated in December 1998 in connection with the financing of an arrangement for Brazil.
Performance criteria: Measurable and observable indicators, such as monetary and budgetary targets, or structural (policy) adjustments, that must be met, typically on a quarterly basis, for a member to qualify for purchases under its arrangement with the IMF. These indicators measure a country’s implementation of conditions agreed to under the country’s IMF program. Performance criteria are generally categorized as quantitative or structural, depending on the conditions being measured. (See also “benchmarks.”)

Phasing: The practice of making the IMF’s resources available to its members in installments over the period of an arrangement.

Purchases (drawings): When the IMF makes its general resources available to a member, it does so by allowing the member to purchase SDRs or other members’ currencies in exchange for its own (domestic) currency. The IMF’s general resources are, by nature, revolving; purchases (or drawings) have to be reversed by repurchases (or repayments) in installments within the period specified for a particular policy or facility.

See “performance criteria” and “benchmarks.”

Quota: The capital subscription, expressed in SDRs, that each member must pay to the IMF on joining; up to 25 percent is payable in SDRs or other acceptable reserve assets and the remainder in the member’s own currency. Quotas, which reflect members’ relative size in the world economy, are normally reviewed every five years.

Sovereign debt: The debt instruments issued or guaranteed by the central government of a country. Debt instruments are typically bonds evidencing amounts owed and payable on specified dates or on demand.

Special drawing right (SDR): An international reserve asset created by the IMF in 1969 as a supplement to existing reserve assets. Its value as a reserve asset is derived, essentially, from the commitments of participants to hold and accept SDRs and to honor various obligations connected with its proper functioning as a reserve asset. The IMF defines its value in terms of a basket of major international currencies that fluctuates with market conditions.
Stand-by arrangement: A decision of the IMF by which a member is assured that it will be able to make purchases (drawings) from the General Resources Account up to a specified amount and during a specified period of time, usually one to two years, provided that the member observes the terms set out in the supporting arrangement.

See “performance criteria” and “benchmarks.”

Supplemental Reserve Facility: A facility (window) established in December 1997 to provide financial assistance to members experiencing exceptional balance-of-payments difficulties due to short-term financing needs resulting from a sudden and disruptive loss of market confidence reflected in pressure on the capital account and the members’ reserves.

The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are also accepted. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent.

Orders by mail:
U.S. General Accounting Office
P.O. Box 37050
Washington, DC 20013

or visit:
Room 1100
700 4th St. NW (corner of 4th and G Sts. NW)
U.S. General Accounting Office
Washington, DC

Orders may also be placed by calling (202) 512-6000, by using fax number (202) 512-6061, or by TDD (202) 512-2537. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touch-tone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a legislative requirement, GAO reviewed the International Monetary Fund (IMF), focusing on how the IMF: (1) establishes financial arrangements with borrower countries and the types of conditions set under these arrangements, assessing how this process was used for six borrower countries; and (2) monitors countries' performance, assessing how this process was used for the same six borrower countries and detailing the conditions met and not met, the reasons why conditions were not met, and the actions IMF took in response. GAO noted that: (1) under IMF's Articles of Agreement, as amended, IMF limits financial assistance to those countries with a balance-of-payments need; (2) continued disbursement of assistance to a country is based on IMF's consideration of data on and judgment of the country's progress in meeting the agreed-upon conditions; (3) IMF has developed a broad framework for establishing a financial assistance arrangement that is to be applied on a case-by-case basis considering each country's circumstances; (4) the specific conditions that IMF and the country authorities establish are intended to address the immediate and underlying problems that contributed to the country's balance-of-payments difficulty, while ensuring repayment to IMF; (5) after a country fulfills any early IMF requirements and the IMF Executive Board approves the financial arrangement, the program is to take effect and the country is eligible to receive its first disbursement of funds; (6) according to information GAO reviewed for the six countries in its study, IMF generally followed this process to establish the financial assistance package and the conditions for the assistance; (7) the underlying causes and magnitude of the balance-of-payments difficulty varied among the countries but generally stemmed from concerns about their continued access to external financing; (8) the IMF's process for monitoring a country's progress toward overall program goals and
compliance with program conditions is designed to respond to an individual country's progress and situation; (9) according to IMF staff, many disbursements are conditioned only on the determination by IMF staff that the country has met prenegotiated quantitative criteria; other disbursements are subject to reviews by the IMF Executive Board; (10) the process for conducting IMF Board reviews, which involves the borrower country and IMF, is designed to incorporate data on a country's economic performance as well as the judgment of the IMF Executive Board and staff; (11) according to the information GAO reviewed, the monitoring of the IMF's conditionality program in the six countries in GAO's study was generally consistent with this approach; (12) IMF missions to each country reviewed the country's economy and documented the country's progress in satisfying conditions; and (13) in some cases, IMF determined that country progress in meeting the conditions had not been sufficient, and its response varied depending on the specifics of the condition and the judgment of the IMF staff and Executive Board on the country's overall progress.
TRICARE, DOD’s health care program, has 9.1 million eligible beneficiaries, who include active duty personnel, certain reservists, and retired members of the uniformed services, as well as their families and survivors. Beneficiaries may generally obtain care from either MTFs or civilian providers. TRICARE beneficiaries can obtain prescription drugs directly from MTFs, the TMOP, and network and nonnetwork retail pharmacies. The pharmacy benefits law, as passed in October 1999, directed the Secretary of Defense to establish a pharmacy benefits program. The program is, among other things, required to include a uniform formulary that should ensure drugs are available in the complete range of therapeutic classes; required to make drugs on the uniform formulary available to beneficiaries at MTFs, the TMOP, and retail pharmacies; and authorized to establish copayment requirements for generic, formulary, and nonformulary drugs. The pharmacy benefits law also directed the Secretary of Defense to establish the P&T Committee to develop the uniform formulary, and the BAP to review and comment on the development of the uniform formulary. Finally, the Secretary of Defense was to implement the use of the Pharmacy Data Transaction Service (PDTS) at designated MTFs, the TMOP, and retail network pharmacies. The PDTS is an electronic service that DOD uses to maintain prescription drug information for all TRICARE beneficiaries worldwide. In 2001, DOD established the current pharmacy copayment structure, which is based on whether a drug is classified as formulary generic (tier 1), formulary brand-name (tier 2), or nonformulary (tier 3). The copayment also depends on where the beneficiary chooses to fill his or her prescription. (See table 1.)
The NDAA for Fiscal Year 2007 directed DOD to establish the Task Force on the Future of Military Health Care to assess health care services provided to members of the military, retirees, and their families and to make recommendations for sustaining those services. In addition to other aspects of DOD’s health care system, the task force reviewed DOD’s pharmacy benefits program. It issued an interim report in May 2007 and a final report in December 2007 to the Secretary of Defense on its findings and recommendations. The Secretary of Defense may comment on the recommendations provided in the task force’s final report and, within 90 days of its issuance, must forward the report to the Committees on Armed Services of the Senate and the House of Representatives. DOD’s spending on prescription drugs more than tripled from $1.6 billion in fiscal year 2000 to $6.2 billion in fiscal year 2006. Retail pharmacy spending accounted for the greatest increase, rising almost ninefold from $455 million to $3.9 billion. It also grew from 29 percent of DOD’s overall drug spending to 63 percent—the largest increase of the points of service. TMOP spending rose from $106 million to $721 million and increased from 7 percent of total spending to 12 percent. MTF pharmacy spending rose from $1 billion in fiscal year 2000 to $1.7 billion in fiscal year 2004, but declined slightly to $1.5 billion in fiscal year 2006. In fiscal year 2000, MTF spending accounted for 65 percent of DOD’s overall drug spending but declined to 25 percent in fiscal year 2006. (See fig. 1.) Three overarching factors influenced these trends. First, because federal pricing arrangements that generally result in lower prices were not applied to drugs dispensed at retail pharmacies during this time period, these drugs were generally more expensive for both DOD and its beneficiaries than the drugs dispensed at MTFs or the TMOP.
However, the NDAA for Fiscal Year 2008 requires that federal pricing arrangements now be applied to TRICARE prescriptions filled at retail pharmacies. Second, the increased use of retail pharmacies has exacerbated the effect of higher retail prices. More beneficiaries are using only retail pharmacies to obtain their prescriptions—about 2 million in fiscal year 2006, up from about 1 million in fiscal year 2002 (see fig. 2). Further, beneficiaries are obtaining more maintenance drugs—drugs for long-term conditions, such as high blood pressure or cholesterol—at retail pharmacies (see fig. 3). From fiscal year 2004 through fiscal year 2006, the number of maintenance drug prescriptions dispensed at retail pharmacies increased by more than 11.6 million. Those dispensed at the TMOP increased much less, by about 1.5 million, while those at MTFs decreased by about 2.5 million. DOD officials cited additional reasons that they believed contributed to the increased use of retail pharmacies, though they could not quantify the effect of these reasons. These reasons included: base closures, which have decreased the number of MTF pharmacies; deployment of MTF personnel, which limits MTF appointment availability, resulting in more beneficiaries going to civilian providers and filling their prescriptions at retail pharmacies; the vast TRICARE retail network of about 59,000 pharmacies, which has become more convenient for beneficiaries; and the prescription copayment structure, which does not discourage beneficiaries from using the more costly retail pharmacies. Third, according to DOD officials, TRICARE expansions have led to a growing population of aging beneficiaries, who use more drugs. By fiscal year 2006, about 1.7 million beneficiaries, age 65 or older, were eligible for the pharmacy benefit through TRICARE benefit expansions that began in 2001. 
According to DOD data, retail pharmacy spending for beneficiaries age 65 or older increased by about 207 percent from fiscal year 2002 through fiscal year 2006—slightly higher than the 184 percent increase for beneficiaries under age 65. (See fig. 4.) DOD officials told us that the average cost per beneficiary at retail pharmacies in fiscal year 2006 was about $1,277 for beneficiaries age 65 or older, compared with about $368 for those under age 65. MTF spending declined slightly for both age groups as TMOP spending increased. Those under age 65 were more likely to use MTFs, while those age 65 or older were more likely to use the TMOP. DOD has efforts under way to limit its prescription drug spending through the use of its uniform formulary and through beneficiary outreach for the TMOP. In an attempt to further limit its drug spending, both DOD and its Task Force on the Future of Military Health Care have recommended changes to the beneficiary copayment structure intended to encourage beneficiaries to use more cost-effective points of service. However, the NDAA for Fiscal Year 2008 prohibits any increase to retail copayments through fiscal year 2008. According to DOD officials, the agency has limited its prescription drug spending primarily through costs avoided through the use of its uniform formulary, which was implemented during 2005. DOD data show that the agency avoided about $447 million in drug costs in fiscal year 2006 and $916 million in drug costs in fiscal year 2007. MTFs accounted for most of DOD’s cost avoidance, while retail network pharmacies accounted for the least. Cost avoidance is affected by the following factors that result from DOD’s formulary decisions:

The prices DOD obtains for drugs.
In exchange for a drug’s inclusion on the uniform formulary, manufacturers can offer DOD prices below those otherwise available through statutory federal pricing arrangements, which applied only to drugs dispensed at MTFs and the TMOP during the time of our review. According to DOD officials, the agency had obtained prices for drugs dispensed at MTFs and the TMOP that are about 30 percent to 50 percent lower than the prices it obtained for drugs dispensed at network and nonnetwork retail pharmacies. This difference in price can be attributed to savings achieved through the discounts obtained for uniform formulary placement as well as the lower prices obtained through federal pricing arrangements for drugs dispensed at MTFs and the TMOP.

Changes in beneficiaries’ use of formulary and nonformulary drugs within a therapeutic class. Once a drug is designated nonformulary, its use may be substituted with a formulary drug, which results in lower copayments for the beneficiary and lower costs to DOD. Because MTFs are generally limited to dispensing formulary drugs, cost avoidance attributed to the use of formulary drugs over nonformulary drugs is higher at this point of service than at the TMOP and retail network pharmacies, where beneficiaries can obtain more costly nonformulary drugs.

Changes in beneficiaries’ use of generic and brand-name drugs within a therapeutic class. For both formulary and nonformulary drugs, DOD requires the substitution of generic drugs for brand-name drugs at MTFs, the TMOP, and retail pharmacies when a generic equivalent is available. A brand-name drug having a generic equivalent may be dispensed only if the prescribing physician establishes medical necessity for its use. A beneficiary’s use of a generic drug in place of a brand-name drug results in lower costs to the beneficiary and to DOD.

Changes in beneficiaries’ use of MTFs, the TMOP, and retail pharmacies as a result of formulary designations.
For example, a beneficiary may shift from obtaining a 30-day supply of a formulary drug at a retail pharmacy, where the beneficiary’s copayment would be higher, to an MTF where the beneficiary can obtain a 90-day supply of the drug without a copayment. To calculate cost avoidance, DOD first determines the costs it incurred at MTFs, the TMOP, and retail network pharmacies for each drug as a result of its designation as either formulary or nonformulary. DOD then subtracts these incurred costs from the estimated costs it would have incurred at MTFs, the TMOP, and retail network pharmacies if the designation had not been made. Cost avoidance is the difference between the incurred and estimated costs. In addition to costs avoided, DOD has obtained voluntary manufacturer rebates for some of the formulary drugs dispensed at retail network pharmacies—though these rebates are a much smaller proportion of overall savings. Because federal pricing arrangements were not previously applied to drugs dispensed at retail pharmacies, DOD implemented the VARR in August 2006 to allow manufacturers to offer rebates for these drugs. There are two types of VARRs: the Uniform Formulary VARR and the Utilization VARR. The Uniform Formulary VARR is an agreement between DOD and a manufacturer that is contingent on the manufacturer’s drug being selected for the uniform formulary. DOD officials told us that as of October 1, 2007, the agency had collected about $28 million through Uniform Formulary VARRs for fiscal year 2007. As manufacturers continue to enter into these agreements, DOD expects the amount it collects to increase over time. The Utilization VARR allows manufacturers to offer a rebate to DOD for drugs that are not on the uniform formulary. According to DOD, this includes drugs that have not yet been reviewed for the uniform formulary and drugs that have been reviewed and designated nonformulary. 
Unlike the Uniform Formulary VARR, the Utilization VARR does not secure formulary placement. As of October 2007, no manufacturers had entered into a Utilization VARR with DOD. In our discussions with 10 drug manufacturers about the VARR program, 7 of them told us that they had submitted Uniform Formulary VARRs for DOD’s consideration. Of these 7 manufacturers, 5 indicated that their participation was driven by the possibility that their drug would be selected for the uniform formulary. With regard to the Utilization VARR, 8 of the 10 manufacturers we spoke with indicated that there was little or no incentive provided to manufacturers to enter into these rebate agreements with DOD. DOD has outreach efforts under way intended to help encourage beneficiaries to use the TMOP instead of retail pharmacies. In 2006, according to DOD officials, the agency began to expand its outreach for the TMOP through quarterly newsletters, news releases, and other materials emphasizing its convenience and cost savings for beneficiaries. DOD partnered with, for example, beneficiary organizations and family support groups to help distribute these outreach materials. DOD also encouraged health care providers to promote the use of the TMOP among the TRICARE beneficiaries they serve. MTF pharmacists also participated in these efforts by posting signs advertising the TMOP in their facilities. In addition, DOD launched its Member Choice Center in August 2007, the goal of which is to help beneficiaries transfer their prescriptions from retail pharmacies to the TMOP. To educate beneficiaries about the center’s availability, DOD included information about it in newsletters and other outreach materials. According to DOD officials, the center transferred about 60,000 prescriptions from retail pharmacies to the TMOP as of late December 2007. In addition to these efforts, DOD intended to specifically target those beneficiaries who frequently obtained high-cost drugs from retail pharmacies. 
DOD officials told us that, as of January 2008, this aspect of the program had not yet begun and that DOD was working with the contractor for the TMOP to develop a letter to be sent to these beneficiaries. DOD has proposed changes to beneficiary copayments for fiscal years 2007 and 2008 in an effort to encourage beneficiaries to obtain prescriptions from more cost-effective points of service. Specifically, DOD proposed to eliminate copayments for generic drugs dispensed at the TMOP and to increase retail pharmacy copayments from $3 for formulary generic drugs to $5, and from $9 for formulary brand-name drugs to $15. DOD first proposed these changes for fiscal year 2007, but Congress prohibited any increase to retail pharmacy copayments for that fiscal year. DOD repeated the proposal for the next fiscal year, but the NDAA for Fiscal Year 2008 prohibits any increase to retail copayments through the fiscal year. In addition, the Task Force on the Future of Military Health Care concluded in its final report that DOD’s copayment policies and formulary tier structure do not create effective incentives to stimulate compliance with clinical best practices or the most cost-effective points of service for obtaining drugs. It recommended that DOD’s pharmacy tier and copayment structures be revised based on clinical and cost-effectiveness standards to promote greater incentive to use preferred medications and cost-effective points of service. Specifically, the task force stated that a four-tier formulary could encourage beneficiaries to use less costly drugs and use them more appropriately. It also stated that when a formulary includes more tiers, it is easier to lower out-of-pocket costs for drugs that treat certain chronic diseases and remove compliance barriers. DOD decides which drugs to include on the uniform formulary based on reviews in which the clinical and cost-effectiveness of a drug is compared with other drugs in its class. 
This process, established by DOD under the requirements of the pharmacy benefits law, involves three entities:

The Pharmacy and Therapeutics (P&T) Committee recommends drugs to be added to the uniform formulary based on clinical and cost-effectiveness reviews. (For P&T Committee membership, see app. I.)

The BAP comments on the P&T Committee’s recommendations from a beneficiary perspective. (For BAP membership, see app. I.)

The Director of TMA makes final decisions after considering both the P&T Committee’s recommendations and the BAP’s comments. (See fig. 5.)

The P&T Committee meets quarterly and generally reviews two to four drug classes at each meeting. The priority for therapeutic class reviews is determined by various factors, such as the conversion of a drug from brand-name to generic and the rate of utilization among beneficiaries. The P&T Committee first reviews the clinical effectiveness of the drugs in a class. It considers such information as indications for which the drug has been approved by the Food and Drug Administration, the incidence and severity of adverse effects, and the results of studies on effectiveness and clinical outcomes. Using this information, the committee determines whether the drugs are therapeutically equivalent. It then reviews the cost-effectiveness of the drugs, considering such information as the price and rebate quotes submitted by manufacturers and the estimated financial effect of possible formulary decisions. The committee then determines the relative cost-effectiveness of each drug in the class. On the basis of the outcomes of both the clinical and cost-effectiveness reviews, the committee recommends that each drug in the class be designated as either formulary or nonformulary. If the committee finds that the drugs in a class are therapeutically equivalent, it generally recommends that the lower-cost drugs be designated as formulary.
However, the committee has recommended that certain higher-cost drugs it believed offered additional clinical benefits be designated as formulary. For example, the committee recommended that two drugs used to treat breakthrough pain in cancer patients, Fentora and Actiq, be designated as formulary despite a more than fortyfold increase in cost over the two most cost-effective drugs in the class. While therapeutically equivalent to the other drugs in the class, both Fentora and Actiq can be dissolved orally, which the committee valued for patients who have difficulty swallowing drugs in tablet form. In addition to recommending that a drug be designated as formulary or nonformulary, the P&T Committee recommends an implementation period to inform pharmacies and beneficiaries of formulary decisions. Its recommendations are then provided to the BAP. Once the BAP receives the P&T Committee’s recommendations, it provides comments on behalf of beneficiaries. It reviews each recommendation and determines whether it agrees or disagrees with the P&T Committee. As of October 2007, the BAP and the P&T Committee had disagreed about 17 percent of the time, mostly about the length of implementation periods. For example, the P&T Committee recommended that formulary and nonformulary designations for drugs used to treat overactive bladder conditions become effective about 60 days after the final formulary decision was made. The BAP stated that additional time was needed to notify beneficiaries currently using drugs within the class, suggesting that the formulary designations become effective about 120 days after the final formulary decision was made. Finally, the BAP’s comments are documented and submitted to the Director of TMA for consideration when making final formulary decisions. After reviewing both the P&T Committee’s recommendations and the BAP’s comments, the Director of TMA makes final formulary decisions.
In a decision paper, the director approves or disapproves the P&T Committee’s recommendations and may provide written comments explaining his decision. Although the Director of TMA makes the final decision, no drug may be designated as nonformulary unless the P&T Committee has recommended the nonformulary designation. As of October 2007, the Director of TMA had approved 188 of the 190 P&T Committee recommendations. Uniform formulary decisions become effective on the date decision papers are signed by the Director, and the papers are made publicly available on the TRICARE Web site. As of October 2007, 28 drug classes representing 322 drugs had been reviewed for the formulary. Of the 322 drugs reviewed, 249 were designated as formulary. DOD uses electronic systems, which detect potential problems related to prescribed drugs, for quality assurance at MTFs, the TMOP, and retail network pharmacies. It also obtains beneficiary feedback through surveys and through beneficiaries’ comments. In addition, DOD uses pharmacy data to identify beneficiaries who might benefit from participating in a disease management program. AHLTA, a global electronic health information system, alerts MTF providers to duplicate drug treatments, therapeutic overlap, drug interactions, and drug allergies when a prescription is entered into the system. MTF providers are required to use AHLTA when prescribing drugs. If, for example, AHLTA identifies a drug allergy, the provider receives an alert and can prescribe an alternative drug. The Composite Health Care System (CHCS) provides similar alerts to staff at MTF pharmacies. When a patient’s prescription is processed, the CHCS informs the staff of duplicate treatments, therapeutic overlap, drug interactions, and drug allergies. DOD officials stated that CHCS acts as a redundant quality assurance mechanism, allowing pharmacists to double-check prescriptions written by MTF providers.
If a beneficiary brings a prescription to the MTF pharmacy from a contract provider (outside of the MTF), CHCS will still inform the pharmacy staff of potential problems when they enter the prescription information into the system. The PDTS detects duplicate drug treatments, therapeutic overlap, and drug interactions at the TMOP and retail network pharmacies. From these points of service, the prescription information is electronically submitted to the PDTS, which verifies the individual’s TRICARE enrollment and provides information on duplicate treatments, therapeutic overlap, and drug interactions. The TMOP and retail network pharmacies are responsible for obtaining drug allergy information from the beneficiary, because the PDTS does not contain that information. Beneficiaries are asked to provide drug allergy information when they sign up to receive prescriptions through the TMOP. At retail pharmacies, the pharmacist is supposed to ask beneficiaries about their drug allergies and check the local pharmacy system for this information. Prescriptions filled at nonnetwork retail pharmacies are entered into the PDTS when DOD receives a claim submitted by the beneficiary. DOD administers two surveys that ask specific questions about the TRICARE pharmacy benefit. The Health Care Survey of DOD Beneficiaries is administered quarterly, but questions specific to the pharmacy benefit are asked once a year. The survey asks beneficiaries who had prescriptions filled during the last 90 days about pharmacy access and utilization. The second survey, the TMOP Satisfaction Survey, is a telephone survey administered quarterly. Survey participants are selected randomly among beneficiaries who used the TMOP in the last 90 days. The purpose of this survey is to determine whether Express Scripts, the contractor that administers the TMOP, will receive an incentive payment. 
Express Scripts is provided this payment when the level of beneficiary satisfaction with the TMOP is 90 percent or greater. Express Scripts has scored 90 percent or greater for 17 of the 18 quarters since March 2003. DOD officials stated that they also obtained beneficiary comments on the pharmacy benefits program during meetings with representatives of military associations that represent many TRICARE beneficiaries. At the local level, MTFs also collect information about beneficiary experience with the MTF pharmacy on such issues as hours of operation, waiting times, and service provided by the pharmacy technicians. These issues are usually addressed at the individual MTFs. DOD generally uses the results of the Health Care Survey of DOD Beneficiaries to tailor articles in newsletters about the pharmacy program and to make improvements to it—for example, to simplify and encourage the use of the TMOP. DOD officials stated that on the basis of the results of the 2006 survey and feedback from military associations, they learned DOD beneficiaries wanted an easy method to transfer their prescriptions from retail pharmacies to the TMOP. In August 2007, DOD launched the Member Choice Center, where beneficiaries can call for assistance, register online for the TMOP, and transfer their prescriptions from retail pharmacies. The center contacts the beneficiary’s physician, at the beneficiary’s request, to obtain new prescriptions and forward them to the TMOP for processing. According to DOD officials, DOD uses PDTS data to identify beneficiaries who might benefit from participating in DOD’s disease management program, an organized effort to achieve desired health outcomes in populations with prevalent, often chronic diseases, for which care practices may be subject to considerable variation. The PDTS contains data on specific drugs, dosages, and dispensing dates. So, for example, DOD uses PDTS data on drugs dispensed for asthma to identify beneficiaries who have asthma. 
DOD uses this information and other criteria to determine whether a beneficiary is a candidate for the asthma disease management program. Once identified, DOD provides patient lists to the managed care support contractors, who also provide the information to MTFs. Providers are encouraged to support their patient’s active participation in the disease management program and to facilitate care, such as needed laboratory tests or screening examinations. DOD implemented disease management programs, administered by the managed care support contractors, for congestive heart failure and asthma in September 2006 and for diabetes in June 2007. MTFs are required to provide disease management programs for asthma, diabetes, and screening mammograms. DOD conducts annual comprehensive analyses to quantify the effect of the disease management programs. The NDAA for Fiscal Year 2007 required that DOD’s disease management program address, at a minimum: diabetes, cancer, heart disease, asthma, chronic obstructive pulmonary disorder, and depression and anxiety disorders. DOD is working to expand its disease management program to include all of the specific diseases and conditions mandated and plans to report to Congress in March 2008 on the program’s design, development, and implementation plan. DOD’s pharmacy spending increased at an unsustainable rate from fiscal year 2000 through fiscal year 2006. Retail pharmacy spending drove most of the increase, primarily due to the lack of federal pricing arrangements and increased beneficiary utilization at these pharmacies. In contrast, increases in pharmacy spending at MTFs and the TMOP, typically the more cost-effective points of service, were less pronounced. DOD has taken steps to curtail its rising pharmacy spending, including using its uniform formulary to obtain lower drug prices and creating a rebate program for retail pharmacies—efforts that have saved the agency hundreds of millions of dollars. 
More recently, DOD established an outreach program to encourage beneficiaries to transfer their prescriptions from retail pharmacies to the TMOP, which has been a less costly option for both DOD and its beneficiaries. DOD’s ongoing efforts are important to limit future prescription drug spending. In addition, the agency has its task force’s proposals to consider, which include changes to the copayment and tier structures aimed at shifting beneficiary utilization away from retail pharmacies. The agency is also undertaking a fundamental reform—the NDAA for Fiscal Year 2008 requirement to apply federal pricing arrangements to drugs dispensed at retail pharmacies—that could have an even greater effect on spending. DOD will need to carefully monitor the effect of this new requirement along with its ongoing efforts in order to assess the progress in controlling spending. DOD will also need to determine what types of additional efforts, if any, will be necessary to ensure the fiscal sustainability of its pharmacy benefits program. To help ensure the fiscal sustainability of DOD’s pharmacy benefits program and complement more fundamental reforms recently enacted or recently proposed, we recommend that the Secretary of Defense direct the Assistant Secretary of Defense for Health Affairs to monitor the effect of federal pricing arrangements for drugs dispensed at retail pharmacies along with ongoing efforts to limit pharmacy spending to determine the extent to which they reduce the growth in retail pharmacy costs, and identify, implement, and monitor other efforts, as needed, to reduce the growth in retail pharmacy spending. In commenting on a draft of this report, DOD stated that it concurred with our findings and recommendations and that it remains diligent in its efforts to curtail retail pharmacy costs. 
DOD noted that its recently implemented outreach program to encourage beneficiaries to transfer prescriptions from retail pharmacies to the less expensive TMOP has had an unanticipated level of participation. Specifically, in response to our recommendation to monitor the impact of federal pricing arrangements for drugs dispensed at retail pharmacies, DOD stated that it has requested additional resources to implement this NDAA for Fiscal Year 2008 requirement. DOD acknowledged that, when fully implemented, this authority will have a significant impact on controlling the growth in retail pharmacy costs. While this is likely to be the case, we reiterate the need for DOD to monitor the extent to which the federal pricing reduces growth in pharmacy spending in order to determine whether additional efforts to reduce spending are warranted. With regard to our recommendation to implement other efforts, as needed, to reduce growth in retail pharmacy spending, DOD responded that the recommendations of its task force would have an impact on overall DOD pharmacy costs in general and retail pharmacy costs in particular. However, DOD stated that congressional action is necessary for these measures to be implemented and that it stands ready to implement them if granted the authority to do so. Nonetheless, our recommendation was not limited solely to the task force recommendations. DOD could explore other cost-saving initiatives, similar to its outreach efforts to encourage beneficiaries’ use of the TMOP, which do not require congressional action. DOD’s comments are reprinted in appendix II. We are sending copies of this report to the Secretary of Defense and other interested parties. We will also make copies available to others on request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or [email protected]. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report were Bonnie Anderson, Assistant Director; Keyla Lee; Lesia Mandzia; and Tim Walker.

The Department of Defense Pharmacy and Therapeutics Committee consists of both voting and nonvoting members. Its membership includes:

Physician Chairman, Health Affairs/TRICARE Management Activity
Director, Department of Defense Pharmacy Programs, TRICARE
Director, Department of Defense Pharmacoeconomic Center
The Army, Navy, and Air Force Surgeons General (Internal Medicine)
One Army, Navy, or Air Force Surgeon General (Pediatric specialty)
One Army, Navy, or Air Force Surgeon General (Family Practice specialty)
One Army, Navy, or Air Force Surgeon General (Obstetric/Gynecology)
One physician or pharmacist from the United States Coast Guard
The Army, Navy, and Air Force Pharmacy specialty consultants or one provider at large from the Army, Navy, and Air Force
One physician or pharmacist from the Department of Veterans Affairs
The Contracting Officer’s Representative for the TRICARE Retail
The Contracting Officer’s Representative for the TRICARE Mail Order

The Pediatric, Family Practice, and Obstetric/Gynecology positions on the P&T Committee are rotated among the services every 3 years.

Estimated to reach $15 billion by 2015, the Department of Defense's (DOD) prescription drug spending has been a growing concern for the federal government. The John Warner National Defense Authorization Act (NDAA) for Fiscal Year 2007 required GAO to examine DOD's pharmacy benefits program. Specifically, as discussed with the committees of jurisdiction, GAO examined DOD's prescription drug spending trends from fiscal years 2000 through 2006 and DOD's key efforts to limit its prescription drug spending. To conduct this work, GAO analyzed DOD's data on spending trends, including trends in beneficiary pharmacy use. 
GAO also assessed DOD's cost avoidance data and the agency's efforts to limit spending through its uniform formulary, which is a list of preferred drugs available to all beneficiaries. GAO interviewed DOD officials about these and other efforts to limit spending. Collectively, DOD's drug spending at retail pharmacies, military treatment facilities (MTF), and the TRICARE Mail Order Pharmacy (TMOP) more than tripled from $1.6 billion in fiscal year 2000 to $6.2 billion in fiscal year 2006. Retail pharmacy spending drove most of this increase, rising almost ninefold from $455 million to $3.9 billion and growing from 29 percent of overall drug spending to 63 percent. The growth in retail spending reflects the fact that federal pricing arrangements, which generally result in prices lower than retail prices, were not applied to drugs dispensed at retail pharmacies during this time. In addition, beneficiaries' increased use of retail pharmacies over the less costly options of MTFs or the TMOP exacerbated the effect of these higher prices. For example, 2 million beneficiaries used only retail pharmacies in fiscal year 2006--double the number in fiscal year 2002. However, future growth in retail pharmacy spending may slow as the NDAA for Fiscal Year 2008 now requires that federal pricing arrangements be applied to drugs dispensed at retail pharmacies. DOD's key efforts to limit its prescription drug spending have included its use of the uniform formulary and beneficiary outreach to encourage use of the TMOP. By leveraging its uniform formulary, which was implemented in fiscal year 2005, the agency avoided about $447 million in drug costs in fiscal year 2006 and $916 million in fiscal year 2007, according to DOD's data. In exchange for formulary placement, manufacturers can offer DOD prices below those otherwise available through federal pricing arrangements, which at the time of our review were applied only to drugs dispensed at MTFs and the TMOP. 
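The spending trend described above is easy to verify from the quoted figures. The following sketch (Python, using only the dollar amounts quoted in this report) shows the growth multiples and the compound annual growth rates they imply; the variable names are illustrative, not from any GAO system:

```python
# Illustrative arithmetic only: the inputs are the figures quoted in the
# report (overall and retail pharmacy spending, fiscal years 2000 and 2006).

total_fy00, total_fy06 = 1.6, 6.2        # overall drug spending, $ billions
retail_fy00, retail_fy06 = 0.455, 3.9    # retail pharmacy spending, $ billions
years = 6                                # fiscal year 2000 to fiscal year 2006

# Growth multiples: overall spending "more than tripled,"
# retail spending rose "almost ninefold."
total_multiple = total_fy06 / total_fy00      # ~3.9x
retail_multiple = retail_fy06 / retail_fy00   # ~8.6x

# Implied compound annual growth rates over the six-year span.
total_cagr = (total_fy06 / total_fy00) ** (1 / years) - 1     # ~25% per year
retail_cagr = (retail_fy06 / retail_fy00) ** (1 / years) - 1  # ~43% per year

# Retail's share of overall spending (report: 29% and 63%; small
# differences reflect rounding in the quoted dollar amounts).
retail_share_fy00 = retail_fy00 / total_fy00   # ~28%
retail_share_fy06 = retail_fy06 / total_fy06   # ~63%
```

Put in annual-growth terms, retail spending grew at roughly 43 percent per year against roughly 25 percent per year for overall drug spending, which is what it means for retail pharmacies to have "driven most of this increase."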
To compensate, in August 2006, DOD began obtaining voluntary manufacturer rebates for formulary drugs dispensed at retail network pharmacies. As of October 1, 2007, DOD collected about $28 million in rebates for fiscal year 2007. Also in 2006, DOD began beneficiary outreach--through quarterly newsletters and other materials--emphasizing the TMOP's convenience and cost savings. To help beneficiaries transfer their prescriptions to the TMOP, DOD launched the Member Choice Center in August 2007 and plans to target related outreach toward beneficiaries who frequently obtain high-cost drugs from retail pharmacies. DOD's ongoing efforts are important to limit future prescription drug spending. In addition, DOD has the recommendations of a congressionally mandated task force to consider--that copayment policies be changed to encourage beneficiaries to purchase preferred drugs from cost-effective sources. The agency is also undertaking a fundamental reform--the NDAA for Fiscal Year 2008 requirement to apply federal pricing arrangements to drugs dispensed at retail pharmacies--that could have an even greater impact on spending. DOD will need to carefully monitor the impact of this new requirement along with its ongoing efforts in order to assess the progress in controlling spending. DOD will also need to determine what types of additional efforts, if any, will be necessary to ensure the fiscal sustainability of its pharmacy benefits program.
The Food and Agriculture Organization (FAO), the U.S. government, and others define food security to exist when all people at all times have physical and economic access to sufficient food to meet their dietary needs for a productive and healthy life. Food insecurity exists when the availability of nutritionally adequate and safe foods, or the ability to acquire acceptable foods in socially acceptable ways, is limited or uncertain. Although it is generally agreed that the problem of food insecurity is widespread in the developing world, the total number of undernourished people is unknown, and estimates vary widely. For example, estimates for 58 low-income, food-deficit countries range from 576 million people to 1.1 billion people. Appendix I provides further information about these estimates. The 1996 World Food Summit resulted in an action plan for reducing undernourishment. Included in the plan were a variety of measures for promoting economic, political, and social reforms in developing countries. To reach their goal, summit participants approved an action plan that included 7 broadly stated commitments, 27 objectives, and 181 specific actions (see app. II). Among other things, the plan highlighted the need to reduce poverty and resolve conflicts peacefully. While recognizing that food aid may be a necessary interim approach, the plan encouraged developing countries to become more self-reliant by increasing sustainable agricultural production and their ability to engage in international trade, and by developing or improving social welfare and public works programs to help address the needs of food-insecure people. The plan further noted that governments should work closely with others in their societies, such as nongovernmental organizations (NGO) and the private sector. 
Although the summit action plan is not binding, countries also agreed to (1) review and revise as appropriate national plans, programs, and strategies with a view to achieving food security; (2) establish or improve national mechanisms to set priorities and develop and implement the components of the summit action plan within designated time frames, based on both national and local needs, and provide the necessary resources; and (3) cooperate regionally and internationally in order to reach collective solutions to global issues of food insecurity. They also agreed to monitor implementation of the summit plan, including periodically reporting on their individual progress in meeting the plan’s objectives. The summit placed considerable emphasis on the need for broad-based political, economic, and social reforms to improve food security. For example, summit countries called for the pursuit of democracy, poverty eradication, land reform, gender equality, access to education and health care for all, and development of well-targeted welfare and nutrition safety nets. Other international conferences have suggested that major policy reforms were needed in connection with food security issues. For example, countries that attended the 1974 World Food Conference and the 1979 World Conference on Agrarian Reform and Rural Development said they would undertake major economic, social, and political reforms. According to some observers, the most important challenge of food security today is how to bring about major socio-institutional change in food-insecure countries, since previous efforts have met with limited success. According to other observers, there is a growing acceptance on the part of developing countries that policy reform must be addressed if food security is to be achieved. 
However, reports on progress toward implementing summit objectives that many countries provided to FAO in early 1998 did not contain much information on the extent to which countries have incorporated policy reforms into specific plans for implementing summit objectives. As defined by the summit and others, achieving improved world food security by 2015 is largely an economic development problem; however, the summit did not estimate the total resources needed by developing countries to achieve the level of development necessary to cut their undernutrition in half by 2015, much less assess their ability to finance the process themselves. Many developed countries that attended the summit agreed to try to strengthen their individual efforts toward fulfilling a long-standing U.N. target to provide official development assistance equivalent to 0.7 percent of gross national product each year. However, the countries did not make a firm commitment to this goal, and the United States declined to endorse this target. Assistance from the Organization for Economic Cooperation and Development’s (OECD) Development Assistance Committee members has been declining in recent years—from about $66.5 billion in 1991 to $52.7 billion in 1997 (measured in 1996 dollars). Total official development assistance from these countries in 1997 represented 0.22 percent of their combined gross national product, compared to 0.32 percent during 1990-94. Many developed countries believe that the private sector is a key to resolving the resources problem. Whether the private sector will choose to become more involved in low-income, food-deficit countries may depend on the extent to which developing countries embrace policy reform measures. Private sector resources provided to the developing world have grown dramatically during the 1990s, and by 1997 the private sector accounted for about 75 percent of net resource flows to the developing world, compared to about 34 percent in 1990. 
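The official development assistance figures above can be checked against the 0.7 percent U.N. target with simple arithmetic. The sketch below (Python) uses only the amounts and percentages quoted in this report; the variable names are illustrative:

```python
# Illustrative arithmetic only, using the figures quoted in the report.

un_target = 0.007            # U.N. target: 0.7 percent of gross national product
oda_share_1990_94 = 0.0032   # actual: 0.32 percent of GNP during 1990-94
oda_share_1997 = 0.0022      # actual: 0.22 percent of GNP in 1997

# In 1997, donor countries collectively provided less than one-third
# of the targeted share of gross national product.
fraction_of_target_1997 = oda_share_1997 / un_target   # ~0.31

# In constant 1996 dollars, assistance fell by about a fifth from 1991 to 1997.
oda_1991, oda_1997 = 66.5, 52.7          # $ billions, 1996 dollars
decline = 1 - oda_1997 / oda_1991        # ~0.21, i.e., about a 21 percent drop
```

In other words, actual assistance in 1997 stood at roughly 31 percent of the U.N. target share and about 21 percent below its 1991 level in real terms, which is the gap the summit participants agreed, but did not firmly commit, to narrow.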
However, according to the OECD, due to a number of factors, most of the poorest countries in the developing world have not benefited much from the trend and will need to rely principally on official development assistance for some time to come. (See app. III for additional analysis on official and private sector resource flows to the developing countries.) Among factors that may affect whether the summit’s goal is realized are trade reforms, conflicts, agricultural production, and safety net programs and food aid. Summit participants generally believed that developing countries should increasingly rely on trade liberalization to promote greater food security, and in support of this belief, the summit plan called for full implementation of the 1994 Uruguay Round Trade Agreements (URA). The participants also recognized that trade liberalization may result in some price volatility that could adversely affect the food security situation of poor countries. To help offset these possible adverse effects, the participants endorsed the full implementation of a Uruguay Round decision on measures to mitigate possible negative effects. The summit participants generally acknowledged that the URAs have the potential to strengthen global food security by encouraging more efficient food production and a more market-oriented agricultural trading system. Reforms that enable farmers in developing countries to grow and sell more food can help promote increased rural development and improve food security. Trade reforms that increase the competitiveness of developing countries in nonagricultural sectors can also lead to increased income and, in turn, a greater ability to pay for commercial food imports. However, trade reforms may also adversely affect food security, especially during the near-term transitional period, if such reforms result in an increase in the cost of food or a reduced amount of food available to poor and undernourished people. 
Reforms may also have adverse impacts if they are accompanied by low levels of grain stocks and increased price volatility in world grain markets. The summit plan acknowledged that world price and supply fluctuations were of special concern to vulnerable groups in developing countries. As part of the plan, food exporting countries said they would (1) act as reliable sources of supplies to their trading partners and give due consideration to the food security of importing countries, especially low-income, food-deficit countries; (2) reduce subsidies on food exports in conformity with the URA and in the context of an ongoing process of agricultural reform; and (3) administer all export-related trade policies and programs responsibly to avoid disruptions in world food agriculture and export markets. Also, to mitigate the possible adverse effects of trade reforms on food security situations, the summit plan called for full implementation of a Uruguay Round ministerial decision made in Marrakesh, Morocco, in 1994. Under this decision, signatory nations to the URA agreed to ensure that implementing the trade reforms would not adversely affect the availability of sufficient food aid to assist in meeting the food needs of developing countries, especially the poorest, net food-importing countries. To date, however, agreement has not been reached about the criteria that should be used in evaluating the food aid needs of the countries and whether trade reforms have adversely affected the ability of the countries to obtain adequate supplies of food. While trade liberalization by developing countries was especially encouraged by summit participants, some observers believe that developed countries have been slow in removing their trade barriers and that this may inhibit developing countries from achieving further trade liberalization. 
For example, according to reports by the International Food Policy Research Institute (IFPRI) and the World Bank, member countries of the OECD continue to maintain barriers to free trade that are adversely affecting the means and willingness of developing nations to further liberalize their own markets and to support additional trade liberalization. According to the World Bank, without an open trading environment and access to developed country markets, developing countries cannot benefit fully from producing those goods for which they have a comparative advantage. Without improved demand for developing countries’ agricultural products, for example, the agricultural growth needed to generate employment and reduce poverty in rural areas will not be achieved, the Bank report said. This is critical to food security. If developing countries are to adopt an open-economy agriculture and food policy, they must be assured of access to international markets over the long term, particularly those of the developed nations, according to the Bank. (For a more detailed discussion of these issues, see app. IV.) Officials of the Department of State and the U.S. Department of Agriculture (USDA), however, said that the problem of developed countries’ trade barriers against developing countries is not as severe as portrayed by IFPRI and the World Bank. State acknowledged that there are still some significant barriers to trade but said most barriers are being progressively removed because of the Uruguay Round. In addition, it said, the United States has a number of preferential areas and regimes that favor developing countries and allow most agricultural imports. State said the European Union has similar arrangements. 
USDA officials generally agreed that it is important for developed countries to remove trade barriers but said it is equally important for developing countries to eliminate domestic policies and restrictions on trade that have adversely affected their own economic growth. The price volatility of world food commodities, particularly grains, and its relationship to the level of food reserves, is a key issue related to trade liberalization and a significant problem for food-insecure countries. Views differ over the level of global grain reserves needed to safeguard world food security, the future outlook for price volatility, and the desirability of holding grain reserves. The summit observed that maintaining grain reserves was one of several instruments that countries could use to strengthen food security; however, the summit did not identify a minimum level of global grain reserves needed to ensure food security nor did it recommend any action by countries individually or in concert. Instead, the summit participants agreed to monitor the availability and adequacy of their individual reserve stocks, and FAO agreed to continue its practice of monitoring and informing member nations of developments in world food prices and stocks. FAO, IFPRI, and the World Bank have observed that agricultural markets are likely to be more volatile as the levels of world grain reserves are reduced, an outcome expected as trade reforms are implemented. However, they and other observers have also noted that as a result of trade market reforms, agricultural producers may respond more quickly to rising prices in times of tightening markets, the private sector may hold more reserves than it did when governments were holding large reserves (though not in an amount that would fully replace government stocks), and the increased trade in grains among all nations will help offset a lower level of world grain reserves. 
Some observers believe that most countries, including food-insecure developing countries, are better off keeping only enough reserves to tide them over until they can obtain increased supplies from international markets, since it is costly to hold stocks for emergency purposes on a regular basis and other methods might be available for coping with volatile markets. Others support the view that ensuring world food security requires maintaining some minimum level of global grain reserves and that developed countries have a special responsibility to establish and hold reserves for this purpose. Some have also suggested examination of the feasibility of establishing an international grain reserve. The U.S. position is that governments should pursue at local and national levels, as appropriate, adequate, cost-effective food reserve policies and programs. The United States has opposed creation of international food reserves because of the difficulties that would arise in deciding how to finance, hold, and trigger the use of such reserves. (See app. IV for additional analysis on grain reserves.) The summit countries concluded that conflict and terrorism contribute significantly to food insecurity and declared a need to establish a durable, peaceful environment in which conflicts are prevented or resolved peacefully. According to FAO, many of the countries that had low food security 30 years ago and failed to make progress or even experienced further declines since then have suffered severe disruptions caused by war and political disturbances. Our analysis of data on civil war, interstate war, and genocide in 88 countries between 1960 and 1989 shows a relationship between the incidence of these disturbances and food insecurity at the national level. A sharp rise in international emergency food aid deliveries during the early 1990s has been largely attributed to an increasing number of armed conflicts in different parts of the world. 
Summit countries pledged that they would, in partnership with civil society and in cooperation with the international community, encourage and reinforce peace by developing conflict prevention mechanisms, by settling disputes through peaceful means, and by promoting tolerance and nonviolence. They also pledged to strengthen existing rules and mechanisms in international and regional organizations, in accordance with the U.N. Charter, for preventing and resolving conflicts that cause or exacerbate food insecurity and for settling disputes by peaceful means. The FAO Secretariat analyzed progress reports submitted to FAO by member countries in 1998 and cited several examples of country efforts to support peaceful resolution of domestic and international conflicts. However, the analysis did not provide any overall results on the extent to which countries had made progress in ending already existing violent conflicts and in peacefully resolving or preventing other conflicts. (See app. VI for our analysis on the relationship between conflict and food security.) One objective of the summit was to increase agricultural production and rural development in the developing world, especially in low-income, food-deficit countries. FAO estimates show that achieving the required production increases will require unusually high growth rates in the more food-insecure countries and, in turn, greater investments, especially in the worst-off countries. World Bank officials have said that the Bank is committed to emphasizing rural agricultural development in countries that receive its assistance. Its plan calls for country assistance strategies that treat agriculture comprehensively and include well-defined, coherent, rural strategy components. Despite public statements by the World Bank, there are still differences of opinion within the Bank and among its partners as to the priority that should be given to the rural sector. 
These opinions range from recognizing a positive role for agricultural growth in an overall development strategy, to benign neglect, to a strong urban bias. Achieving needed agricultural production increases will also require other major changes in the rural and agricultural sector and in society more generally. For example, according to the U.S. mission to FAO, the most critical factor affecting progress toward achieving the summit goal is the willingness of food-insecure countries to undertake the kind of economic policies that encourage rather than discourage domestic production in the agricultural sector and their willingness to open their borders to international trade in agricultural products. There must be an “enabling environment,” the mission said, that favors domestic investment and production in the agricultural sector. Moreover, the mission said, these policies are under the control of the food-insecure countries themselves and can have a far greater impact on domestic food security than international assistance. Another issue involving increased agricultural production concerns promotion of modern farming methods, such as chemicals to protect crops, fertilizers, and improved seeds. Agriculture production in developing countries can be substantially improved if such methods are adopted and properly implemented. However, some groups strongly oppose the introduction of such methods because of concerns about the environment. (See app. VII for additional information on this issue.) The summit’s long-term focus is on creating conditions where people have the capability to produce or purchase the food they need, but summit participants noted that food aid—both emergency and nonemergency—could be used to help promote food security. 
The summit plan called upon governments of all countries to develop within their available resources well-targeted social welfare and nutrition safety nets to meet the needs of their food-insecure people and to implement cost-effective public works programs for the unemployed and underemployed in regions of food insecurity. With regard to emergency food aid, the summit plan stated the international community should maintain an adequate capacity to provide such assistance. Nevertheless, this goal has been difficult to implement and, since the summit, some emergency food aid needs have not been met. For example, according to the World Food Program, which distributes about 70 percent of global emergency food aid, approximately 6 percent of its declared emergency needs and 7 percent of its protracted relief operations needs were not satisfied in 1997. Also, donors direct their contributions to emergency appeals on a case-by-case basis, and some emergencies are underfunded or not funded at all. In addition, according to the World Food Program, lengthy delays between appeals and contributions, as well as donors’ practice of attaching specific restrictions to contributions, make it difficult for the World Food Program to ensure a regular supply of food for its operations. In 1998, the program’s emergency and protracted relief operations were underfunded by 18 percent of total needs. Other problems affecting the delivery of emergency food aid include government restrictions on countries to which the food aid can be sent and civil strife and war within such countries. Notable recent examples of countries that have not received sufficient assistance, according to the World Food Program, include North Korea and Sudan, where both situations involve complex political issues that go well beyond the food shortage condition itself. (See app. V for additional information on food aid.) 
Summit participants agreed that an improved food security information system, coordination of efforts, and monitoring and evaluation are actions needed to make and assess progress toward achieving the summit’s goal. Many countries participating in the summit acknowledged that they do not have adequate information on the status of their people’s food security. Consequently, participants agreed that it would be necessary to (1) collect information on the nutritional status of all members of their communities (especially the poor, women, children, and members of vulnerable and disadvantaged groups) to enable monitoring of their situation; (2) establish a process for developing targets and verifiable indicators of food security where they do not exist; (3) encourage relevant U.N. agencies to initiate consultations on how to craft a food insecurity and vulnerability information and mapping system; and (4) draw on the results of the system, once established, to report to CFS on their implementation of the summit’s plan. According to FAO and U.S. officials, improvement in data collection and analysis is necessary if countries are to have reasonably accurate data to design policies and programs to address the problem. However, not much progress has been made in this regard over the past 20 years, and serious challenges remain. A major shortcoming is that agreement has not yet been reached on the indicators to be used in establishing national food insecurity information systems. Following the 1996 summit, an international interagency working group was created to discuss how to create such a system. As of November 1998, the working group had not yet begun to debate, much less decided on, which indicators of food insecurity should be used, and the working group is not scheduled to meet again before the mid-1999 CFS meeting. FAO Secretariat officials told us that a proposal will be ready for the 1999 CFS meeting. 
Thus far, only a few developed and not many more developing countries have participated. (See app. VIII for additional analysis of this issue.) The summit’s action plan incorporates several objectives and actions for improved coordination among all the relevant players. For example, it calls upon FAO and other relevant U.N. agencies, international finance and trade institutions, and other international and regional technical assistance organizations to facilitate a coherent and coordinated follow-up to the summit at the field level, through the U.N.’s resident coordinators, in full consultation with governments, and in coordination with international institutions. In addition, the plan calls on governments, cooperating among themselves and with international institutions, to encourage relevant agencies to coordinate within the U.N. system to develop a food-insecurity monitoring system, and requested the U.N. Secretary General to ensure appropriate interagency coordination. Since the summit, the United Nations, FAO, the World Bank, and others have endorsed various actions designed to promote better coordination. In April 1997, the United States and others expressed concern to FAO about problems related to FAO efforts to help developing countries create strategies for improving their food security. Donor countries noted that nongovernmental groups had not been involved in the preparation of the strategies, even though the summit plan stressed the importance of their active participation. In June 1997, the European Union expressed concern about the uncoordinated nature of food aid, noting that responsibilities were scattered among a number of international organizations and other forums, each with different representatives and agendas. And in October 1997, the World Bank reported that many agricultural projects had failed due to inadequate coordination among the donors and multilateral financial institutions. (See app. 
IX for additional information on the coordination issue.) The summit participants acknowledged the need to actively monitor the implementation of the summit plan. To this end, governments of the countries agreed to establish, through CFS, a timetable, procedures, and standardized reporting formats for monitoring progress on the national, subregional, and regional implementation of the plan. CFS was directed to monitor the implementation of the plan, using reports from national governments, the U.N. system of agencies, and other relevant international institutions, and to provide regular reports on the results to the FAO Council. As previously noted, as of November 1998, a monitoring and evaluation system had not yet been developed to provide reasonably accurate data on the number, location, and extent of undernourished people. In addition, a system had not been created to assess implementation of the various components of the summit’s action plan (that is, 7 broad commitments, 27 major supporting objectives, and 181 supporting actions). Many of these involve multiple activities and complex variables that are not easily defined or measured. Moreover, CFS has requested that the information provided allow for analysis of which actions are or are not successful in promoting summit goals. In April 1997, CFS decided that the first progress reports should cover activities through the end of 1997 and be submitted to the FAO Secretariat by January 31, 1998. Countries and relevant international agencies were to report on actions taken toward achieving the specific objectives under each of the seven statements of commitment. As of March 31, 1998, only 68 of 175 country reports had been received. The Secretariat analyzed the information in the 68 reports and summarized the results in a report to the CFS for its June 1998 session. 
The Secretariat reported it was unable to draw general substantive conclusions because (1) all countries, to varying degrees, were selective in providing the information they considered of most relevance for their reporting; (2) varied emphasis was given to reporting on past plans and programs, ongoing programs, and future plans to improve food security; and (3) the reports did not always focus on the issues involved. Furthermore, some countries chose to provide a report that was more descriptive than analytical, and some countries reported only on certain aspects of food security action, such as food stocks or reserve policies. Prior to the preparation of the progress reports, CFS had not stipulated or suggested any common standards for measuring baseline status and progress with respect to actions, objectives, or commitments. In the absence of common standards, the Secretariat is likely to experience difficulty in analyzing relationships and drawing conclusions about the progress of more than 100 countries. In addition, CFS did not ask countries and agencies to report on planned targets and milestones for achieving actions, objectives, or commitments or on estimated costs to fulfill summit commitments and plans for financing such expenditures. The Secretariat provided the June 1998 CFS session with a proposal for improving the analytical format for future progress reports. CFS did not debate the essential points that should be covered in future reports and instead directed the Secretariat to prepare another proposal for later consideration. Given the complexity of the action plan and other difficulties, CFS also decided that countries will not prepare the next progress report until the year 2000 and will address only half of the plan’s objectives. A progress report on the remaining objectives will be made in 2002. Thus, the second report will not be completed until 6 years after the summit. Third and fourth sets of progress reports are to be prepared in 2004 and 2006. 
Under the summit plan, countries also agreed to encourage effective participation of relevant civil society actors in the monitoring process, including those at the CFS level. In April 1997, CFS decided to examine this issue in detail in 1998. However, the issue was not included in the provisional agenda for the June 1998 session. Detailed discussion of proposals by Canada and the United States on the issue was postponed until the next CFS session in 1999. The postponement occurred as a result of opposition by many developing country governments to an increased role for NGOs in CFS. (See app. X for additional analysis of this issue.) The Department of State, USDA, FAO, and the World Food Program provided oral comments and USAID provided written comments on a draft of this report. They generally agreed with the contents of the report. State emphasized the important role that broad-based policy reforms play in helping developing countries address food insecurity and suggested that our report further highlight this factor. We agree with State on this matter, and have reemphasized the need for developing countries to initiate appropriate policy reforms as a prelude to addressing food security issues. State and USDA officials also commented that in their opinion, the World Bank and IFPRI overstated the effect of developed countries’ trade barriers on the food insecurity of least-developed countries. We have modified the report to reflect State’s and USDA’s views on this matter more fully. USAID said that, although an unfortunate circumstance, it believes the level of effort by donor and developing countries will probably fall short of achieving the summit’s goal of reducing chronic global hunger by one-half. While we cannot quantify the extent to which developing countries may fall short, we tend to agree with USAID’s observation. USAID’s comments are reprinted in appendix XII. 
FAO officials said the report’s general tone of skepticism was justified based on the past record and reiterated that reducing by one-half the number of undernourished people by 2015 requires a change in priorities by countries along the lines spelled out in the summit action plan. They also said that work was underway to further investigate the extent to which the target is feasible at the national level in those countries facing political instability or with a high proportion of undernourished people. FAO officials said that our discussion in appendix IX of coordination issues concerning FAO’s Special Program for Food Security and a Telefood promotion did not reflect FAO members’ support for these initiatives. We provided additional information on the initiatives to reflect FAO’s views (see app. IX). World Food Program officials said food aid for nonemergency and developmental purposes is more effective than is suggested by the discussion in our report. However, the officials did not identify any studies or analysis to support the Program’s position that food aid constitutes an efficient use of assistance resources. The World Food Program said that it has acted on recommendations for improving its operations, and we modified the report to reflect the World Food Program’s views. However, it is important to note that a recent USAID study on the use of food aid in contributing to sustainable development concluded that while food aid may be effective, it is less efficient than financial assistance, although the report pointed out that financial aid is often not available. World Food Program officials acknowledged that important issues remain unresolved concerning establishment of an international database on food insecurity. All of the above agencies and the Department of Health and Human Services also provided technical comments that were incorporated into the report where appropriate. We are sending copies of this report to Senator Joseph R. Biden, Senator Robert C. 
Byrd, Senator Pete V. Domenici, Senator Jesse Helms, Senator Frank R. Lautenberg, Senator Patrick J. Leahy, Senator Joseph I. Lieberman, Senator Mitch McConnell, Senator Ted Stevens, and Senator Fred Thompson, and to Representative Dan Burton, Representative Sonny Callahan, Representative Sam Gejdenson, Representative Benjamin A. Gilman, Representative John R. Kasich, Representative David Obey, Representative Nancy Pelosi, Representative John M. Spratt, Representative Henry A. Waxman, and Representative C. W. Bill Young. We are also sending copies of this report to the Honorable Dan Glickman, Secretary of Agriculture; the Honorable William M. Daley, Secretary of Commerce; the Honorable William S. Cohen, Secretary of Defense; the Honorable Donna E. Shalala, Secretary of Health and Human Services; the Honorable Madeleine K. Albright, Secretary of State; the Honorable Robert E. Rubin, Secretary of the Treasury; the Honorable J. Brian Atwood, Administrator, Agency for International Development; the Honorable Carol M. Browner, Administrator, Environmental Protection Agency; the Honorable George J. Tenet, Director, Central Intelligence Agency; the Honorable Jacob J. Lew, Director, Office of Management and Budget; the Honorable Samuel R. Berger, National Security Adviser to the President; and the Honorable Charlene Barshefsky, U.S. Trade Representative. Copies will also be made available to others upon request. If you or your staff have any questions about this report, please contact me at (202) 512-4128. The major contributors to this report are listed in appendix XIII. Although the problem of food insecurity is widespread in the developing world, the total number of undernourished people is unknown, and estimates vary widely. An accurate assessment of the number of people with inadequate access to food would require data from national sample surveys designed to measure both the food consumption and the food requirements of individuals. 
Such studies may include a dietary survey and a clinical survey that involves anthropometric, or body, measurements, and biochemical analyses. According to the Food and Agriculture Organization (FAO), clinical and anthropometric examinations are the most practical and sound means of determining the nutritional status of any particular group of individuals in most developing countries in Africa, Asia, and Latin America because the countries lack vital statistics, accurate figures on agricultural production, and laboratories where biochemical tests can be performed. However, clinical examinations have often been given a low priority by developing countries, and studies of anthropometric measurements have been undertaken very infrequently. National dietary intake surveys are costly and time-consuming and have also been undertaken in very few countries. As a result, there are no internationally comparable, comprehensive survey data for tracking changes in undernutrition for individuals and population groups within countries, according to FAO. For many years FAO has employed a method to estimate the prevalence of chronic undernourishment at the country level that is subject to a number of weaknesses. Nevertheless, FAO estimates are frequently cited in the absence of better estimates. FAO uses (1) food balance sheets that estimate the amount of food available to each country over a 3-year period and (2) estimates of each country’s total population to calculate the average available per capita daily supply of calories during that period. FAO then estimates the minimum average per capita dietary requirements for the country’s population, allowing for only light physical activity. Then, in combination with an estimate of inequality in the distribution of food among households in the country, it derives the percentage distribution of the population by per capita calorie consumption classes. 
On the basis of this distribution and a cutoff point for food inadequacy based on the estimate of the minimum average per capita dietary energy requirements, the proportion of undernourished is estimated. This is then multiplied by an estimate of the size of the population to obtain the absolute number of undernourished. According to FAO, a minimum level of energy requirements is one that allows for only light physical activity. Depending on the country, FAO says, the minimum level of energy requirements for the average person ranges from 1,720 to 1,960 calories per day. Depending on data availability, FAO’s assessment of equitable food distribution for a country is based on survey data on household food energy intake, food expenditure, total income or expenditure, and/or the weighted average of estimates for neighboring countries. FAO’s method has a number of weaknesses, and the validity of its estimates has not been established. For example, FAO’s food supply figures are based on 3-year averages, and population estimates are for the midpoint of the reference period used. As a result, FAO’s estimates of the prevalence of undernutrition do not reflect the short-term, seasonal variations in food production or availability in countries. In addition, FAO’s method relies on total calories available from food supplies and ignores dietary deficiencies that can occur due to the lack of adequate amounts of protein and essential micronutrients (for example, vitamins essential in minute amounts for growth and well-being). FAO’s method for measuring inequality in food distribution or access is ideally based on food consumption data from household surveys, but the number of developing countries for which such data are available is limited, and the surveys may not be national in scope or may have been done infrequently. FAO uses these data to estimate parameters for countries for which data are not available. 
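FAO's estimation procedure described above can be illustrated with a short calculation. The sketch below is a simplification, not FAO's actual implementation: it assumes per capita calorie intake follows a lognormal distribution whose mean is the average dietary energy supply and whose coefficient of variation stands in for FAO's household-distribution estimate, and all country figures used are hypothetical.

```python
import math

def undernourished_estimate(per_capita_supply_kcal, cutoff_kcal, cv, population_millions):
    """Illustrative FAO-style estimate of the undernourished population.

    Models per capita calorie intake as lognormal with mean equal to the
    average dietary energy supply and a coefficient of variation (cv)
    standing in for inequality of access; the share of the distribution
    below the minimum-requirement cutoff is the estimated prevalence.
    """
    # Lognormal parameters recovered from the mean and CV.
    sigma2 = math.log(1.0 + cv * cv)
    sigma = math.sqrt(sigma2)
    mu = math.log(per_capita_supply_kcal) - sigma2 / 2.0
    # P(intake < cutoff): lognormal CDF via the standard normal CDF.
    z = (math.log(cutoff_kcal) - mu) / sigma
    prevalence = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return prevalence, prevalence * population_millions

# Hypothetical country: 2,300 kcal/day average supply, a cutoff of 1,850
# kcal/day (within FAO's 1,720-1,960 range), CV of 0.30, 100 million people.
prev, undernourished = undernourished_estimate(2300, 1850, 0.30, 100)
```

With these illustrative inputs the estimated prevalence is roughly a quarter of the population; raising either the cutoff or the inequality parameter raises the estimate, which is one reason the choice of minimum energy requirement matters so much in the FAO and USDA figures discussed in this appendix.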
FAO acknowledges that the quality and reliability of data relating to food production, trade, and population vary from country to country and that for many developing countries the data are either inaccurate or incomplete. According to one critic of FAO’s method, FAO’s estimates are unreliable indicators of the scope of the undernutrition problem and erroneously find chronic undernutrition to be most prevalent in Africa. The main reasons for the latter finding are systematic bias in the methods African countries use to estimate food production and, to a lesser extent, incomplete coverage of certain minor food items in FAO’s food balance sheets. The author concludes that anthropometric measurements, based as they are on measurements of individuals, would be a more promising method for future estimates of undernourishment than estimates based on FAO’s aggregate approach. FAO’s method does not provide information on the effects of chronic undernourishment (for example, the prevalence of growth retardation and specific nutritional deficiencies), does not specify where the chronically undernourished live within a country, and does not identify the principal causes of their undernutrition. According to FAO and other experts, such information is needed to develop effective policies and programs for reducing undernourishment. In addition, FAO does not provide estimates for developed countries and does not provide estimates of chronic undernutrition of less than 1 percent. Overall, according to FAO, its estimates of food availability and/or the prevalence of undernutrition for many countries are subject to errors of unknown magnitude and direction. Nonetheless, FAO believes that its estimates permit one to know generally in which countries undernutrition is most acute. 
According to FAO, the consensus of a group of experts that it consulted in March 1997 was that (1) despite the deficiencies of its method, FAO currently had no substitute for its food balance sheets, based on per capita food availability and distribution, for assessing chronic undernutrition; (2) FAO’s approach tends to consistently underestimate per capita food availability in African countries because of its inadequate coverage of noncereal crops; (3) attention needs to be given not just to indications of severe malnutrition but also to mild and moderate malnutrition; and (4) more subregional information is needed on malnutrition and on local levels of food stocks and trade, wages and market conditions, and household perceptions of medium-term food insecurity. It was also argued that about 67 percent of child deaths are associated with children whose malnutrition is not clinically apparent. In analyses for the World Food Summit, FAO estimated that about 840 million people in 93 developing countries were chronically undernourished during 1990-92. These countries represented about 98.5 percent of the population in all developing countries. According to the FAO estimates, a relatively small number of countries account for most of the chronically undernourished in the 93 countries (see table I.1). For example, during 1990-92, China and India were estimated to have about 189 million and 185 million chronically undernourished, respectively; collectively, they had nearly 45 percent of the total for all 93 countries. Five countries—Bangladesh, Ethiopia, Indonesia, Nigeria, and Pakistan—accounted for between 20 million and 43 million chronically undernourished each. The next 13 countries represented between about 6 million and 17 million of the chronically undernourished. Altogether, the 20 countries accounted for about 679 million, or nearly 81 percent, of the undernourished in the 93 countries. 
As table I.2 shows, great variation also characterizes the extent to which chronic undernutrition is a problem within countries. According to FAO figures, a majority of the countries were estimated to have chronically undernourished people at a rate ranging between 11 and 40 percent in 1990-92, and 19 had rates ranging between 41 and 73 percent. Table I.3 provides estimates of the number of undernourished people in developing country regions of the world between 1969-71 and 1994-96. (The figures include FAO revised estimates for the periods prior to 1994-96. As a result, the total for 1990-92 is slightly lower than that shown in tables I.1 and I.2.) FAO’s estimates indicate that the developing world as a whole made considerable progress in reducing the level of chronic undernourishment between 1969-71 and 1990-92, from an estimated 37 percent of the total population to 20 percent. However, the absolute number of undernourished was reduced by only 14.3 percent during the period—from 959 million to about 822 million—because the total population of the developing world increased by nearly 1.5 billion people during that time. Also, a large number of states did so poorly that their chronically undernourished people increased both absolutely and as a percentage of their total population. Between 1990-92 and 1994-96, the proportion of undernourished people in the developing world declined another 1 percent, but the number of undernourished increased by about 6 million people. Although the percentage of chronically undernourished people in the developing world was considerably reduced between 1969-71 and 1994-96, sub-Saharan Africa’s reduction was very small. 
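The figures just cited are mutually consistent, as a quick arithmetic check shows. The sketch below back-calculates the implied developing-world population totals from the reported counts and prevalence rates; these totals are derived for illustration and are not FAO's published population figures.

```python
# FAO estimates of chronically undernourished people (millions) and
# prevalence rates for the developing world as a whole.
undernourished_early, undernourished_late = 959, 822   # 1969-71, 1990-92
share_early, share_late = 0.37, 0.20

# Implied total developing-world population in each period (millions).
pop_early = undernourished_early / share_early   # about 2,592
pop_late = undernourished_late / share_late      # 4,110
pop_growth = pop_late - pop_early                # about 1,518: "nearly 1.5 billion"

# Absolute reduction in the number of undernourished over the period.
reduction = (undernourished_early - undernourished_late) / undernourished_early
```

The reduction works out to about 14.3 percent, matching the figure in the text: a 17-percentage-point drop in prevalence translated into only a modest drop in absolute numbers because the population base grew so much.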
According to FAO’s estimates, in 1994-96 the proportion of sub-Saharan Africa’s population that was undernourished greatly exceeded that of the other regions of the world. However, the largest numbers of undernourished persons were still found in East and Southeast Asia and in South Asia. A 1997 U.S. Department of Agriculture (USDA) Economic Research Service (ERS) study employed an alternative indirect method for estimating the amount of undernutrition at the country level that is similar to FAO’s method in some respects. Like FAO, ERS estimates food availability within a country. It also adopts a minimum daily caloric intake standard necessary to sustain life with minimum food-gathering activities. However, the standard is higher than that used by FAO (for light physical activity)—ranging between about 2,000 and 2,200 calories per day, depending on the country. According to ERS, its standard is comparable to the activity level for a refugee; it does not allow for play, work, or any activity other than food gathering. ERS estimates how inequality affects the distribution of available food supplies based on consumption or income distribution data for five different groups of the population. Like FAO’s estimate, ERS’ estimate is highly dependent on the availability and quality of national-level data. In 1997, ERS used its method to estimate the number of undernourished in 58 of the 93 developing countries regularly reported on by FAO. ERS estimated that during 1990-92, about 1.038 billion people could not meet their nutritional requirements—nearly 200 million more than FAO’s estimate of 839 million people for 93 countries. FAO’s data for the same 58 countries indicate 574 million chronically undernourished, about 45 percent less than USDA’s estimate. One reason for the much larger estimates resulting from the USDA approach is the higher standard used for minimum energy requirements, as previously noted. 
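The "about 45 percent" comparison between the two agencies' figures can be verified directly; this small check uses only the numbers cited above.

```python
# Estimates of chronically undernourished people during 1990-92 for the
# same 58 developing countries (millions).
ers_estimate = 1038   # USDA/ERS, using the higher caloric standard
fao_estimate = 574    # FAO, using the light-physical-activity standard

# FAO's figure relative to the ERS figure: roughly 45 percent lower.
shortfall = (ers_estimate - fao_estimate) / ers_estimate
```

The gap of 464 million people between the two estimates for the same countries underscores how sensitive these indirect methods are to the choice of minimum energy requirement.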
Another important source of data on the status of food security in the developing world is the World Health Organization’s global database on growth in children under age 5. Since 1986, the World Health Organization has sought to assemble and systematize the results of representative anthropometric surveys conducted in different parts of the world. The data indicate that about 2 out of 5 children in the developing world are stunted (low height for age), 1 out of 3 are underweight (low weight for age), and 1 out of 11 are wasted (low weight for height). In absolute numbers, the estimates for 1990 are 230 million stunted children, 193 million underweight, and 50 million wasted under the age of 5. According to the U.N. Children’s Fund, more than 6 million children in developing countries die each year from causes either directly or indirectly tied to malnutrition. The 185 countries that attended the World Food Summit pledged their actions and support to implement a plan of action for reducing food insecurity. The plan includes 7 major commitments, 27 subordinate objectives, and 181 specific actions. The commitments, subordinate objectives, and 24 of the specific actions relating to a variety of objectives are summarized in table II.1.

Table II.1: Commitments, Objectives, and Select Examples of Actions in the World Food Summit’s Plan of Action

Commitment 1: Ensure an enabling political, social, and economic environment designed to create the best conditions for the eradication of poverty and for durable peace, based on full and equal participation of men and women.
- Prevent and resolve conflicts peacefully and create a stable political environment through respect for all human rights and fundamental freedoms, democracy, a transparent and effective legal system, transparent and accountable governance and administration in all public and private national and international institutions, and effective and equal participation of all people in decisions and actions that affect their food security. 
- Ensure stable economic conditions and implement development strategies that encourage the full potential of private and public initiatives for sustainable, equitable, economic, and social development that also integrate population and environmental concerns.
- Establish legal and other mechanisms that advance land reform and promote the sustainable use of natural resources.
- Ensure gender equality and empowerment of women.
- Promote women’s full and equal participation in the economy.
- Encourage national solidarity and provide equal opportunities for all in social, economic, and political life, particularly vulnerable and disadvantaged people.
- Support investment in human resource development, such as health, education, and other skills essential to sustainable development.

Commitment 2: Implement policies aimed at eradicating poverty and inequality and improving physical and economic access by all.
- Pursue poverty eradication and food sustainability for all as a policy priority and promote employment and equal access to resources, such as land, water, and credit, to maximize incomes of the poor.
- Promote farmers’ access to genetic resources for agriculture.
- Enable the food insecure to meet their food and nutritional requirements and seek to assist those unable to do so.
- Develop national information and mapping systems to identify localized areas of food insecurity and vulnerability.
- Implement cost-effective public works programs for the underemployed.
- Develop targeted welfare and nutrition safety nets.
- Ensure that food supplies are safe, physically and economically accessible, appropriate, and adequate to meet the needs of the food insecure.
- Promote access to education and health care for all. 
Commitment 3: Pursue participatory and sustainable food, agriculture, fisheries, forestry, and rural development policies and practices, in areas with low as well as high potential, that are essential for adequate and reliable food supplies at the household, national, regional, and global levels and combat pests, drought, and desertification.
- Pursue, through participatory means, sustainable, intensified, and diversified food production, and increased productivity and efficiency and reduced losses, taking into account the need to sustain resources.
- Combat environmental threats to food security, in particular droughts and desertification, pests, and erosion of biological diversity, and restore the natural resource base, including watersheds, to achieve greater production.
- Promote sound policies and programs on the transfer and use of technologies, skills development, and training for food security needs.
- Strengthen and broaden research and scientific cooperation on agriculture, fisheries, and forestry to support policy and international, national, and local actions to increase productive potential and maintain the natural resource base in agriculture, fisheries, and forestry and in support of efforts to eradicate poverty and promote food security.
- Formulate and implement integrated rural development strategies, in high and low potential areas, that promote employment, skills, infrastructure, institutions, and services in support of food security.
- Strengthen local government institutions in rural areas and provide them with adequate resources, decision-making authority, and mechanisms for grassroots participation.
- Promote the development of rural banking, credit, and savings schemes, including equal access to credit for men and women, microcredit for the poor, and adequate insurance mechanisms.

Commitment 4: Strive to ensure that food, trade, and overall trade policies are conducive to fostering food security for all through a fair and market-oriented world trade system. 
Use the opportunities arising from the international trade framework established in recent global and regional trade negotiations. Establish well-functioning internal marketing and transportation systems to facilitate local, national, and international trade. Meet essential food import needs in all countries, considering world price and supply fluctuations and taking into account food consumption levels of vulnerable groups in developing countries. Food-exporting countries should act as reliable sources of supplies to their trading partners and give due consideration to the food security of importing countries. Reduce subsidies on food exports in conformity with the Uruguay Round Agreements. Support the continuation of the reform process in conformity with the Uruguay Round Agreements. Endeavor to prevent and be prepared for natural disasters and man-made emergencies and meet transitory and emergency food requirements in ways that encourage recovery, rehabilitation, and development of a capacity to satisfy future needs. Reduce demands for emergency food assistance through efforts to prevent and resolve man-made emergencies, particularly international, national, and local conflicts. Establish as quickly as possible prevention and preparedness strategies for low-income, food-deficit countries and areas vulnerable to emergencies. Improve or develop efficient and effective emergency response mechanisms at international, regional, national, and local levels. Strengthen links between relief operations and development programs to facilitate the transition from relief to development. Promote optimal allocation and use of public and private investments to foster human resources, sustainable food and agricultural systems, and rural development. 
Create the policy framework and conditions that encourage optimal public and private investments in the equitable and sustainable development of food systems, rural development, and human resources necessary to contribute to food security. Endeavor to mobilize and optimize the use of technical and financial resources from all sources, including debt relief, to raise investment in sustainable food production in developing countries. Raise sufficient and stable funding from private, public, domestic, and international sources to achieve and sustain food security. Strengthen efforts towards the fulfillment of the agreed official development assistance target of 0.7 percent of the gross national product. Focus official development assistance (ODA) toward countries that have a real need for it, especially low-income countries. Explore ways of mobilizing public and private financial resources for food security through the appropriate reduction of excessive military expenditures. Implement, monitor, and follow up the summit plan of action at all levels in cooperation with the international community. Adopt actions within each country’s national framework to enhance food security and enable implementation of the commitments of the World Food Summit plan of action. Review and revise, as appropriate, national plans, programs, and strategies to achieve food security consistent with summit commitments. Establish or improve national mechanisms to set priorities and develop, implement, and monitor the components of action for food security within designated time frames. In collaboration with civil society, formulate and launch national food-for-all campaigns to mobilize all stakeholders and their resources in support of the summit plan of action. Actively encourage a greater role for, and alliance with, civil society. 
Improve subregional, regional, and international cooperation and mobilize and optimize the use of available resources to support national efforts for the earliest achievement of sustainable food security. Continue the coordinated follow-up by the U.N. system to the major U.N. conferences and summits since 1990; reduce duplication and fill in gaps in coverage, making concrete proposals for strengthening and improving coordination with governments. Relevant international organizations are invited, on request, to assist countries in reviewing and formulating national plans of action, including targets, goals, and timetables for achieving food security. Actively monitor the implementation of the summit plan of action. Establish, through FAO’s Committee on Food Security, a timetable, procedures, and standardized reporting formats for national and regional implementation of the summit plan of action. Monitor, through the Committee on Food Security, implementation of the summit plan of action. Clarify the right to adequate food and the fundamental right of everyone to be free from hunger, as stated in the International Covenant on Economic, Social, and Cultural Rights and other relevant international and regional instruments. Share responsibilities for achieving food security for all so that implementation of the summit plan of action takes place at the lowest possible level at which its purpose is best achieved. As defined by the countries at the summit, achieving improved world food security by 2015 is largely a development problem, the primary responsibility for attaining food security rests with individual countries, ODA could be of critical importance to countries and sectors left aside by other external sources of finance, and developing country governments should adopt policies that promote foreign and direct investment and effective use of ODA. There is a growing body of evidence that foreign financial aid works well in a good policy environment. 
For example, according to a recent World Bank report, financial assistance leads to faster growth, poverty reduction, and gains in social indicators with sound economic management. With sound country management, the report said, 1 percent of gross domestic product in assistance translates into a 1 percent decline in poverty and a similar decline in infant mortality. The report concluded that improvements in economic institutions and policies in the developing world are the key to a quantum leap in poverty reduction and that effective financial aid complements private investment. Conversely, financial aid has much less impact in a weak policy environment. The report’s conclusions are consistent with the approach espoused by the summit. For example, according to the summit countries, a sound policy environment in which food-related investment can fulfill its potential is essential. More specifically, summit participants said governments should provide an economic and legal framework that promotes efficient markets that encourage private sector mobilization of savings, investment, and capital formation. In addition, the participants said that the international community has a role to play in supporting the adoption of appropriate national policies and, where necessary and appropriate, in providing technical and financial assistance to assist developing countries in fostering food security. Table III.1 shows, as could be expected, that a majority of the more food-insecure countries are low-income countries and many of them are also least developed. Of 93 developing countries reported on in the table, 72 had inadequate food supplies in 1990-92. Forty-six of the countries were low income (that is, they had a gross national product per capita of less than $766), and 34 of the 46 countries were designated as “least developed,” meaning they were the poorest countries in the world. 
Together, the 46 countries accounted for more than 700 million of the chronically undernourished people in developing countries in 1990-92.

Table III.1: Relationship Between Income Levels of Developing Countries and Food Security (countries grouped by income level, including a least developed, low income category; table data not reproduced). Note: The average is based on available food supply at the country level. We designated countries as having inadequate or adequate daily per capita energy supplies based on an FAO analysis of the relationship between average per capita daily energy supplies and chronic undernutrition. According to FAO, for countries having an average daily per capita undernutrition threshold ranging between 1,750 calories and 1,900 calories and a moderate level of unequal food distribution, between 21 percent and 33 percent of the population will be below the undernutrition threshold if the average per capita daily energy supply is 2,100 calories. If the average per capita daily energy supply is 2,400 calories, 7 to 13 percent of the population will be undernourished. At 2,700 calories, 2 to 4 percent of the population will be undernourished. If food is distributed more equitably, the percentage of the population that is undernourished decreases, and vice versa.

Table III.2 shows that between 1990 and 1997, Organization for Economic Cooperation and Development (OECD) Development Assistance Committee countries’ allocation of ODA averaged $60.9 billion (1996 prices and exchange rates). However, ODA has been steadily declining, from a high of $66.5 billion in 1991 to $52.7 billion in 1997.

Table III.2: Total Net Resource Flows From OECD Development Assistance Committee Countries and Multilateral Agencies to Aid Recipient Countries, 1990-97 (dollars in billions, 1996 prices and exchange rates; table data not reproduced). Note: Excludes forgiveness of nonofficial development assistance debt for the years 1990-92. 
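The FAO relationship described above, between average per capita daily energy supply and the share of a population below the undernutrition threshold, can be sketched as a simple lookup. This is an illustration only, not FAO's actual model; the data structure and function name are our own.

```python
# Illustrative lookup (not FAO's model) for the relationship the report
# cites: share of population below the undernutrition threshold at given
# average per capita daily energy supplies, assuming a moderate level of
# unequal food distribution.
BANDS = [
    (2100, (21, 33)),  # 2,100 kcal/day -> 21-33 percent undernourished
    (2400, (7, 13)),   # 2,400 kcal/day -> 7-13 percent
    (2700, (2, 4)),    # 2,700 kcal/day -> 2-4 percent
]

def undernourished_range(kcal_per_day):
    """Return the (low, high) percent band for the nearest cited supply level."""
    _, band = min(BANDS, key=lambda b: abs(b[0] - kcal_per_day))
    return band

print(undernourished_range(2400))  # (7, 13)
```

As the report notes, more equitable food distribution shifts these bands downward, so a lookup like this only approximates the moderate-inequality case.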
For many years, OECD’s Development Assistance Committee (DAC) has supported a target of providing ODA equivalent to 0.7 percent of the gross national product. This goal was reaffirmed by most DAC countries at the World Food Summit. As table III.3 shows, since the early 1980s ODA as a percent of the gross national product has declined for most DAC countries, including the five largest providers (France, Germany, Japan, the United Kingdom, and the United States). Only four countries met the ODA target in 1997 (Denmark, Norway, the Netherlands, and Sweden), and they represent a small amount of the ODA provided by the DAC countries. For the DAC countries in total, ODA represented 0.34 percent of their combined gross national product during 1980-84 and only 0.22 percent in 1997. Most countries’ ODA in 1997 ranged between only 0.22 percent and 0.36 percent of their gross national product. The United States was the lowest, contributing only 0.08 percent of its gross national product, or about one-ninth of the DAC target.

Table III.3: ODA Performance of OECD DAC Countries, 1980-97 (columns include 1997 ODA in dollars (billions) and the target amount of 0.7 percent of GNP; table data not reproduced).

The United States has never approved the ODA target. According to U.S. government officials, the government has no plans to try to meet the target. Apart from ODA, the United States devotes substantial resources to promoting global peace through its participation in a variety of strategic alliances, such as the North Atlantic Treaty Organization, and maintenance of the world’s most sophisticated defense forces. U.S. expenditures on ODA and defense combined in 1995 represented 3.9 percent of the U.S. gross national product—a higher percentage than that for any other DAC country. (The average for all other DAC countries was 2.4 percent, with a range from 1.1 percent for Luxembourg to 3.6 percent for France.) 
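The "about one-ninth" figure follows directly from the two percentages the report cites; a quick arithmetic check (illustrative only, variable names are ours):

```python
# Check of the ODA figures cited above (percentages of GNP from the report).
DAC_TARGET_PCT = 0.7    # DAC target: ODA equal to 0.7 percent of GNP
US_ODA_PCT_1997 = 0.08  # U.S. ODA in 1997: 0.08 percent of GNP

shortfall = DAC_TARGET_PCT / US_ODA_PCT_1997
print(round(shortfall, 2))  # 8.75, i.e. roughly one-ninth of the target
```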
According to the OECD, reasons for the decline include the end of the Cold War, which removed a traditional and well-understood security rationale for development assistance; preoccupation with domestic issues and budgetary pressures in some donor countries; and fiscal restraint policies that have included disproportionate cuts in development assistance budgets. In June 1998, the OECD reported that fiscal restraint programs had succeeded in reducing OECD public deficits from 4.3 percent of combined gross domestic product in 1993 to 1.3 percent in 1997. The OECD said that the continuing decline in ODA ran counter to the widespread improvements in the economic and budgetary situations of the DAC member countries and to their clearly stated policy goals for increasing ODA. According to a June 1998 report by FAO (based on information provided by only some of the DAC countries), Ireland plans to increase its ODA to 0.45 percent of its gross national product by 2002 (compared to 0.31 percent in 1997); Switzerland plans to increase its ODA to 0.45 percent of its gross national product (from 0.32 percent in 1997), but the year for reaching this level was not cited; and Norway seeks to raise its assistance to 1 percent of gross national product by the year 2000 (compared to 0.86 percent in 1995). As table III.2 shows, private sector resource flows applied to the developing world have grown dramatically during the 1990s, from $52.4 billion in 1990 to about $286 billion in 1996 (1996 prices and exchange rates), although private flows declined in 1997 to an estimated $222 billion. Although the flow of private resources has increased considerably, the vast majority of the world’s poorest countries continue to rely heavily on official development financing. 
According to the OECD and the World Bank, with some exceptions, these countries are as yet unable to tap significant, sustainable amounts of private capital; without official assistance, these countries’ progress toward financial independence will be slow and difficult. One measure of the difficulty of attracting private investment to the most food-insecure countries and peoples is shown in table III.4. The table relates creditworthiness ratings of the risk of investing in 92 developing countries to the level of their food security. The ratings are from Euromoney, a leading international publication, which assigns ratings as a weighted average of indicators of economic performance, political risk, debt, credit, access to bank finance, short-term trade finance, and capital. Ratings range from a possible low of 0 points (poorest rating) to a possible high of 100 points (most favorable rating). As shown in the table, we grouped countries into four category ranges—0 to 25, 26 to 50, 51 to 75, and 76 to 100 points. The large majority of countries with inadequate average daily calories per capita had a creditworthiness rating of less than 51 points. Only 2 of the 71 countries with inadequate food availability received a creditworthiness rating of more than 75 points. As the table also shows, 358 million chronically undernourished people lived in countries that received a creditworthiness rating of less than 51 points, and another 459 million undernourished people lived in countries that received ratings between 51 and 75 points.

Table III.4: Creditworthiness Ratings and Level of Food Security in Developing Countries (table data not reproduced). Note: The average is based on available food supply at the country level. We designated countries as having inadequate or adequate daily per capita energy supplies based on an FAO analysis of the relationship between average per capita daily energy supplies and chronic undernutrition. 
According to FAO, for countries having an average daily per capita undernutrition threshold ranging between 1,750 calories and 1,900 calories and a moderate level of unequal food distribution, between 21 percent and 33 percent of the population will be below the undernutrition threshold if the average per capita daily energy supply is 2,100 calories. If the average per capita daily energy supply is 2,400 calories, 7 to 13 percent of the population will be undernourished. At 2,700 calories, 2 to 4 percent of the population will be undernourished. If food is distributed more equitably, the percentage of the population that is undernourished decreases, and vice versa. The World Food Summit identified trade as a key element for improving world food security and urged countries to meet the challenges of and seize opportunities arising from the 1994 Uruguay Round Trade Agreements (URA). According to the summit plan of action, the progressive implementation of the URA as a whole will generate increasing opportunities for trade expansion and economic growth to the benefit of all participants. The summit action plan encouraged developing countries to establish well-functioning internal marketing and transportation systems to facilitate better links within and between domestic, regional, and world markets and to further diversify their trade. The ability of developing countries to do so depends partly on steps taken by developed countries to further open their domestic markets. Food-insecure countries have concerns about possible adverse effects of trade reforms on their food security and about price volatility in global food markets, particularly in staple commodities such as grains. Trade liberalization can positively affect food security in several ways. It allows food consumption to exceed food production in those countries where conditions for expanding output are limited. 
Food trade has an important role to play in stabilizing domestic supplies and prices; without trade, domestic production fluctuations would have to be borne by adjustments in consumption and/or stocks. Trade allows consumption fluctuations to be reduced and relieves countries of part of the burden of stockholding. Over time, more liberal trade policies can contribute to economic growth and broaden the range and variety of foods available domestically. However, during the negotiations leading up to the URAs and since then, concerns have been raised about possible adverse impacts of trade liberalization on developing countries’ food security, especially low-income, food-deficit countries. These concerns relate to impacts on food prices, the ability of the developing countries to access developed countries’ markets, food aid levels, and global grain reserves. For example, FAO said that future levels of food aid might be adversely affected, since historically food aid volumes had been closely linked to the level of surplus stocks, and future surplus stocks could be low. FAO also expressed concern that if grain stocks fell to low levels, trade liberalization measures might be less effective in stabilizing world cereal market prices. In 1995, FAO estimated that the effects of the URAs would likely cause a sizable increase in the food import bills of developing countries. For the low-income, food-deficit countries as a whole, FAO projected the food import bill would be 14 percent higher in the year 2000 (about $3.6 billion) as a result of the URAs. However, a World Bank study, issued at about the same time, estimated very modest price increases for most major traded commodities and concluded the changes would have a very minor impact on the welfare of the developing countries. Some more recent studies have also indicated that the impact of the URAs on international food and agricultural prices will be very limited. 
The authors of one study estimated that grains and livestock product prices will increase by only about 2 to 5 percent by 2005 and concluded that the small increases are not expected to offset a long-term declining trend in food prices. Table IV.1 reports the results of two models that estimated the income effects resulting from reforms in the agricultural sector alone and economywide. Despite the delicate nature of modeling complex trade agreements, both models projected positive economy-wide benefits (from 0.29 percent to 0.38 percent of the base gross domestic product for developing countries as a whole). For agricultural reform alone, one model projected negative benefits and the other positive benefits for developing countries as a whole. Both models projected that Africa and the Near East would experience negative benefits from agricultural reform alone. The study that cited the results concluded that further work was needed to reconcile differences between the various assessments before firm policy recommendations could be made. Elsewhere, FAO commented that studies modeling the impact of the URAs typically cover only the parts of the agreement that are more amenable to quantification. In FAO’s view, estimates of the URA trade and income gains from the increase in market access for goods underestimate the full benefits of the agreement on world trade and income.

Notes to table IV.1 (table data not reproduced): Benefits are expressed as economy-wide reform as a percent of base gross domestic product. Legend: UR = Uruguay Round. In FMN, the Near East region is covered under Africa. Members include Iceland, Liechtenstein, Norway, and Switzerland; Austria, Finland, and Sweden left the association in January 1995.

According to some observers, the most important thing that developed countries can do to help food-insecure countries is to open their own markets to developing country exports. 
Market access is important not only in primary commodities but also in clothing, textiles, footwear, processed foods, and other products into which developing countries may diversify as development progresses. Yet, according to the International Food Policy Research Institute (IFPRI) and the World Bank, the way developed countries are implementing the URAs is adversely affecting the ability of developing countries to improve their food security and may jeopardize their support for further trade liberalization. U.S. government officials state, however, that because of the URAs, most of the relatively few remaining barriers are being progressively eliminated. A State Department official further noted that the United States and the European Union have a number of preferential arrangements that favor developing countries and allow most agricultural imports. One study, by IFPRI, concluded that a large number of developing countries have liberalized foreign trade in food and agricultural commodities in response to structural adjustment programs and the recent URAs, but OECD countries have not matched their actions. While specific quantities of certain commodities from developing countries still receive preferential treatment, OECD countries have been reluctant to open their domestic markets to developing countries’ exports of high-value commodities such as beef, sugar, and dairy products. In IFPRI’s view, this reduces benefits to developing countries and may make continued market liberalization unviable for them. IFPRI recommended that the next round of World Trade Organization (WTO) negotiations emphasize the opening of OECD domestic markets to commodities from developing countries. According to a World Bank report, without an open trading environment and access to OECD country markets, developing countries cannot fully benefit from the goods they produce that give them a comparative advantage. 
Without improved demand for developing countries’ agricultural products, the agricultural growth needed to generate employment and reduce poverty in rural areas will not occur. Under the Uruguay Round (UR) Agreement on Agriculture, countries generally agreed to eliminate import restrictions, including quotas. However, according to the World Bank, the elimination of agricultural import restrictions through tariffication resulted in tariff levels that in many cases were set much higher than previously existing tariff levels. If developing countries are to adopt an open-economy agricultural and food policy, they must be assured of stable, long-term access to international markets—including those of the OECD, the Bank said. Yet during 1995-96, when international grain prices were soaring, the European Union restricted cereal exports from member countries (by imposing a tax on exports) to protect their domestic customers. An export tax was also applied during a few weeks in 1997. The 1994 URAs included a ministerial decision reached by trade ministers in Marrakesh, Morocco, that recognized that implementation of the UR agricultural trade reforms might adversely affect the least-developed and net food-importing countries. The concern was that as a result of the reforms, these countries might not have available to them adequate supplies of basic foodstuffs from external sources on reasonable terms and conditions and might face short-term difficulties in financing normal levels of commercial imports. 
To obviate this situation, the decision included, among others, agreements to review the level of food aid established periodically by the Committee on Food Aid under the Food Aid Convention of 1986 and to initiate negotiations in an “appropriate forum” to establish food aid commitments sufficient to meet the legitimate food aid needs of the developing countries during the reform program; adopt guidelines to ensure that an increasing proportion of basic foodstuffs is provided to least-developed countries and net food-importing countries in fully grant form and/or on appropriate concessional terms in line with the 1986 Food Aid Convention; and have the WTO’s Committee on Agriculture monitor, as appropriate, follow-up actions. The decision specifically targeted developing countries whose food aid needs may be adversely affected as a result of the UR agricultural trade reforms. It did not establish or propose criteria for assessing whether trade reforms had adversely affected the availability of and terms and conditions for accessing basic foodstuffs. (Methodologically, it could be difficult to separate the effects of the URAs’ reforms from other factors affecting the ability to access food from external sources.) Nor did the decision establish what criteria would be used in determining the “legitimate needs” of different developing countries. For example, would “legitimate needs” be based on a country’s current overall food aid needs, the amount of food aid it received prior to completion of the URAs, the amount of food aid adversely affected by the agreements, or something else? In addition, the decision did not establish any timetable for resolving these issues. Finally, the decision did not clearly identify what would be the appropriate forum for establishing a level of sufficient food aid commitments. 
In March 1996, the WTO’s Committee on Agriculture established a list of eligible countries covered by the decision with an understanding that being listed did not confer automatic benefits. During country negotiations over the content of the proposed World Food Summit action plan in the fall of 1996, there was considerable debate about the ministerial decision. Developing countries attributed recent high world grain prices to UR agricultural reforms and wanted the plan to commit countries to prompt and full implementation of the decision. U.S. negotiators disagreed. They recognized that the high market prices for grain had adversely affected the least-developed and net food-importing countries but said that the reforms were just beginning to be implemented and it was thus too early for the reforms to have had any measurable adverse effects. The summit plan that was finally approved by all countries, in November 1996, states that the ministerial decision should be fully implemented. To date, however, decisions still have not been made about criteria that should be used for judging and quantifying the legitimate food aid needs of developing countries. In addition, no decisions have been made about an appropriate forum or criteria for assessing whether the Uruguay Round trade reforms have adversely affected the availability of and terms and conditions for accessing basic foodstuffs. Consequently, no findings have been made as to whether adverse impacts have already occurred. In December 1996, the WTO ministerial meeting in Singapore agreed that the London-based Food Aid Committee, in renegotiating the Food Aid Convention (scheduled to expire in June 1998), should develop recommendations for establishing a level of food aid commitments, covering as wide a range of donors and donatable foodstuffs as possible, sufficient to meet the legitimate needs of developing countries during implementation of the Uruguay Round reform program. 
In January 1997, Food Aid Committee members indicated they would do so, with an understanding that the committee would direct its recommendations to the WTO and reflect its recommendations in the provisions of a new food aid convention. Agreement on a new convention has not yet been reached. The existing agreement was re-extended and is scheduled to expire in June 1999. According to a U.S. official, if ongoing efforts to negotiate a new agreement are successful, the document should go some distance in assuring food-deficit, low-income countries that the Uruguay Round trade liberalization will not drastically reduce food aid. According to the official, the United States, Australia, Canada, and Japan are pressing hard for conclusion of the negotiations. In January 1998, the FAO Secretariat advised the WTO Committee on Agriculture that there was little it could do in its analyses to isolate the effect of the Uruguay Round from other factors influencing commodity prices. As countries rely more on trade to meet their food needs, they become more vulnerable to possible volatility in world food prices. Price volatility of basic food commodities, especially grains, can be a significant problem for food-insecure countries. Many poor people spend more than half their income on food. FAO and others have suggested that sufficient grain stocks be held to help contain excessive price increases during times of acute food shortages and thus provide support to the most vulnerable countries. However, views differ over the level of global reserves needed to safeguard world food security, the future outlook for price volatility, and the desirability of governments’ holding grain reserves. 
In response to the world grain crisis of the early 1970s, the 1974 World Food Conference endorsed several principles regarding grain stock-holding policies: (1) governments should adopt policies that take into account the policies of other countries and would result in maintaining a minimum safe level of basic grain stocks for the world as a whole; (2) governments should take actions to ensure that grain stocks are replenished as soon as feasible when they drop below minimum levels to meet food shortages; and (3) in periods of acute food shortages, nations holding stocks exceeding minimum safe levels to meet domestic needs and emergencies should make such supplies available for export at reasonable prices. Subsequently, the Intergovernmental Group on Grains established a stocks-to-consumption ratio of 17 to 18 percent as an indicator of a minimum safe global food security situation. As table IV.2 shows, the world grain stocks-to-use ratio reached and exceeded the minimum level in 1976-77 and remained at or above that level for the next 18 years. In the year before the November 1996 World Food Summit, the ratio fell to 14 percent, the lowest level in the previous 25 years. During 1995-96, world grain prices rose significantly. The price of wheat rose from $151 per ton in April 1995 to a peak of $258 in May 1996, an increase of 71 percent. Corn prices rose continuously from $113 in May 1995 to a record $204 in May 1996, an increase of 81 percent. The world price increases were accompanied by high grain prices in many developing countries. In some cases, the latter prices exceeded the world price increases because of simultaneous depreciation of developing countries’ currencies. According to the World Bank, the price increases were a result of a poor U.S. grain harvest in 1995, combined with unusually low world grain stockpiles. 
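The percentage increases cited above follow directly from the quoted prices; a quick arithmetic check (illustrative, function name is ours):

```python
# Verify the 1995-96 grain price increases cited in the text (prices in $/ton).
def pct_increase(start, peak):
    """Percentage rise from start to peak, rounded to the nearest whole percent."""
    return round(100 * (peak - start) / start)

wheat = pct_increase(151, 258)  # April 1995 -> May 1996 peak
corn = pct_increase(113, 204)   # May 1995 -> May 1996 record
print(wheat, corn)  # 71 81
```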
Another factor was China’s entry into world grain markets, with a purchase of 5 million tons in 1995 (after exporting nearly 11 million tons of grain in 1993-94).

Table IV.2 (table data not reproduced) reports total carryover stocks as a percent of world grain consumption; "Not available" indicates years for which data were missing.

Although the high grain prices of 1996 have abated, estimates of the stocks-to-use ratio remained at a low level through early 1998. As recently as April 1998, FAO estimated the ratio would be 15.9 percent for 1997-98. However, FAO revised its figures in June 1998, estimating that the ratio might reach 16.9 percent for 1997-98 and cross the 17-percent threshold in 1998-99. These revisions reflected the expectation of a record grain crop in 1998 and lower feed demand in China, the United States, and some countries affected by the Asian financial crisis. World Food Summit participants said that reserves were one factor, in combination with a number of others, that could be used to strengthen food security. According to the summit action plan, it is up to national governments, in partnership with all actors of civil society, to pursue at local and national levels, as appropriate, adequate and cost-effective emergency food security reserve policies and programs. Summit countries agreed that governments should monitor the availability and nutritional adequacy of their food supplies and reserve stocks, particularly in areas at high risk of food insecurity, among nutritionally vulnerable groups, and in areas where seasonal variations have important nutritional implications. In addition, international organizations, and particularly FAO, were asked to continue to monitor closely and inform member nations of developments in world food prices and stocks. The summit did not identify a minimum level of global grain reserves needed to ensure food security, nor did it recommend any action by countries individually or in concert to achieve or maintain such a level. 
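The 17-to-18-percent minimum safe band discussed above can be expressed as a simple check against the ratios the report cites (an illustration; the constant and function name are ours):

```python
# Compare estimated world stocks-to-use ratios (in percent) against the
# 17-percent lower bound of the minimum safe range cited in the text.
MIN_SAFE_PCT = 17.0

def below_minimum(ratio_pct):
    """True if the stocks-to-use ratio falls below the minimum safe level."""
    return ratio_pct < MIN_SAFE_PCT

# Ratios cited in the report: 14 (1995-96); 15.9 and 16.9 (1997-98 estimates).
print([below_minimum(r) for r in (14.0, 15.9, 16.9, 17.0)])  # [True, True, True, False]
```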
In 1996, FAO invited a group of experts to Rome to consider a number of developments that directly or indirectly influence price stability. These included, among others, production variability, the URAs, and the role of cereal stocks. The group agreed that there was little evidence to reach conclusions on whether production variability at the global level would increase or decrease in the future. Price instability caused by shifts in production between countries that may occur because of the URAs was expected to be slight. The group concurred that ongoing market liberalization initiatives, including those under the URAs, regional trading arrangements, and other unilateral initiatives, should as a whole contribute to stability in international markets by inducing greater adjustments to demand/supply shocks in domestic markets. However, changes under the URAs were not considered to be drastic enough for instability to decrease significantly, as many countries, especially some larger trading countries, still retained instruments and institutions (such as policies similar to variable levies and state trading) that had impeded price transmission in the past. The group agreed that a lack of transparency and consistency in government stock-holding and trade policies had been a source of instability in the past and that less involvement of governments in stock management and a more transparent trade policy should contribute to stability in the future. At the same time, there was considerable doubt whether private stocks would increase to the extent required to offset the shocks that previously were countered by the public sector stocks. The group concluded that increased funds in international commodity markets were expected to influence only within-year price volatility and were unlikely to affect annual price levels in the longer run. 
In addition, there were uncertainties regarding how fast China and countries of the former Soviet Union would be fully integrated into the world agricultural trading system. Overall, the experts agreed that compared to the situation in the past, future world commodity markets would likely retain lower levels of overall stocks but should be less prone to instability due to faster and more broad-based adjustments to production/demand shocks. However, the path to a new market environment was seen as uncertain. The group generally believed that price instability would be greater during the transitional period than after the system had fully adjusted. According to an FAO study prepared for the summit, global stocks are likely to remain relatively low compared with the previous decade, and the chance of price spikes occurring is probably greater than in the past. According to a World Bank study, grain stocks are not likely to return to the high levels of the 1980s, given the current focus on reducing government involvement in agriculture, and with smaller grain stocks, prices could be more volatile than in the past. According to IFPRI, policy changes in North America and Europe could result in a permanent lowering of grain stocks and thus increase future price fluctuations because of a lack of stocks to buffer price variations. IFPRI noted that the moderating or cushioning impact on world price instability that once was exercised by varying world grain stocks has been reduced by the substantial decline in grain stocks in recent years. As a result, IFPRI said, international price instability, if fully transmitted to domestic markets, especially to low-income, food-deficit countries, may raise domestic price instability in these countries. Views differ over whether governments should take action to hold and/or increase grain reserves.
Among the views expressed against increasing or maintaining large government-held reserves are the following:

- Reserves are expensive to accumulate, store, manage, and release. An annual cost of 25 percent to 40 percent of the value of the reserves is not unusual. Developing countries cannot afford such costs; it is cheaper for them to deal with periodic price increases. They should hold only enough stocks to tide them over until replacement supplies can be obtained from international markets.
- It is much cheaper for most countries to rely on trade, using financial reserves or international loans to make up shortfalls. If reserves are to be held, it is more efficient and cheaper to hold reserves in money than in physical stock.
- Governments, including the U.S. government, have not been good at managing stocks.
- Stocks are not the only measures available for coping with price volatility. As a result of market and trade liberalization measures, markets can respond more quickly to shocks, which will lead to much briefer price cycles than those in the past. Free trade permits stocks to be shifted, thereby reducing the need to maintain large amounts of domestic stocks.
- World food supplies have been adequate since the Second World War. Good and bad weather conditions for growing crops tend to balance out across countries. In addition, some crops and food products can be substituted for others, depending on the weather. The problem is not one of supply but of buying power, including when prices rise to high levels. Other measures, such as policy reforms that increase economic development and enable people to buy the food they need, are needed.

Among the views advanced for governments' taking action to increase and maintain emergency reserve levels (some of the views pertain specifically to the United States; others apply to countries more generally) are the following:

- It is good government policy to store grain during prosperous years in order to survive lean years.
- Private companies will not hold many reserve stocks, since it is expensive to do so and governments may limit price increases in times of short supply, thus affecting companies' ability to recoup the added cost of holding emergency reserves.
- Even if governments do not excel at managing reserves, the social costs of their not doing so may be greater.
- The use of emergency food reserves to respond quickly to periodic food shortages in developing countries is the most unobtrusive way for governments to intervene in the market.
- Responsible trade requires that wealthier countries establish and maintain essential grain reserves as a supply safety net (available to other countries when the need arises) and thus encourage and compensate poorer countries for relying on increased trade liberalization.
- If a tight U.S. grains supply situation occurs and export customers perceive that a unilateral U.S. export embargo is plausible, they will intensify their food self-sufficiency goals and seek grain commitments from other exporters.

A 1996 FAO study identified several possible alternatives for mitigating price volatility problems, including national and international measures. However, it is not clear to what extent developing countries, particularly low-income, food-deficit countries, are capable of establishing such measures or the costs and benefits of such measures relative to one another and to grain reserves. The Uruguay Round Agreement on Agriculture limits the use of quotas and variable levies, two measures traditionally employed to deal with price instability. According to the FAO study, a country may adopt a sliding scale of tariffs related inversely to the level of import prices and keep the maximum rate of duty at a level no higher than its agreed rate of duty in the WTO.
If the agreed rate of tariffs is fairly high, which is commonly the case, developing countries may offset variations in import prices by reducing tariffs when prices rise and raising them when prices fall. In addition, at times of sharply rising world prices or sharply rising demand from a neighboring country, it may be possible for a country to limit exports, provided it has taken other countries' food security into account. (See URA on Agriculture, Article 12.) Commodity exchanges, futures contracts, and options could be used to reduce uncertainty associated with price and income instability. However, not all countries could make use of existing exchanges because of lack of knowledge, lack of economies of scale, and/or higher transaction costs. To ease such constraints, the experts suggested establishing nongovernmental institutions to allow a large number of small entities to pool their risks. Countries with sufficient food reserves or cash to purchase food could seek to mitigate the effect of price spikes by providing food aid to meet the unmet food needs of the urban and rural poor. Food aid from international donors could be used to help mitigate the consequences of high increases in the price of imported food. However, with reduced surpluses and budgetary constraints in donor countries, it is not clear how much additional aid would be available when needed. The International Monetary Fund's Compensatory and Contingency Financing Facility can be used by members to obtain credit if they are experiencing balance of payments difficulties arising from shortfalls in export receipts (that is, foreign exchange) or increases in the costs of grain imports—provided these are temporary and largely attributable to conditions outside the control of the countries.
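The sliding tariff scale described above (duty falling as import prices rise, never exceeding the WTO bound rate) can be sketched as a simple formula. The bound rate, reference price, and sensitivity parameter below are hypothetical values for illustration, not figures from the FAO study:

```python
# Sliding-scale tariff: the ad valorem duty moves inversely with the import
# price, clamped between zero and the country's WTO bound rate.
# bound_rate, reference_price, and sensitivity are illustrative assumptions.
def sliding_tariff(import_price, bound_rate=0.60, reference_price=150.0, sensitivity=0.004):
    # Duty rises when the price falls below the reference and falls when it rises above it.
    duty = bound_rate - sensitivity * (import_price - reference_price)
    return min(max(duty, 0.0), bound_rate)

print(round(sliding_tariff(150), 2))  # at the reference price: 0.6 (the bound rate)
print(round(sliding_tariff(250), 2))  # prices spike: duty cut to 0.2
print(round(sliding_tariff(100), 2))  # prices fall: duty capped at the 0.6 bound rate
```

The clamp at the bound rate reflects the WTO constraint mentioned in the text: when prices fall, the duty cannot be raised above the agreed ceiling.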
However, partly because of the conditions and interest costs associated with drawings from the facility and the availability of alternative facilities that are more favorable, countries have not used the facility very frequently over the past 15 years. (The International Monetary Fund believes that price spikes have not been sufficiently frequent since the facility's inception to warrant its use.) The European Union also has a financing mechanism for certain countries, but the financing is limited to covering shortfalls in export earnings (high food import bills are not covered), and the mechanism lacks the funding and concessional terms (below-market interest rates) necessary for wider use by poorer countries. Finally, according to the FAO report, an international insurance scheme could be devised for financing food imports by low-income, food-deficit countries during periods of price instability. Beneficiary countries could finance the system with premium payments. Ideally, such a scheme would operate without conditions. However, according to the FAO study, in practice only a few countries could afford to pay the premiums by themselves. Thus, for countries requiring assistance from developed countries, setting conditions for the use of withdrawals from the insurance facility might be necessary. Following the large increase in grain prices during 1995-96, FAO surveyed the governments of 47 developing countries to determine whether their domestic retail and wholesale prices of grains rose and, if so, how they responded. FAO found that domestic market prices increased considerably in most countries but usually not as much as the world price. (In some countries, prices did not increase or they even fell because of favorable domestic harvests.) Many countries mitigated the price effects by annulling or reducing import duties. Some countries mitigated price effects by further subsidizing already regulated prices of grain products.
At the World Food Summit, countries said they would try to prevent and be prepared for natural disasters and man-made emergencies that create food insecurity and to meet transitory and emergency food requirements in ways that encourage recovery, rehabilitation, development, and a capacity to satisfy future needs. The summit's action plan said that food assistance can also be provided to help ease the plight of the long-term undernourished, but it concluded that food aid is not a long-term solution to the underlying causes of food insecurity. The plan called upon countries' governments to implement cost-effective public works programs for the unemployed and underemployed in regions of food insecurity and to develop, within their available resources, well-targeted social welfare and nutrition safety net programs to meet the needs of their food-insecure populations. The summit did not recommend an increase in development assistance for the specific purpose of helping countries to establish or improve such programs. However, donor countries generally agreed to strengthen their individual efforts toward providing official development assistance equivalent to 0.7 percent of gross national product each year. Over the past several decades, food aid has helped meet some of the emergency and nonemergency food needs of many food-insecure countries. In recent years, food aid has declined significantly. As table V.1 shows, world grain aid shipments increased from 6.8 million tons in 1975-76 to a peak of 15.2 million tons in 1992-93. Shipments in 1997 were 5.9 million tons, about 40 percent of the peak value and about 60 percent of the former World Food Conference target. FAO estimates that shipments in 1997-98 were at about the same level as in 1996-97 (that is, at about 5.3 million tons). According to FAO, grain shipments in 1996-97 were at the lowest level since the start of food aid programs in the 1950s.
Table V.1 also shows a substantial decline in the proportion of food aid provided for program purposes and a steady increase in the proportion of food aid allocated for emergency purposes. In absolute terms, in 1997 project food aid equaled about 54 percent of its peak level (1986-87), emergency food aid was about 55 percent of its peak level (1992), and program aid was about 17 percent of its peak level (1993). Program and project aid combined peaked in 1993 at 11.3 million tons. The combined total for 1997 was 3.5 million tons, or 31 percent of the peak-year total.

[Table V.1: World grain food aid shipments, by type of aid (percent) and all donors (million tons)]

According to a recent FAO forecast, cereal food aid shipments are expected to increase substantially in 1998-99, after 4 years of decline, and reach 9 million tons. FAO attributed the increase to a greater availability of grain supplies in donor countries and higher food aid needs, particularly in Asia. According to FAO, food aid availabilities have been growing in recent months, triggered by relatively low international grain prices and accumulating grain stocks, mostly in the European Union and the United States. (The United States announced in July 1998 that it would increase its wheat donations by up to 2.5 million tons, most of which has been allocated.) On the demand side, financial and economic turmoil has affected the economies of many food import-dependent countries, raising the need for food aid. Although grain prices have declined, countries experiencing severe food emergencies will not necessarily be able to increase commercial cereal imports, FAO said. And the slower growth of the world economy, combined with falling cash crop prices and export earnings, could force some developing countries to sharply cut back on their imports of essential foods. Table V.2 shows how food aid trends have affected the low-income, food-deficit countries (for total food aid, not just grains).
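The shares of peak-year deliveries quoted above follow from the tonnage figures given in the text; a quick arithmetic check of two of them:

```python
# Food aid tonnage shares cited in the text (million tons).
def share_of_peak(value, peak):
    """A year's deliveries as a whole-number percentage of the peak-year figure."""
    return round(value / peak * 100)

print(share_of_peak(5.9, 15.2))  # 1997 grain aid vs. the 1992-93 peak: 39, i.e., "about 40 percent"
print(share_of_peak(3.5, 11.3))  # 1997 program plus project aid vs. the 1993 peak: 31
```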
Food aid received in 1995-96 was at the lowest level since 1975-76 and represented about 50-55 percent of previous peak-year deliveries. During the 1990s, food aid provided to low-income, food-deficit countries has averaged about 78 percent of food aid deliveries to all developing countries; by way of comparison, between 1983-84 and 1986-87, low-income, food-deficit countries averaged more than 92 percent of deliveries. In 1995-96, the proportion of these countries' food imports covered by food aid fell to 8 percent, the lowest level in more than 20 years.

[Table V.2: Food aid to low-income, food-deficit countries (million tons)]

In 1996, FAO estimated that it would take an additional 30 million tons of grain and over 20 million tons (grain equivalent) of other foods simply to bring 800 million chronically undernourished people up to "minimum nutritional standards" (assuming perfect targeting of food assistance and local absorptive capacity). FAO estimated the value of the additional required food at about $13 per person per year (in 1994 dollars), or about $10.4 billion. According to FAO, the world produces enough food to meet the needs of all people, but hundreds of millions remain chronically undernourished because they are too poor to afford all the food they need. In addition, others are undernourished because they are otherwise unable to provide for themselves (for example, because of humanitarian crises), because not enough food assistance has been provided, or because the assistance has not been sufficiently effective. The provision of food aid costing $10.4 billion would require a large commitment compared to recent expenditures on foreign assistance more generally. For example, during 1996 and 1997, net disbursements of ODA by the Development Assistance Committee members of the OECD averaged about $55 billion (1996 prices and exchange rates). Several studies have questioned whether food aid is an efficient means of satisfying nonemergency, chronic food shortage needs.
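FAO's $10.4 billion figure is the per-person cost applied to the undernourished population, and its scale can be compared with the ODA figure cited in the text; a quick check:

```python
# FAO's estimate: $13 per person per year (1994 dollars) for 800 million
# chronically undernourished people, compared with average annual net ODA.
undernourished = 800_000_000
cost_per_person = 13                       # dollars per year
total_cost = undernourished * cost_per_person
print(total_cost / 1e9)                    # 10.4 (billion dollars)

oda_billion = 55                           # average net ODA disbursements, 1996-97
print(round(total_cost / 1e9 / oda_billion * 100))  # about 19 percent of annual ODA
```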
A joint 1991 study by the World Bank and the World Food Program on food aid for Africa reported that food aid may in some cases be a second-best solution and that there are problems in its implementation. The study concluded, however, that it is unlikely that an equal amount of financial aid would be available if the food aid is not provided. The study included a number of specific recommendations for improving the effectiveness of food aid and concluded that food aid contributes substantially to growth, long-term food security, and the reduction of poverty and that its use should continue. A 1993 evaluation of the World Food Program found that while emergency food aid was quite effective, food aid for development had a number of weaknesses. There was little evidence that country strategies seriously addressed the use of food aid to support national priorities. At the project level, many weaknesses were found: the targeting of food aid on the poorest areas and the poorest people was often unsatisfactory, the technical content of projects often left much to be desired, and the phasing out of projects was often not planned. The study made several recommendations to improve the effectiveness and efficiency of the food aid development program. In addition, a 1996 study prepared for European Union member states evaluated food aid commodities that were provided directly to a recipient government or its agent for sale on local markets. Such aid was intended to provide some combination of balance of payments support (by replacing commercial imports) and budgetary support (through governments' use of counterpart funds generated from the sale of the commodities). This study noted the following:

- The impacts of the food aid on food security were marginally positive, but transaction costs were very high, suggesting the need for radical changes to improve effectiveness and efficiency.
- Minor, short-term negative impacts on local food production were common.
- Food aid was still being used, though to a decreasing extent, to support subsidized food sales, which in some countries favored food-insecure and poor households and, in others, urban middle-class and public sector groups.
- The little available evidence suggested that the food aid had modest positive impacts on the nutritional status of vulnerable groups.
- The European Commission and the member states should consider (1) phasing out such assistance, especially in the case of donors with smaller programs, or (2) making radical changes in policies and procedures to increase effectiveness and reduce transaction costs to acceptable levels.

A group of experts meeting at FAO in June 1996 opposed food aid as a regular instrument to deal with market instability because of its market displacement and disincentive effects. A 1997 report prepared for the Australian government recommended that Australia considerably reduce its food aid commitment to the Food Aid Convention and in the future use food aid primarily for emergency relief. In October 1998, the U.S. Agency for International Development (USAID) reported on the results of a 2-year study that it conducted to assess the role of U.S. food aid in contributing to sustainable development during the past 40 years; the study examined six case studies. USAID concluded that U.S. food aid had at times been successfully used to leverage or support a sound economic policy environment and thus promote sustainable development. At other times, however, U.S. food aid had hampered sustainable development by permitting governments to postpone needed economic policy adjustments and, at still other times, had had no discernible effect on a country's economic policy environment. USAID found that providing large quantities of food aid for sale on the open market at the wrong time has at times been a disincentive to domestic food production.
However, targeting food aid to those who lack purchasing power and are unable to buy food has at other times increased food consumption and incomes without adversely affecting domestic food production. In addition, USAID concluded that it is normally more efficient to transfer resources as financial aid rather than as food aid, but in practice this is a moot point because generally the choice is between U.S. food aid and no aid. According to the World Food Program, which distributes about 70 percent of global emergency food aid, some of its emergency relief projects tend to be underfunded or not funded at all because donors direct their contributions to the program's emergency appeals on a case-by-case basis. In addition, the program has problems in ensuring a regular supply of food to its operations more generally because of lengthy delays between its appeals and donor contributions and donors' practice of attaching specific restrictions to their contributions. In 1997, about 6 percent of the program's declared emergency needs were unmet, and 7 percent of its protracted relief operations needs were not satisfied. Table V.3 shows the program's resource shortfall for emergency food aid, including emergency operations and protracted relief operations, for 1998. As the table shows, 33 operations were underfunded and 18 percent of total 1998 needs were not covered.
[Table V.3: World Food Program resource shortfalls for 1998 emergency and protracted relief operations, in millions of dollars; operations included assistance to victims of the Kosovo crisis; refugees, returnees, internally displaced persons, and war victims; crop failures caused by drought ("El Niño"); locust infestation; and feeding for schools affected by unrest]

The countries attending the World Food Summit acknowledged a clear relationship between conflict and food insecurity and agreed that an environment in which conflicts are prevented or resolved peacefully is essential to improving food security. They also noted that conflicts can cause or exacerbate food insecurity. Table VI.1 presents the results of an analysis in which we examined the relationship between four different types of conflict (genocide, civil war, interstate war, and revolution) and the level of food security in 88 developing countries. In general, the table shows an association between countries experiencing conflict and food inadequacy. For example, countries with low levels of average daily calories per capita generally experienced more involvement in conflict proportionately than did countries with higher levels of average daily calories per capita. In terms of types of conflict, for each of the 3 decades shown, all countries that experienced genocide had an inadequate level of food security.
For 2 out of the 3 decades (that is, the 1960s and the 1980s), countries that experienced civil war were more likely to have experienced food inadequacy. Similarly, for 2 out of the 3 decades (the 1960s and the 1970s), countries that experienced interstate war on their own territory were more likely to have been food insecure. In the case of revolution, the relationship runs more in the other direction; for 2 out of the 3 decades, food-secure countries were more likely to have experienced revolution than food-inadequate countries.

[Table VI.1: Conflict and food security in 88 developing countries, by average daily calories per capita]

The summit's policy declaration and action plan stress the importance of promoting sustainable agricultural development in developing countries. In an analysis prepared for the summit, FAO concluded that it was technically possible for the more food-insecure developing countries to increase their agricultural production by substantial amounts and in so doing to contribute significantly to the summit's goal of halving the number of their undernourished people by 2015. According to a U.S. official, the FAO analysis was an important basis underlying the agreement of summit countries to try to halve undernutrition by 2015. At issue is whether the developing countries will be able to achieve the kind of production increases indicated by the FAO study. Table VII.1 shows the key results of the FAO analysis. FAO differentiated between three levels of food-insecure countries: (1) countries with an estimated average per capita daily energy supply (DES) of less than 1,900 calories, (2) countries with an estimated average per capita DES of 2,300 calories, and (3) countries with an estimated average per capita DES of more than 2,700 calories. As the table shows, the proposed goal for 17 group 1 countries is to raise their DES to at least 2,300 and, if possible, 2,500 calories by 2010.
The normative goal for 38 group 2 countries is to raise their DES to at least 2,500 calories and, if possible, to 2,700 calories by 2010. The normative goal for 38 group 3 countries is to maintain DES above 2,700 calories and to achieve a more equitable distribution of food supplies among their citizenry.

[Table VII.1: FAO analysis of daily per capita calorie levels, grain production growth rates (percent per year), and millions of undernourished to 2010 for 93 developing countries]

According to FAO's analysis, if the normative goals were achieved, additional production would deliver 60 percent of the developing countries' additional needed food for consumption. The balance would have to be covered by net imports, which would increase from 24 million tons in 1990-92 to 70 million tons in 2010 (instead of the 50 million tons projected by a 1995 FAO study). FAO estimated that the additional export supply was within the bounds of possibility for the main grain exporting countries. Achieving the production increases previously discussed is not likely to be easy because it requires unusually high growth rates in the more food-insecure countries and, in turn, higher amounts of investment, especially in the worst-off countries. In addition, it requires numerous major changes in these countries, particularly in the rural and agricultural sector. According to FAO, aggregate production must increase rapidly in countries with too-low daily caloric levels and must also contribute to development and generate incomes for the poor. As table VII.1 shows, the group 1 countries would have to more than double their aggregate agricultural production growth rate relative to 1970-92, from 1.7 percent to 3.8 percent per year. FAO considered 3.2 percent the most likely production increase. For several group 1 countries, production increases of 4 to 6 percent annually are implied, according to FAO.
For group 2 countries, the goal is to slow an expected decline in the agricultural production growth rate per year relative to the 3 percent rate during 1970-92. FAO estimated the most likely production increase for these countries at 2.3 percent but said the rate would need to be at least 2.5 percent to achieve the summit goal of halving the number of food insecure by 2010. FAO based its normative targets on fairly optimistic assumptions about expanding domestic production and access to imports, including food aid. In fact, FAO said, extraordinary measures would have to be taken to realize the normative goals. FAO offered the following rationale to justify the targets. Some of the countries had previously achieved average per capita daily caloric levels above the proposed minimum of 2,300 calories. For most of the countries, daily caloric levels were at or near the minimum recorded for them during the previous 30 years. There was a marked correlation between these low levels and the prevalence of unsettled political conditions, which suggested that progress could be made during a recovery period if more peaceful conditions prevailed. Finally, FAO said, the historical record showed that periods of 10-20 years of fairly fast growth in production and consumption had not been uncommon—mostly during periods of recovery (usually from troughs associated with war, drought, or bad policies). Thus, if conditions were created for the onset of a period of recovery, policies and efforts to achieve the required high growth rates could bear fruit. According to one expert, most low-income developing countries and countries of the former Soviet Union and Central and Eastern Europe have large, unexploited gaps in agricultural yields. He estimated that yields can be increased by 50-100 percent in most countries of South and Southeast Asia, Latin America, the former Soviet Union, and Eastern Europe and by 100-200 percent in most of sub-Saharan Africa.
According to the expert, it is technically possible for the world to meet the growing food demands of its population during the next few decades, but doing so is becoming increasingly difficult because of groups that are opposed to technology, whether developed from biotechnology or from more conventional methods of agricultural science. The expert has expressed particular concern about the effect of these groups on the ability of small-scale farmers in developing countries to obtain access to the improved seeds, fertilizers, and crop protection chemicals that have allowed affluent nations plentiful and inexpensive foodstuffs. Under its scenario of the most likely increase in agricultural production in developing countries by 2010, FAO roughly estimated, in a presummit analysis, that gross investment in primary agricultural production in the developing countries would need to increase from $77 billion annually in the early 1990s to $86 billion annually during 1997-2010 (constant 1993 dollars). FAO estimated that another $6 billion of investment would be needed to halve the number of undernourished people in countries with low daily per capita caloric levels. While the $6 billion increase represented only a 7-percent rise, FAO noted that all of the additional investment would be required in the lagging countries. Thus, group 1 countries (table VII.1) would require a 30-percent annual increase in investment, and group 2 countries a 17-percent increase. However, according to FAO, the low-income, food-deficit countries will mostly continue to have very low domestic savings and access to international credit. As a result, both private and public sectors will have difficulty, at least in the short and medium term, in raising the investment funds needed to respond to new production opportunities, even when they have a comparative economic advantage, and there will be a continuing need for external assistance on grant or concessionary lending terms.
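The 7-percent figure can be recovered from the investment estimates given above; a minimal check:

```python
# FAO investment estimates (constant 1993 dollars, billions per year).
baseline = 86    # investment needed under the most-likely scenario, 1997-2010
additional = 6   # extra investment to halve undernourishment in low-calorie countries
print(round(additional / baseline * 100))  # 7 (percent rise)
```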
FAO's presummit analysis did not address, for countries with low daily per capita caloric levels, added investment needs for (1) post-production agriculture and improved rural infrastructure (excluding irrigation), (2) public services to agriculture, and (3) social support in rural areas. Consequently, the analysis may understate the amount of additional investment required in those countries to attain the normative production goals. In addition, there is no indication that bilateral or multilateral donors will increase their assistance by the amounts indicated by the FAO study. In fact, ODA for primary agriculture steadily declined from a peak of $18.9 billion in 1986 (1990 constant prices) to $9.8 billion in 1994. According to FAO, external assistance is almost the only source of public investment in agriculture for many of the poorer developing countries. According to an October 1997 World Bank report, several major regions of the world and many countries that receive the Bank's assistance are agricultural underperformers. These regions and countries have institutions and agricultural policies that discriminate against the rural sector, underinvest in technology development, maintain inappropriate agrarian structures, use arable land for low-productivity ranching, undervalue natural resources and therefore waste them, seriously underinvest in the health and education of their rural populations, discriminate against private sector initiatives in food marketing, and fail to maintain existing or invest in new rural infrastructure. Unless these policies, institutions, and public expenditure patterns are corrected, the Bank said, these countries will not have abundant food supplies. In the Bank's view, rural areas have not been developed for three reasons. First, countries are not politically committed to the broad vision of rural development. Second, for many reasons, international interest in agricultural and rural matters has waned over the past decade.
Third, the Bank has in the past been poorly committed to rural development, and its performance on rural development projects has been weak. For example, according to a Bank official, a 1993 review found that Bank expenditures on agriculture and rural development had declined from $6 billion to about $3 billion and that less than half of the Bank’s projects in the area were successful. Following the review, the Bank conducted additional analyses and developed a vision statement for its future work in the area. In September 1996, the Bank’s President announced that rural development would be one of six key Bank objectives. To tackle the issue of weak commitment at the country level, the Bank is focusing on improving its strategies for country assistance. According to the Bank, the strategies define the key issues for development, analyze the current and future prospects for dealing with the issues, and provide the overall context within which Bank operations are undertaken. The Bank believes that the strategies are crucial to renewing the commitment by countries and the Bank to rural growth. The Bank plans to build a comprehensive rural development strategy into each of its overall country assistance strategies. According to the Bank, no approach to rural development will work for all countries, and developing and implementing rural strategies will be complex for most countries. The Bank believes that if country assistance strategies include well-defined, coherent rural strategies and treat agriculture comprehensively, the chances for a sustained and effective rural sector program will be substantially improved. Even so, in October 1997, a Bank report acknowledged that there were still wide differences of opinion within the Bank and among its partners as to the priority that should be given the rural sector. 
Summit countries agreed to set out a process for developing targets and verifiable indicators of national and global food security where they do not exist, to establish a food insecurity and vulnerability information and mapping system, and to report to the Committee on World Food Security on the results produced by the system. On March 24-25, 1997, FAO convened a group of experts to discuss ways and means of implementing such a system. This group recommended a series of initial steps to take prior to the CFS meeting in June 1998. Subsequently, an interagency working group was established to promote development of the information and mapping system. (Membership included 21 international agencies and organizations, including bilateral donor agencies.) The working group met in December 1997 and April 1998. The FAO Secretariat helps staff the work of the group between meetings. According to FAO, among the key tasks identified for establishing the information and mapping system are the following:

- Designate country focal points for all information and mapping system matters.

- Develop an awareness and advocacy strategy for end-users of the system; where key national policymakers are not fully aware of the need for strong food insecurity and vulnerability information systems, secure their commitment to provide adequate and continuing support for the establishment and maintenance of such systems.

- Inventory available as well as planned data collection systems at both the international and national levels, and evaluate the quality and coverage of their data; at the national level, identify and prioritize the information needs of key food security decisionmakers and determine to what extent those needs are already met; define a priority set of information required by national decisionmakers and a set of verifiable objectives; and set out a scheduled program of initiatives and activities to meet those objectives.

- Define the conceptual framework and scope of the information and mapping system, including the indicators to be used at both the national and international levels for identifying (down to at least the household level) people who are food insecure or at risk of becoming food insecure, the degree of their undernutrition or vulnerability, and the key factors or causes of their food insecurity or vulnerability. When agreement on system indicators is reached, complete and issue guidelines for the establishment of the system at the national level.

- Inventory national systems to determine to what extent the information and mapping system indicator needs are already met; identify significant gaps and weaknesses; assess the cost and time required to implement the information and mapping system and to what extent, if any, countries require technical or financial assistance; and set out a scheduled program of initiatives and activities for establishing an effective system.

- Identify and prepare a computerized system for compiling and analyzing multisectoral data and an information system for mapping, posting, and disseminating information accessible to all users.

- Ensure the exchange of information among international agencies and organizations on all aspects related to food insecurity and vulnerability information and mapping, and do the same at the national level.

By the time of the June 1998 CFS meeting, none of these tasks was complete. Two reports, based on the interagency working group’s work, were provided to CFS for its June 1998 meeting. The first was a proposed plan for continuing and future work on the information and mapping system. The plan included a long list of tasks, but the items were not prioritized, and no schedule for completing them was suggested. The second was a report providing background information and principles that could be followed in establishing national information and mapping systems. 
The report could be useful to officials interested in how to go about developing an awareness and advocacy strategy for end-users of the system within their countries, including securing the support of national decisionmakers. The interagency working group and FAO Secretariat had been taking an inventory of available information for use in the information and mapping system at the international level. However, no report on the results was available for the June 1998 CFS. The Secretariat, interagency working group, and member countries had not yet begun to debate what indicators should be used for the system. At the June 1998 CFS meeting, a number of countries stressed the need for a decision on what indicators to use so that member countries could take steps toward measuring progress in achieving the overall summit goal. A March 1997 technical advisory group and the CFS have stressed the need to involve FAO countries in the design of the information and mapping system. However, the interagency working group has not asked member countries to identify and prioritize their information needs, determine the extent to which those needs have already been met, and share the results with the interagency working group. Only a few developing countries sent representatives to the first interagency working group meeting. Fourteen developing countries were invited to the second meeting, and 12 countries sent representatives. The interagency working group met for the third time in November 1998. No developing countries sent representatives to the meeting. There was some discussion of indicators that might be used at the national and international levels for a food insecurity and vulnerability mapping system and of existing international data systems from which some indicators could be drawn. However, no proposals were offered and no attempt was made to reach agreement on a common set of indicators for use at the national or international level. 
The group is not scheduled to meet again before the next CFS meeting, which will be held in June 1999. Since agreement had not been reached on the information and mapping system indicators, detailed technical guidance to countries on how to develop information on the indicators and establish the system at the national level also had not been developed. Similarly, member countries had not been able to identify whether their existing systems meet their needs or assess the time, financial resources, and technical assistance required to establish national systems. The interagency working group and the Secretariat have made progress in identifying a computer system for compiling and analyzing data and an information system for mapping, posting, and disseminating the information. However, the work is not yet complete. A cooperative process is underway among U.N. and other international agencies. For example, FAO and the International Fund for Agricultural Development hosted the first and second meetings of the interagency working group, respectively, and the World Bank hosted the third meeting. Agreements have been reached for sharing information among some of the agencies, for example, between FAO and the World Food Program. However, FAO officials told us that problems have arisen in the exchange of information and that the World Food Program and the World Health Organization had not yet made important data sets available. As of mid-December 1998, only about 60 countries had identified focal points. In commenting on a draft of this report, FAO officials said that considerable progress has been made in addressing the key tasks for establishing an information and mapping system and that implementing many of the tasks requires a longer period of time. In addition, FAO said, many developing countries have difficulty in mobilizing the required resources. 
According to FAO, only about 15 countries are currently engaged in establishing national food insecurity and vulnerability mapping systems, with or without international assistance. FAO said that the interagency group is working on a technical compendium, to be issued in mid-1999, which will provide more detailed technical guidance to prospective users on technical issues related to the selection of indicators, the cut-off points, the analysis of data, and so forth. World Food Program officials noted that their program is actively involved in the interagency working group that is promoting development of a food insecurity and vulnerability information mapping system, cited several specific areas of cooperation that involve the agency and FAO, and said the program recently made available a data base on China that includes data at the provincial and county level. At the same time, program officials said that the November 1998 meeting of the interagency working group did not resolve the issue of mechanisms to be used in the development of an international food insecurity and vulnerability mapping system data base as well as the possible technical composition of the data base. Several different systems (FAO, World Bank, and the World Health Organization) offer possible alternatives, the officials said. They said the meeting discussed the issue of availability of data sets and data-sharing, and all participants are aware that many complications relate to data copyrights issues. Such issues will need to be resolved at the political level, officials said, before free data-sharing becomes a practical reality. The summit action plan stressed a need to improve coordination among governments, international agencies, and civil society. Numerous organizations are involved in food security issues, including FAO, the World Health Organization, the U.N. 
Development Program, the World Bank, the International Monetary Fund, the WTO, regional development banks, key donor countries, for-profit private sector companies, and NGOs. Since the summit, international groups have taken steps to promote better coordination, but problems still exist. In February 1997, FAO and the International Fund for Agricultural Development proposed that the U.N. resident coordinator in each country facilitate inter-U.N. coordination and that FAO headquarters establish and manage a network among the U.N. and non-U.N. agencies. The United Nations’ Administrative Committee on Coordination (ACC) endorsed this proposal in April 1997 and authorized FAO to consult with other U.N. agencies on detailed arrangements to establish the network and a detailed work plan. The United States succeeded in placing the issue of food security coordination on the agendas of the 1997 Group of Seven developed countries’ economic summit in Denver, Colorado, and the 1997 U.S.-European Union Summit. Despite these actions, coordination problems continued. For example, at a June 1997 meeting of the Food Aid Forum, the European Union and 11 other countries attending the meeting expressed concern about the uncoordinated nature of food aid in contributing to food security goals. They said that global food aid policy components were scattered among a number of international organizations and other forums, each with different representatives and agendas, and that they lacked effective coordination. In addition, they said that systemic coordination of food aid at the regional and national levels was needed. To improve coordination and the effectiveness of food aid, the European Union is drafting a proposed code of conduct for food aid. The code of conduct is to include a statement of responsibility for both food aid donors and recipients and stress the need to ensure optimal use of food aid resources. 
Another coordination problem concerned rural agricultural development. In October 1997, the World Bank reported that in virtually all of the countries it works with, many donors and multilateral financial institutions are promoting often disjointed projects. According to the Bank, these projects are launched when the policy environment is not favorable and a coherent rural strategy is lacking. Consequently, many of the projects fail to achieve their development objectives and undermine local commitment and domestic institutional capacity. Other examples of coordination problems concern FAO’s Special Program for Food Security, a Telefood promotion to raise money, efforts to help developing countries develop food security action plans for implementing summit commitments, FAO coordination with NGOs, and FAO coordination with other U.N. agencies. The intent of FAO’s Special Program for Food Security, an initiative of FAO’s Director-General, is to provide technical assistance to help low-income, food-deficit countries increase their agricultural production. The program began in 1995 with a pilot phase involving 18 countries. At a spring 1997 meeting of the CFS, many developed countries expressed concern about the program. For example, the European Union representative said FAO was not sufficiently emphasizing the need for policy reform, donor coordination, and rural development, as called for by the summit, and was not developing the program in a sufficiently participatory manner to allow recipient countries to take ownership of the program. The United States and other countries also complained about a lack of information on the costs and results of the program and expressed concern that the program was using FAO resources needed for summit implementation and FAO’s traditional normative work. According to a U.S. 
official, the United States was concerned that FAO was using the special program to become a development agency rather than an agency that sets standards for countries to follow. The official also said that the FAO Director-General had not been responsive to donor concerns about the program. In commenting on a draft of this report, FAO officials said that we did not adequately reflect the views of developing countries that are the main beneficiaries of the program, nor did we recognize that the special program was an initiative of the Director-General that was approved by the FAO membership. Moreover, FAO said that the special program is now part of its regular Program of Work and Budget. USDA officials advised us that our discussion of the April 1997 events was correct but that since then, the FAO Director-General had been responsive to concerns expressed about the program. For example, FAO has provided factual data on the program’s activities, and while early discussions about the program had emphasized supporting questionable large capital projects, the focus of the program has since shifted to encouraging many small projects. In 1997, the FAO Director-General announced plans to put on a 48-hour global television program to mobilize public opinion and financial resources to pay for the Special Program and other food security activities. Participating countries were to organize national broadcasts, to be held on October 18 and 19, 1997, centered on World Food Day, an annual event designed to raise awareness about food security problems. According to the Director-General, the telecast was an important way to raise money for FAO’s Special Program in light of declining aid levels from donor countries. The main purpose originally was to raise public awareness of food problems and, only as a secondary suggestion from member countries, to mobilize resources for micro-projects providing direct support to small farmers. 
In general, donor countries did not initially support the Telefood initiative when it was discussed at the April 1997 CFS meeting. Some key donor countries, such as the United States, Australia, and Canada, announced they would not participate in the telecast because the proposal (1) had not been reviewed or approved by FAO members; (2) lacked participation by civil society in each country; (3) was designed to help fund the Special Program, which was viewed as not fully reflecting World Food Summit commitments; and (4) would impinge upon national NGO fundraising activities centered on World Food Day. In November 1997, FAO indicated that the operation was successful and invited FAO members to take all measures they deemed appropriate to promote Telefood in the future. According to FAO, 58 countries participated in awareness-raising activities in the 1997 Telefood, including 5 developed countries (France, Greece, Italy, Japan, and Turkey). Twenty of the countries also engaged in fundraising, including one developed country (Japan). For the 1998 Telefood, 45 countries participated in awareness activities, and 35 of these countries also engaged in fundraising. Five developed countries (Italy, Japan, Portugal, Spain, and Turkey) participated in both sets of activities. In commenting on this report, FAO officials acknowledged that concerns had been expressed about supporting events that might be seen as competing with the activities of nongovernmental organizations (NGOs) but said that most Telefood supporters came from civil society. USDA officials said that the United States was critical of Telefood in spring 1997 but expressed support for the program later in the year. They said that the United States now recognizes that Telefood may be a significant activity for other countries and that it can help in raising consciousness about food insecurity. 
Shortly before the summit was held, the FAO Director-General ordered that food security strategy papers be drafted for each member country, including developed countries. (According to FAO officials, papers for the developed countries would simply describe the food security situation in each country and not include recommendations.) The Director-General did so without advising or securing the approval of at least some member countries, including the United States. The strategies for the developing countries reportedly included recommendations for improving food security that focused on the agricultural sector. FAO officials told us that each paper cost approximately $2,000 to produce and was drafted over a 2-week period. Sixty strategy papers, prepared before the summit was held, were reviewed jointly by FAO, the associated member country governments, and the World Bank. By April 1997, about 90 papers had been drafted, and parliaments in about 20 countries had approved the documents as national action plans for implementing World Food Summit commitments, according to FAO officials. At the April 1997 CFS session, donor countries expressed concern that civil societies of the countries had not been involved in preparation of the strategies, even though the summit action plan stressed the need for civil society to participate in planning, promoting, and implementing measures for improving food security. Donors were also concerned that the presummit strategies would not reflect the full range of commitments and actions agreed upon by summit participants. Also of concern was the short amount of time allotted for drafting the papers. Several FAO officials indicated that 2 weeks was not sufficient time to prepare sound country strategy papers. They noted that prior FAO preparation of country strategies typically took about 6 months. 
FAO officials also acknowledged that FAO lacked expertise in several key areas related to food security, such as macroeconomic and political policy reform, that were emphasized by the summit. In general, the donors were also displeased about FAO’s funding of country briefs for the developed countries. Countries had written position papers on their individual approaches to food security during preparations for the summit. Representatives from several developed countries noted that neither FAO nor FAO contractors had contacted their governments to obtain key data and information on the status of country efforts to develop country action plans. The European Union representative instructed FAO to stop preparing briefs on the European Union’s member states unless one of its countries specifically requested that FAO do so. FAO staff told us that the country strategies had been well received by the developing countries, were not meant to substitute for action plans developed by the civil society of each country, and were only a starting point to stimulate discussion and debate. However, donor country governments and other key groups were not invited to critique the drafts. Moreover, completed strategy papers and briefs have not been made available to other FAO members. According to FAO, as of June 1998, FAO had provided assistance to 150 countries in preparing strategy briefs. The summit action plan said coordination and cooperation within the U.N. system, including the World Bank and the International Monetary Fund, are vital to the summit follow-up. Governments agreed to cooperate among themselves and with international agencies to encourage relevant agencies within the U.N. system to initiate consultations on the further elaboration and definition of a food insecurity and vulnerability information and mapping system. As part of an already existing effort by U.N. agencies to coordinate follow-up with major U.N. 
conferences and summits since 1990, these governments also agreed to seek to reduce duplications and fill gaps in coverage, defining the tasks of each organization within its mandate, making concrete proposals for their strengthening, for improved coordination with governments, and for avoiding duplication of work among relevant organizations. The summit plan also requested that the ACC ensure appropriate interagency coordination and, when considering who should chair any mechanisms for interagency follow-up to the summit, recognize the major role of FAO in the field of food security. In April 1997, the ACC approved a proposal to establish a network on rural development and food security as the mechanism for providing interagency follow-up to the summit. At the country level, the network consists of thematic groups established under the U.N. Resident Coordinator System. According to FAO, these groups typically include U.N. agencies, national institutions, bilateral donors, and civil society representatives. At the headquarters level, the network includes 20 U.N. organizations that participate in and support the country-level groups. The network is jointly coordinated and backstopped by FAO and the International Fund for Agricultural Development, in close cooperation with the World Food Program. Despite these efforts, FAO, other U.N. agency officials, and U.S. officials advised us that coordination problems continue. For example, an FAO official said that in May 1998, the U.N. Economic and Social Council met to review a set of indicators for measuring follow-up to the various U.N. conferences and summits. According to the official, FAO had not been involved in the exercise to create the indicators, and the proposed indicators did not adequately represent food security issues. 
As discussed in appendix VIII, FAO officials told us that although the World Food Program and World Health Organization have been cooperating in establishing an information and mapping system, FAO was still waiting to receive previously promised data from the organizations. According to both FAO and U.N. Children’s Fund officials, their two agencies have had problems coordinating with each other. In commenting on a draft of our report, FAO officials noted that coordination problems exist even at the national level among ministries and agencies and said that such problems are inevitable in the U.N. system of agencies as well. However, FAO said great efforts had been made, particularly in the framework of the Administrative Committee on Coordination, to improve the cooperation and synergy among the different institutions. According to officials, the network on rural development and food security is growing rapidly and proceeding satisfactorily. The summit directed FAO’s Committee on Food Security to monitor and evaluate progress toward national, subregional, regional, and international implementation of the action plan, using reports from national governments, the U.N. system of agencies, and other relevant international institutions. Governments are to provide regular reports on progress made to the FAO Council and the U.N. Economic and Social Council. The summit also directed that NGOs and other interested parties should play an active role in this process, at the national level and within CFS itself. Since the summit, countries have provided their first progress report to CFS and the FAO Secretariat, and planning has begun for a revised format for future reports. NGOs have made some progress in increasing their involvement in food security efforts, but not as much as they would like. In April 1997, CFS decided that the first report would cover progress through the end of 1997 and the reporting procedure would be provisional. 
Reports would be prepared by national governments, U.N. agencies, and other relevant international institutions and were to be received by the FAO Secretariat by January 31, 1998. Countries agreed to report on actions taken toward achieving the specific objectives under each of the seven statements of commitment (following the format of the summit plan of action) and include information on the actors and, if available, results, including quantitative assessments, under each of the objectives. CFS allowed each country to decide whether to report on the specific actions included in the summit’s action plan. CFS emphasized that the information should include some analysis on how national policies and actions were geared toward, and effective in, achieving the food security objective of reducing the number of undernourished. A more detailed reporting format, proposed to CFS by the Secretariat, was not approved. CFS did not set any other requirements concerning the information to be provided. A proposal by some delegates that countries provide baseline information on actions taken to implement each of the seven commitments was noted but not endorsed as a requirement. Countries were not asked to provide baseline information on the number of their undernourished, the extent of undernourishment, or the principal causes of undernourishment. Nor were they asked to provide baseline information regarding actions already underway or planned or information on targets and milestone dates for implementing actions. They were not asked to provide information on actual or planned expenditures for implementing actions. Although CFS did not ask for baseline or target information, in a July 1997 letter to countries, FAO’s Director-General said that the first report after the World Food Summit was of the utmost importance and would be of critical value in setting baselines and the orientations that governments intend to pursue. 
He also said it was expected that governments’ reports would cover the contributions of all relevant partners at the national level, including governmental institutions, as well as nongovernmental and private sector actors. In addition, he asked for a one-page summary of the major food security issues that each country was facing and the priority targets being addressed through implementation of the plan. By the January 31, 1998, due date, only 5 countries had provided progress reports to the Secretariat; as late as March 31, 1998, only 68 of 175 country reports had been received. The Secretariat analyzed and summarized the results in a report for the CFS’ June meeting but drew no overall substantive conclusions because (1) information on policies and programs predominantly covered continuing actions already taking place at the time of the summit, (2) the Secretariat’s analysis of country actions was limited to 68 reports, (3) the countries only provided selective information rather than focusing on all the issues involved, (4) some countries provided descriptive rather than analytical information, and (5) some countries reported only on certain aspects of food security action such as food stocks or food reserve policies. The Secretariat said future reports need to be oriented more toward providing a precise analysis of selected situations, actions conducted over time to address them, results obtained, and reasons for such results. To date, CFS’ approach to monitoring and evaluation of country performance has focused on encouraging countries to report on actions taken and the impact of the actions on food security. Under this approach, the FAO Secretariat seeks to summarize the results across all countries. CFS has not considered directly assessing the quality of a country’s overall action plan—including strategy, programs, resources, targets, and milestones for achieving the summit commitments, objectives, and actions. 
Secretariat officials told us that they lack sufficient staff to evaluate action plans for all CFS members. The Secretariat prepared a report for the June 1998 CFS session that included a proposed standard format for reporting future progress in implementing the plan. The proposal was considerably more structured than the format CFS asked members to use for the provisional report provided in 1998. The proposal included suggestions regarding essential substantive points to be addressed in future reports. Prior to convening on June 2, CFS held a 1-day working group meeting on June 1 to examine the Secretariat’s proposals and report on them to CFS. However, the working group did not debate the proposals, and CFS did not reach any decisions on the essential points to be included in future progress reports. CFS directed the Secretariat to collaborate with member states and other concerned partners in the continuing preparation of a set of indicators for measuring progress in implementing the plan and said the work should be completed sufficiently in advance to be used by CFS in preparing for its session in the year 2000. CFS also directed the Secretariat to further develop an analytical framework for preparing future reports and assessing progress in implementing the summit action plan. The summit action plan directed that civil society be involved in CFS’ monitoring and that governments, in partnership with civil society, report to CFS on national implementation of the plan. The plan’s directive is consistent with a growing interest in involving civil society to help promote the objectives and work of international agencies during the past decade in response to various transformations within and across countries. For example, the globalization of the economy has reduced the ability of individual governments to control the direction of development. 
Structural adjustment reforms have led to a redefinition of the role of the state in many countries, reducing its function as a doer and provider and leaving it to the private sector and citizen initiatives to take on responsibilities for services it no longer provides. The demise of authoritarian regimes in many countries has created opportunities for groups and collective initiatives of many kinds to spring up and make their voices heard. Increasing the role of civil society in CFS is not easily accomplished since FAO was created as an intergovernmental forum and operates by consensus of all the members. Unless the members of CFS agree to allow for NGO participation, this cannot occur. According to several U.N. officials with whom we spoke, developing countries are generally opposed to greater involvement by NGOs in U.N. agencies, including FAO. According to FAO and other participants, if CFS member countries agree that civil society should have a greater role, a variety of practical questions must be addressed. For example, how can FAO deal effectively and equitably with the large number of civil society organizations that would like to be heard, the variety and number of conflicting views and interests that they express, the disparities in their legitimacy and representativeness, and the difficulties many NGOs in developing countries have in gaining access to information and policy forums? In addition, given limited resources, where should priorities lie in promoting policy dialogue, and how can links between national and global levels be promoted? Some NGOs believe that some of these issues could be addressed if NGOs were allowed to hold separate meetings for developing consensus positions and selecting a few NGOs to represent them in CFS meetings. At the April 1997 CFS session, several delegates suggested that ways be considered for strengthening or widening the participation of civil society organizations in the work and deliberations of CFS. 
CFS asked the Secretariat to take interim measures to broaden NGO participation at the 1998 session of CFS and agreed to examine the issue in greater detail at that time. In responding to the April 1997 CFS session, the Secretariat took several positive actions prior to June 1998. It increased the number of NGOs invited to the June 1998 CFS meeting, made documents available on the FAO website about 1 month prior to the meeting, and provided FAO countries with a copy of a proposal by a group of NGOs for enhanced civil society participation. The proposal identified a number of specific actions that could be taken to increase NGO opportunities for participation before and around CFS meetings. NGOs expressed particular disappointment about not being allowed to make prepared statements in CFS meetings until after government delegates had spoken and said if they were to make the effort of participation, they needed to be assured of a say in decision-making and to know that NGO positions could at least be reflected in CFS reports. The Secretariat also sought the views of seven NGOs on such issues as the environment for civil society organizations, building dialogue with governments, and how civil society’s views could be better taken into account given the intergovernmental nature of FAO. The seven NGOs provided their views in an information paper that was made available for the CFS June meeting. In addition, the Secretariat drafted its own paper on how the NGOs’ role could be enhanced in CFS and invited the CFS Bureau to approve the paper for use at the June 1998 meeting. Notwithstanding the positive steps taken by the Secretariat and CFS’ April 1997 decision, CFS did not seriously consider the issue in 1998. For example, the CFS Bureau, a small executive committee, did not approve the Secretariat’s paper for use at the June 1998 CFS session, and the issue was not included in the provisional agenda for the meeting.
At the opening of the session, Canada, with support from the United States, proposed that the provisional agenda be amended to include a discussion of the role of civil society. However, rather than permitting debate on the proposal, the CFS Chairman announced that he had decided to seek to satisfy NGOs’ interests by holding informal discussions with them. Subsequently, the Chairman advised the NGOs that he and the CFS Bureau would meet with representatives of five NGOs. During the morning of the second day of the CFS meeting, the United States again proposed that civil society participation be added to the agenda and asked that it be addressed without further delay. The Chairman agreed to add the item to the agenda but postponed discussion until the end of the third day’s meeting. During the abbreviated discussion, various ideas for broadening civil society participation were noted. However, some delegates, including China, stressed that CFS is an intergovernmental forum and that any measures taken to broaden participation would need to respect that principle. At the conclusion of the June session, CFS countries agreed to make the issue of increased civil society participation in its activities a main agenda item for the 1999 meeting. It asked the Secretariat to prepare and circulate a discussion paper at least 6 months prior to the next meeting to allow ample time for consultations between governments and national civil society organizations. The Secretariat was also asked to analyze the pros and cons of proposals, including their legal, procedural, and financial implications. According to a statement presented on behalf of NGOs that attended the June 1998 CFS session, the involvement of civil society organizations in preparing national reports on progress in implementing the summit’s action plan was varied. 
In some cases, NGOs had written inputs; in other cases, NGOs gave their views orally in meetings with government officials; and in numerous other cases, civil society was not invited to participate in the drafting of the national report. At the request of Senator Russell D. Feingold, Ranking Minority Member of the Subcommittee on African Affairs, Senator John Ashcroft, and Congressman Tony P. Hall, we reviewed the outcome of the 1996 World Food Summit and key factors that could affect progress toward achieving the summit’s goal. Our overall objective was to comment on key issues and challenges related to developing countries’ achieving the summit’s goal of reducing undernourishment by half by 2015. Our overall approach was to analyze and synthesize information from a wide variety of primary and secondary sources. To address the current status of global food security, the summit’s approach to reducing food insecurity, and the summit’s possible contribution to reducing hunger and undernutrition, we did the following: reviewed documents and studies by the FAO, the U.N. 
Children’s Fund, the World Health Organization, the World Bank, and the World Food Program; the Organization for Economic Cooperation and Development; the Consultative Group on International Agricultural Research; IFPRI; USDA, USAID, the Department of State, and the Department of Health and Human Services; and various academics, NGOs, and private sector entities concerned with past and possible future efforts to reduce poverty and undernutrition; discussed issues concerning the extent and causes of undernutrition with national and international experts in food security, including experts at FAO, the World Food Program, the World Bank, IFPRI, USDA, USAID, the Department of State, various NGOs, and universities and international food companies; observed presummit negotiations over the text to be included in the World Food Summit’s policy declaration and plan of action, the World Food Summit, and subsequent FAO follow-up meetings to the summit (the latter include the April 1997 CFS meeting, the November 1997 FAO Conference meeting, and the June 1998 CFS meeting); attended various other conferences and seminars where food security and related issues were discussed; and developed a database on country-level estimates of undernutrition and various economic, political, and social variables possibly associated with food insecurity, including private sector resource flows and investors’ ratings of the risk associated with investing in countries. We did not validate the reliability of these data.
To address the current status of global food security, more specifically, we reviewed methodological issues associated with efforts to accurately identify and measure the extent of undernutrition; reviewed FAO, USDA, and World Health Organization estimates of the number of undernourished people or children in up to 93 developing countries that collectively account for about 98 percent of the population in the developing world; used FAO estimates of the number of undernourished people in 93 developing countries to calculate and describe (1) the distribution of the total number of undernourished people across countries and (2) the variation across countries in the proportion of population that is undernourished; and compared FAO and USDA estimates of the number of undernourished people in 58 low-income, food-deficit countries to show to what extent the estimates differ. To describe the summit’s policy declaration and action plan for reducing food insecurity, we reviewed both and prepared a table summarizing the 7 major commitments, 27 supporting objectives, and 24 of the 181 supporting actions. The latter were selected to further illustrate the depth and specificity of the summit’s plan. To provide perspective on the summit’s goal of halving the number of undernourished people by 2015, we reviewed and compared FAO and USDA estimates on the number of undernourished people in developing countries. In addition, we analyzed a variety of key issues associated with the summit’s proposed commitments, objectives, and actions for halving undernutrition by no later than 2015. These issues concern the ability and willingness of countries to reasonably measure the prevalence of undernourishment and the possible effects of trade liberalization, grain reserves, food aid, conflict, increased agricultural production, policy reforms, resources, coordination, and monitoring and evaluation of progress in reducing food insecurity. 
For example, we analyzed countries’ agricultural production growth rates relative to food insecurity levels and the aggregate number of undernourished people in these countries. To assess the impact of trade liberalization on food security, we reviewed various analyses of the subject, including two detailed estimates of the projected income impacts of the URAs on major regions of the world and several major trading countries. To provide perspective on trends and issues associated with grain reserves and food aid, we analyzed data on (1) world private and government grain reserves and the ratio of total grain reserves to world cereal consumption; (2) world and U.S. cereals shipments of food aid in terms of total quantities and the proportion provided as program, project, and emergency aid; and (3) total food aid deliveries to low-income, food-deficit countries and as a percent of total global food aid deliveries. We also analyzed country-level data on average per capita caloric levels and related this measure of food security to other country-level variables, including (1) the incidence of civil war, war, revolution and genocide during 1960-89; (2) the level of income; and (3) creditworthiness ratings of the risk associated with investing in these countries; related country-level data on the number of undernourished people to (1) income levels of developing countries, (2) total official and private resources provided to these countries, and (3) creditworthiness ratings of the risk associated with investing in the countries; and analyzed data on the role of official development assistance and private sector investment in developing countries during 1990-97. To comment on the issues of (1) improving coordination among governments, international agencies, and civil society and (2) monitoring and evaluating their progress in implementing the summit action plan, we considered information that became available to us in some of our previously discussed actions.
For example, we relied heavily on the FAO Secretariat’s assessment of individual developing and developed country progress reports that were provided to the Secretariat during early 1998. We did not undertake a comprehensive study of actions taken by governments, international agencies, and civil society to improve coordination and monitor and evaluate progress toward achieving summit commitments. We conducted our review from February 1997 to September 1998 in accordance with generally accepted government auditing standards. Phillip J. Thomas, Wayne H. Ferris, Gezahegne Bekele, and Edward George contributed to this report.

Pursuant to a congressional request, GAO provided information on the outcome of the 1996 World Food Summit, focusing on factors that could affect progress toward meeting world food security goals.
GAO noted that: (1) the 1996 World Food Summit brought together officials from 185 countries and the European Community to discuss the problem of food insecurity and produced a plan to guide participants' efforts in working toward a common goal of reducing undernutrition; (2) to reach this goal, they approved an action plan, the focus of which is to assist developing countries to become more self-reliant in meeting their food needs by promoting broad-based economic, political, and social reforms at local, national, regional, and international levels; (3) the participants endorsed various actions but did not enter into any binding commitments; (4) they also agreed to review and revise national plans, programs, and strategies, where appropriate, so as to achieve food security consistent with the summit action plan; (5) according to U.S. officials, a willingness on the part of food-insecure countries to undertake broad-based policy reforms is a key factor affecting whether such countries will achieve the summit goal; (6) other important factors that could affect progress toward achieving the summit goal are: (a) the effects of trade reform; (b) the prevalence of conflict and its effect on food security; (c) the sufficiency of agricultural production; and (d) the availability of food aid and financial resources; (7) also needed are actions to monitor progress, such as the ability and willingness of the participant countries to develop information systems on the status of food security and to coordinate, monitor, and evaluate progress in implementing the summit's plan; (8) given the complexity of the problems in each of these areas, participants acknowledged that progress will be difficult; (9) the Food and Agriculture Organization's (FAO) Committee on World Food Security requested that countries report to the FAO Secretariat on their progress in meeting the summit's goal in 1998, but many countries did not respond in a timely fashion; (10) in addition, some reports 
were more descriptive than analytical, and some reported only on certain aspects of food security actions; (11) thus, the Secretariat was unable to draw general substantive conclusions on progress made to reduce food insecurity; and (12) the Agency for International Development said that the level of effort by both donor and developing countries will probably fall short of achieving the summit's goal of reducing chronic global hunger by one-half.
To be eligible for Medicaid, individuals must be within certain eligibility categories, such as children or those who are aged or disabled. In addition, individuals must meet financial eligibility criteria, which are based on individuals’ assets—income and resources together. Once eligible for Medicaid, individuals can receive basic health and long-term care services, as outlined by each state and subject to minimum federal requirements. Long-term care includes many types of services needed when a person has a physical disability, a mental disability, or both. Individuals needing long- term care have varying degrees of difficulty in performing some ADLs and instrumental activities of daily living (IADL). Medicaid coverage for long-term care services is most often provided to individuals who are aged or disabled. Within broad federal standards, states determine the need for long-term care services by assessing limitations in an applicant’s ability to carry out ADLs and IADLs. Most individuals requiring Medicaid long-term care services have become eligible for Medicaid in one of three ways: (1) through participation in the Supplemental Security Income (SSI) program, (2) by incurring medical costs that reduce their income and qualify them for Medicaid, or (3) by having long-term care needs that require nursing home or other institutional care. The SSI program provides cash assistance to aged, blind, or disabled individuals with limited income and resources. Those who are enrolled in SSI generally are eligible for Medicaid. Individuals who incur high medical costs may “spend down” into Medicaid eligibility because these expenses are deducted from their countable income. Spending down may bring their income below the state- determined income eligibility limit. 
Such individuals are referred to as “medically needy.” As of 2000, 36 states had a medically needy option, although not all of these states extended this option to the aged and disabled or to those needing nursing home care. Individuals can qualify for Medicaid if they reside in nursing facilities or other institutions in states that have elected to establish a special income level under which individuals with incomes up to 300 percent of the SSI benefit ($1,737 per month in 2005) are Medicaid-eligible. Individuals eligible under this option must apply all of their income, except for a small personal needs allowance, toward the cost of nursing home care. The National Association of State Medicaid Directors reported that, as of 2003, at least 38 states had elected this option. SSI policy serves as the basis for Medicaid policy on the characterization of assets—income and resources. Income is something, paid either in cash or in-kind, received during a calendar month that is used or could be used to meet food, clothing, or shelter needs; resources are cash or things that are owned that can be converted to cash. (Table 1 provides examples of different types of assets.) States can decide, within federal standards, which assets are countable or not. For example, states may disregard certain types or amounts of income and may elect not to count certain resources. In most states, to be financially eligible for Medicaid long-term care services, an individual must have $2,000 or less in countable resources ($3,000 for a couple). However, specific income and resource standards vary by eligibility category (see table 2). The Medicaid statute requires states to use specific income and resource standards in determining eligibility when one spouse is in an institution, such as a nursing home, and the other remains in the community (referred to as the “community spouse”). 
This enables the institutionalized spouse to become Medicaid-eligible while leaving the community spouse with sufficient assets to avoid hardship. Resources. The community spouse may retain an amount equal to one- half of the couple’s combined countable resources, up to a state-specified maximum resource level. If one-half of the couple’s combined countable resources is less than a state-specified minimum resource level, then the community spouse may retain resources up to the minimum level. The amount that the community spouse is allowed to retain is generally referred to as the community spouse resource allowance. Income. The community spouse is allowed to retain all of his or her own income. States establish a minimum amount of income—the minimum monthly maintenance needs allowance (for this report we will refer to it as the minimum needs allowance)—that a community spouse is entitled to retain. The amount must be within a federal minimum and maximum standard. If the community spouse’s income is less than the minimum needs allowance, then the shortfall can be made up in one of two ways: by transferring income from the institutionalized spouse (called the “income- first” approach) or by allowing the community spouse to keep resources above the community spouse resource allowance, so that the additional funds can be invested to generate more income (the “resource-first” approach). Federal law limits Medicaid payments for long-term care services for persons who dispose of assets for less than fair market value within a specified time period to satisfy financial eligibility requirements. As a result, when an individual applies for Medicaid coverage for long-term care, states conduct a review, or “look-back,” to determine whether the applicant (or his or her spouse, if married) transferred assets to another person or party and, if so, whether the transfer was for less than fair market value. Generally, the look-back period is 36 months. 
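The spousal resource rules described above follow a simple bounded formula: the community spouse retains half of the couple’s combined countable resources, subject to a state-specified minimum and maximum. The following is a minimal Python sketch; the dollar thresholds are hypothetical placeholders, not actual federal or state figures.

```python
# Sketch of the community spouse resource allowance computation.
# state_min and state_max are hypothetical placeholders; actual levels
# are set by each state within federal standards.

def community_spouse_resource_allowance(combined_resources: float,
                                        state_min: float = 25_000,
                                        state_max: float = 100_000) -> float:
    """Resources the community spouse may retain."""
    half = combined_resources / 2
    # Keep half of combined countable resources, but never less than the
    # state minimum or more than the state maximum.
    return min(max(half, state_min), state_max)

print(community_spouse_resource_allowance(60_000))   # 30000.0 (exactly half)
print(community_spouse_resource_allowance(30_000))   # 25000.0 (raised to the minimum)
print(community_spouse_resource_allowance(300_000))  # 100000.0 (capped at the maximum)
```

Resources above the allowance are generally deemed available to pay for the institutionalized spouse’s care.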
If an asset transfer for less than fair market value is detected, the individual is ineligible for Medicaid long-term care coverage for a period of time, called the penalty period. The penalty period is calculated by dividing the dollar amount of the assets transferred by the average monthly private-pay rate for nursing home care in the state (or the community, at the option of the state). For example, if an individual transferred $100,000 in assets, and private facility costs averaged $5,000 per month in the state, the penalty period would be 20 months. The penalty period begins at approximately the date of the asset transfer. As a result, some individuals’ penalty periods have already expired by the time they apply for Medicaid long- term care coverage, and therefore they are eligible when they apply. Federal law exempts certain transfers from the penalty provisions. Exemptions include transfers of assets to the individual’s spouse, another individual for the spouse’s sole benefit, or a disabled child. Additional exemptions from the penalty provisions include the transfer of a home to an individual’s spouse, or minor or disabled child; a sibling residing in the home who meets certain conditions; or an adult child residing in the home who has been caring for the individual for a specified time period. Transfers do not result in a penalty if the individual can show that the transfer was made exclusively for purposes other than qualifying for Medicaid. Additionally, a penalty would not be applied if the state determined that it would result in an undue hardship, that is, it would deprive the individual of (1) medical care such that the individual’s health or life would be endangered or (2) food, clothing, shelter, or other necessities of life. 
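The penalty-period calculation above is a straightforward division; this sketch reproduces the worked example from the text.

```python
# Penalty period (in months) = assets transferred for less than fair market
# value, divided by the state's average monthly private-pay nursing home rate.

def penalty_period_months(amount_transferred: float,
                          avg_monthly_private_pay_rate: float) -> float:
    return amount_transferred / avg_monthly_private_pay_rate

# The example from the text: $100,000 transferred, $5,000/month average rate.
print(penalty_period_months(100_000, 5_000))  # 20.0
```

Because the period starts at approximately the transfer date, a 20-month penalty that began 20 or more months before the Medicaid application would already have expired.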
Elderly households’ asset levels varied on the basis of level of disability, marital status, and gender; additionally, the extent to which elderly households transferred cash varied with the level of household assets and these same demographic factors. In general, disabled elderly households had lower asset levels than nondisabled elderly households, and the asset levels decreased as the level of disability increased. Elderly couples made up 46 percent of elderly households and had higher levels of assets than single elderly; single elderly females, who made up 41 percent of elderly households, generally had lower assets than single elderly males, who made up 13 percent of elderly households. For all elderly households, the higher their asset levels, the more likely they were to have reported transferring cash to another individual. Elderly households with both incomes and nonhousing resources above the elderly household median were responsible for over one-half of all transfers made. Overall, severely disabled elderly households—those reporting three or more limitations in ADLs—were less likely to transfer cash than nondisabled elderly households. Single individuals were less likely to transfer cash than couples, and single males had a higher likelihood of transferring cash than single females. According to data from the 2002 HRS, total income for the nation’s approximately 28 million elderly households was $1.1 trillion and total nonhousing resources were $6.6 trillion. Approximately 80 percent of elderly households had annual incomes of $50,000 or less. (See fig. 1.) The median annual income for all elderly households was $24,200 and ranged from $0 to $1,461,800. About half of all elderly households had nonhousing resources of $50,000 or less, while almost 20 percent had nonhousing resources greater than $300,000. (See fig. 2.) For all elderly households, median nonhousing resources were $51,500 and ranged from less than zero to $41,170,000. 
In terms of total resources, elderly households had median total resources of $150,000, ranging from less than zero to $41,640,000, and a primary residence with a median net value of $70,000, ranging from less than zero to $20,000,000. Disabled elderly households—which are at higher risk of needing long- term care—had lower levels of assets than nondisabled elderly households. Generally, as the level of disability increased, the level of assets decreased. Severely disabled elderly households, which made up about 6 percent of total elderly households, had significantly lower median income ($13,200) and median nonhousing resources ($3,200) compared with all elderly households ($24,200 and $51,500, respectively). (See fig. 3.) Elderly couples, which made up approximately 46 percent of elderly households, had higher levels of assets than single elderly individuals. Of the single elderly, males, who made up approximately 13 percent of elderly households, were generally likely to be better off financially than females, who made up approximately 41 percent of elderly households. (See fig. 4.) The likelihood that elderly households transferred cash and the amounts they transferred varied with the level of assets held and demographic characteristics, such as the level of disability, marital status, and gender. Approximately 6 million, or about 22 percent, of all elderly households reported transferring cash during the 2 years prior to the HRS survey. Almost all of these cash transfers were made to children or stepchildren. Of the elderly households that transferred cash, the median income was $37,000 and ranged from $0 to $725,600; median nonhousing resources were $128,000 and ranged from less than zero to $12,535,000. Generally, elderly households with higher asset levels were more likely to have transferred cash than households with lower asset levels (see fig. 5). 
Among the 22 percent of elderly households that reported having transferred cash in the 2 years prior to the survey, nondisabled elderly households and couples were the most likely to have done so. Among disabled elderly households, severely disabled households were the least likely to transfer cash. With regard to the amounts transferred, among single elderly individuals, males were more likely to transfer larger amounts of cash than females, with median cash transfer amounts of $4,500 and $3,000, respectively. (See table 3.) Transfers of cash were also more likely to occur in households with higher income and resource levels. Elderly households with both income and resources above the median—approximately 37 percent of all elderly households—were the most likely to transfer cash. In contrast, elderly households with both income and resources at or below the median were the least likely to transfer cash. With regard to amounts of cash transferred, the median amounts transferred for elderly households with both income and resources above the median were twice as high ($4,000) as those for elderly households with both income and resources at or below the median ($2,000). (Table 4 shows the cash transferred by elderly households in relation to the median income and resource levels.) Methods elderly individuals use to reduce their countable assets do not always result in a penalty period. Reducing debt and making purchases, such as for home modifications, for example, do not result in a penalty period and thus would not lead to delays in Medicaid eligibility for long-term care coverage. Other methods, however, could result in a penalty period, depending on the specific arrangements made and the policies of the individual state.
For example, giving away assets as a gift generally results in the imposition of a penalty period, but giving away assets valued at less than the average monthly private-pay rate for nursing home care may not, depending, in part, on whether the state imposes partial-month penalties. Some methods individuals use to reduce their countable assets do not result in a penalty period and thus would not lead to delays in eligibility for Medicaid long-term care coverage. According to several elder law attorneys and some state officials we contacted, one of the first methods Medicaid applicants use to reduce assets is to spend their money, often by paying off existing debt, such as a mortgage or credit card bills, or by making purchases. When such purchases and payments convert a countable resource, such as money in the bank, to noncountable resources, such as household goods, they effectively reduce the assets that are counted when determining Medicaid eligibility. Common purchases mentioned included renovating a home to make it more accessible for persons with disabilities, repairing or replacing items such as a roof or carpeting, prepaying burial arrangements, buying a home, or having dental work done. Elder law attorneys explained that once individuals are Medicaid-eligible, they and their families will have limited means. Therefore, they advise these individuals to update, renovate, repair, or replace old or deteriorating items such as homes and cars to reduce the need for maintenance and repairs in the future. No penalty is associated with paying a debt or making a purchase as long as the individual receives something of roughly the same value in return. Another method married individuals use that does not result in a penalty period is seeking to raise the community spouse’s resource allowance above a state’s maximum level, which reduces the amount of income or resources considered available to the spouse applying for Medicaid coverage. 
States establish, under federal guidelines, a maximum amount of resources that a community spouse is allowed to retain. In general, the remaining resources are deemed available to be used to pay for the institutionalized spouse’s long-term care needs. In addition, if the community spouse’s income is less than the state’s minimum needs allowance, the state can choose to make up the shortfall by (1) transferring income from the institutionalized spouse or (2) allowing the community spouse to keep resources above the resource allowance so that the additional funds can be invested to generate more income. Under the latter approach, the community spouse may be able to retain a significant amount of resources in order to yield the allowable amount of income. For example, a community spouse might ask to retain a savings account with $300,000 and an annual interest rate of 2 percent that would yield an additional $500 in income per month. Some of the other methods elderly individuals use to reduce their countable assets could result in a penalty period and thus could delay Medicaid coverage for long-term care services, according to the elder law attorneys and state and federal officials we contacted. Whether or not an asset reduction method results in a penalty period depends on the specific arrangements made and the policies of the state. Therefore, the extent to which each of the following methods is used is likely to vary by state. Gifts. Under this method, an individual gives some or all assets to another individual as a gift, for example, by giving his or her children a cash gift. Although this is probably the simplest method to reduce assets, some elder law attorneys told us that this method would be one of the last things a person would want to do. Not only would the individual lose control of his or her assets, but giving a gift would likely be a transfer for less than fair market value and therefore result in a penalty period. 
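The resource-first example above can be checked with a little arithmetic: the principal the community spouse would need to retain equals the annual income shortfall divided by the rate of return. A sketch, using the 2 percent rate from the example in the text:

```python
# Resource-first approach: principal needed so that investment income
# covers the community spouse's monthly income shortfall.

def principal_needed(monthly_shortfall: float, annual_rate: float) -> float:
    return monthly_shortfall * 12 / annual_rate

# A $500/month shortfall at a 2 percent annual return requires roughly
# $300,000 in retained principal, as in the example above.
print(round(principal_needed(500, 0.02)))  # 300000
```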
As with other asset transfers, if individuals can prove that they gave away their assets exclusively for a purpose other than qualifying for Medicaid long-term care coverage, or if the transfer is to a spouse or a disabled child, then there would be no penalty. Additionally, if a state treats each transfer as a separate event and does not impose penalty periods for time periods shorter than 1 month, then transfers for amounts less than the average monthly private-pay rate for nursing home care in that state do not result in a penalty period. Because the penalty period begins at approximately the date of asset transfer, individuals who meet Medicaid income eligibility requirements can give away about half of their resources and use their remaining resources to pay privately for long-term care, during which time any penalty period would expire. This is often referred to as the “half a loaf” strategy because it preserves at least half of the individual’s resources. Financial Instruments. Some financial instruments, namely annuities and trusts, have been used to reduce countable assets to enable individuals to qualify for Medicaid. Annuities, which pay a regular income stream over a defined period of time in return for an initial payment of principal, may be purchased to provide a source of income for retirement. According to a survey of state Medicaid offices, annuities have become a common method for individuals to reduce countable resources for the purpose of becoming eligible for Medicaid because they are used to convert countable resources, such as money in the bank, to a resource that is not counted, and a stream of income. If converting the resource to an annuity results in individuals’ having countable resources below the state’s financial eligibility requirements, then these individuals can become eligible for Medicaid if their income, including the income stream from the annuity, is within the Medicaid income requirements for the state in which they live.
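The penalty-period arithmetic behind the rules described above, including the “half a loaf” strategy, can be sketched as follows. All dollar amounts and the `penalty_months` helper are hypothetical illustrations of the general rule, not any actual state's formula.

```python
def penalty_months(amount_transferred, monthly_private_pay_rate,
                   partial_month_penalties=False):
    """Length of the penalty period, in months, for a transfer made for
    less than fair market value: the amount transferred divided by the
    state's average monthly private-pay rate for nursing home care.
    States differ on whether fractional months count."""
    months = amount_transferred / monthly_private_pay_rate
    return months if partial_month_penalties else int(months)

# "Half a loaf" (hypothetical numbers): give away roughly half of the
# resources and pay privately for care with the retained half while the
# penalty period (which begins at about the date of transfer) runs out.
resources = 120_000
rate = 5_000                       # assumed average monthly private-pay rate
gift = resources / 2               # 60,000 transferred to family
penalty = penalty_months(gift, rate)               # 12-month penalty period
months_of_private_pay = (resources - gift) / rate  # 12 months of paid care

# In a state without partial-month penalties, a transfer below one
# month's rate produces no penalty at all:
assert penalty_months(4_999, rate) == 0
```

In this sketch the retained half covers exactly the months of the penalty period, so the individual can apply for Medicaid once the penalty has expired, having preserved half of the original resources.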
Married individuals can use their joint resources to purchase an annuity for the sole benefit of the community spouse. Since a community spouse’s income is not counted in a Medicaid eligibility determination, an annuity effectively reduces the countable assets of the applicant. Annuities must be actuarially sound—that is, the expected return on the annuity must be commensurate with the reasonable life expectancy of the beneficiary—or they are considered a transfer of assets for less than fair market value and result in a penalty. Trusts are arrangements in which a grantor transfers property to a trustee with the intention that it be held, managed, or administered by the trustee for the benefit of the grantor or certain designated individuals. The use of trusts as a method of gaining Medicaid eligibility for long-term care services was addressed in 1993 legislation. The law and associated CMS guidance indicate how assets held in a trust, as well as the income generated by a trust, are to be counted in the Medicaid eligibility process. According to CMS, since this legislation was enacted, the use of trusts as a Medicaid asset reduction method has declined. Transfer of Property Ownership. Medicaid allows individuals to transfer ownership of their home, without penalty, to certain relatives, including a spouse or a minor child (under age 21). Other transfers of a home or other property within the look-back period may result in a penalty period if they were for less than fair market value. For example, individuals might transfer ownership of their home while retaining a “life estate,” which would give them the right to possess and use the property for the duration of their lives. According to the CMS State Medicaid Manual, this would be a transfer for less than fair market value and thus would result in a penalty period. Personal Services Contract or Care Agreement. 
Personal services contracts or care agreements are arrangements in which an individual pays another person, often an adult child, to provide certain services. Based on CMS guidance, relatives can be legitimately paid for care they provide, but there is a presumption that services provided without charge at the time they were rendered were intended to be provided without compensation. Under this presumption, payments provided for services in the past would result in a penalty period. “Just Say No” Method. Under this method, the institutionalized spouse transfers all assets to the community spouse, which is permitted under the law. The community spouse then refuses to make any assets available to support the institutionalized spouse and retains all of the couple’s assets. In turn, the institutionalized spouse may seek Medicaid coverage for long-term care. Whether this method results in a delay in Medicaid coverage for long-term care services depends on the policies of the individual state. Promissory Notes. A promissory note is a written, unconditional agreement, usually given in return for goods, money loaned, or services rendered, whereby one party promises to pay a certain sum of money at a specified time (or on demand) to another party. According to CMS and state officials, some individuals have given assets to their children in return for a promissory note as a means to reduce their countable assets. For example, we were told of a case in which a mother gave her daughter money in return for a promissory note with a schedule for repayments. Although the note was scheduled to be repaid during the mother’s expected lifetime, the payment arrangements called for the child to repay only the interest until the final payment, when the entire principal was due. Additionally, each month the mother forgave a portion of the note that equaled slightly less than the average monthly nursing home cost.
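A rough sketch of the forgiveness arithmetic in this example, assuming a state that treats each monthly forgiveness as a separate transfer and does not impose partial-month penalties (all figures are hypothetical):

```python
def penalty_for_forgiveness(forgiven_each_month, monthly_private_pay_rate,
                            months_forgiven):
    """Total penalty (in whole months) when each monthly note forgiveness
    is evaluated as its own transfer and fractional months are dropped."""
    per_event = int(forgiven_each_month // monthly_private_pay_rate)
    return per_event * months_forgiven

rate = 5_000       # assumed average monthly nursing home cost
forgiven = 4_900   # each month, slightly less than the monthly cost

# Evaluated transfer by transfer, no penalty ever accrues ...
assert penalty_for_forgiveness(forgiven, rate, months_forgiven=24) == 0
# ... even though the same total, forgiven at once, would carry a
# substantial penalty:
assert int(forgiven * 24 / rate) == 23
```

As the report notes elsewhere, at least one state responded to such patterns by adding small consecutive transfers together, under certain circumstances, when calculating the penalty period.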
Whether promissory notes result in a delay in Medicaid coverage for long-term care would depend on the specific details of the note and the policies of the state. None of the nine states we reviewed systematically tracked or analyzed data that would provide information on the incidence of asset transfers and the extent to which penalties were applied in their states. Nationwide, all states requested information about applicants’ assets, including transfers of assets, through Medicaid application forms, interviews to determine Medicaid eligibility, or both. The nine states we reviewed generally relied on applicants’ self-reporting of financial information and varied in the amount of documentation they required and in the extent to which they verified the assets reported. According to officials in these states, transfers that were not reported by applicants were difficult to identify. Although officials from the nine states reviewed reported that some individuals transferred assets for purposes of qualifying for Medicaid, these states did not systematically track and analyze data on the incidence of asset transfers or associated penalties. As a result, the states could not quantify the number of people who transferred assets, the assets transferred, or the penalties applied as a result of transfers for less than fair market value. Officials in four of the nine states informed us that they had computer-based systems for recording applicant information, including data on penalties that resulted in a delay in Medicaid eligibility, but they did not regularly analyze these data and thus did not have information available on the number of applicants who transferred assets. One of these states—Hawaii—was able to determine that there were no individuals serving a penalty at the time of our interview.
However, because the state’s system only kept data on applicants currently serving a penalty, the state could not provide us with data on the number of people who had served penalties in the past. One state—Montana—that did not report having a computer-based application system did report collecting several months of data on asset transfers from its counties in the fall of 2004, but a state official told us that as of mid-July 2005, the data had not been analyzed. Although states could not systematically track and analyze asset transfers, state officials were familiar with and had observed different methods that elderly individuals used to transfer assets in their states. For example, state officials frequently identified cash gifts as the most common method used to reduce the amount of countable assets. Some states had taken steps to try to deter the use of financial instruments, such as annuities. For example, two states reported changing their laws to expand the circumstances under which annuities are counted as available resources for purposes of determining Medicaid eligibility for long-term care. Similarly, some states have tried to deter the use of the “Just Say No” method by pursuing financial support from the community spouse or by requiring the institutionalized spouse to take the community spouse to court to recover his or her share of the assets. Some officials commented that as states took actions to identify and prevent methods used to make transfers in order to become eligible for Medicaid long-term care coverage, new ways emerged to make transfers for this purpose that are permitted under the law. For example, one state took action to try to deter multiple small transfers by adding the amount of the transfers together, under certain circumstances, for purposes of calculating the penalty period.
According to this state’s officials, however, some attorneys had advised their clients to transfer very small amounts of money in consecutive months and make one final transfer of a significant amount before applying for Medicaid. Under the state’s policy, these transfers are added together and the penalty period begins at the month of the first transfer, as opposed to the month of the final transfer. As a result, some or all of the penalty period may have expired by the time the applicant applies for Medicaid long-term care coverage. Nationwide, states used the application process—application forms, interviews, or both—to determine the level of assets held by Medicaid applicants and whether applicants transferred assets. Applications in 38 states requested comprehensive information about assets—for example, by requiring applicants to respond to questions regarding whether they had certain types of assets, such as checking accounts or real estate. Another 7 states’ applications requested general information about applicants’ assets, and the remaining 6 states reported relying on the interview process to collect information on assets. Thirty states required in-person or telephone interviews with either the applicant or an applicant’s appointed representative. Table 5 summarizes states’ application processes. (See app. III for more details on the application processes in each state.) Medicaid application forms in 44 states asked applicants to report whether they had transferred assets. Eleven of the 44 states’ applications asked whether applicants had transferred assets in the past 36 months, the required look-back period for most assets; 13 asked applicants whether they had transferred assets in the past 60 months, the required look-back period for trusts; and 17 did both. 
Of the applications in the remaining 3 states, 1 asked about assets ever transferred; 1 asked applicants to report any transfers, including the date of the transfer, on a separate form; and 1 asked about transfers in the prior 30 months. (See app. IV for details on the characteristics of Medicaid application questions related to transfers of assets in each state.) Although the 7 remaining states did not have a question about transfers on their applications, they all required interviews as part of the application process. The nine states we reviewed generally relied on the information applicants reported during the application process—the application, supporting documentation, and interviews—to identify transfers of assets. The states generally required applicants to submit documentation of their assets as part of the application process (see table 6). The type of documentation required varied by type of asset. For example, for trusts, annuities, and life insurance, states generally required a copy of the agreement or policy; for real estate, states generally required a copy of the deed or documentation of the value from a tax assessment or broker. For more liquid assets, such as checking and savings accounts, four of the nine states contacted reported requiring a copy of 1 month’s statements. However, the remaining five states reported requiring or collecting documentation for longer periods of time ranging from 3 months to 3 years. For example, Florida generally collected at least 3 months of bank statements from individuals seeking nursing home coverage, South Carolina required applicants to submit a total of 14 months of statements covering points in time over a 3-year period, and Montana generally collected bank statements dating back 3 years. To verify applicants’ assets, the nine states used other information sources, to varying degrees, in addition to the documentation provided by applicants.
Generally, states were more likely to use verification data sources related to applicants’ possible income, such as the Social Security Administration and unemployment offices, than data sources related to possible resources, such as motor vehicle departments and county assessor offices. For example, seven of the nine states reported using information from an Income and Eligibility Verification System (IEVS), a system that matches applicant-reported income information with data from the Internal Revenue Service, the Social Security Administration, and state wage reports and unemployment benefits, for all or almost all of their applicants. In contrast, five of the nine states used information from county assessor offices that provide information on property taxes and thus property ownership, and four of these states used this source to verify resources for half of their Medicaid applicants or less. (See table 7 for the proportion of applicants for which the nine states used specific sources to verify applicants’ assets.) Regarding transfers of assets, the nine states asked on their Medicaid application forms, in interviews, or both, whether applicants had transferred assets. Officials from the nine states indicated that transfers that are not reported by applicants or a third party are generally difficult to identify. Three of the nine states did not have a process to identify unreported transfers. The remaining six states generally relied on certain indicators from applicants’ asset documentation, the states’ asset verification data, case worker interviews, or a combination of these factors to try to identify unreported transfers. Following are two examples of how states used these indicators: South Carolina asked for the previous 12 months of bank statements and also asked for statements from the 24th and 36th month preceding the application.
South Carolina officials reviewed these bank statements to ascertain whether there had been large reductions in the amount of money in the account over the past 3 years. If a large reduction was detected, the state would ask the applicant for information regarding the use of the money. Ohio officials told us that the state generally relied on case workers’ experience to decide whether additional review was necessary, noting that there are certain indications that a transfer might have occurred, which would prompt additional review of the application. Examples include the opening of a new bank account, an applicant who is living beyond his or her means, and an applicant who recently sold his or her house but reports having no resources. To help states comply with requirements related to asset transfers and Medicaid, CMS has issued guidance primarily through the State Medicaid Manual. The agency has also provided technical assistance, through its regional offices, to individual states in response to their questions; communicated to states through conferences; and funded a special study on the use of annuities to shelter assets. Officials from the majority of CMS regional offices and the nine states we contacted indicated that some additional guidance, such as on the use of financial instruments, would be helpful. CMS officials, however, noted that it would be difficult to issue guidance that would be applicable in all situations given the constantly changing methods used to transfer assets. In response to provisions in the Omnibus Budget Reconciliation Act of 1993, CMS updated the State Medicaid Manual in 1994 to include provisions relating to transfers of assets, including the treatment of trusts. 
The portion of the manual relating to asset transfers and trusts generally includes definitions of relevant terms, such as assets, income, and resources; information on look-back periods; penalty periods and penalties for transfers of less than fair market value; exceptions to the application of such penalties; and spousal impoverishment provisions. The portion of the manual regarding trusts includes other definitions relating specifically to trusts, provisions on the treatment of the different types of trusts (such as revocable and irrevocable), and exceptions to the specified treatment of trusts. CMS is in the process of revising certain policies in the manual related to funeral and burial arrangements. CMS officials were not able to provide a date for when revisions to the manual would be completed and stated that they did not anticipate any major revisions to the asset transfer provisions in the Medicaid manual. CMS has provided additional guidance to states about asset transfers through conferences and one special study: Conferences. CMS officials reported providing states with information on asset transfer issues at its annual Medicaid eligibility conference. At this conference, issues regarding transfers of assets have been discussed as a formal agenda item, in panels on state experiences, or in question and answer sessions. Special study. In 2005, the agency released a report that examined the use of annuities as a means for individuals to shelter assets to become Medicaid-eligible. While this study did not identify a universal recommendation for the policy on annuity use or determine the extent to which the use of annuities is growing or declining, it suggested that annuities established for the purpose of becoming Medicaid-eligible do lead to additional costs for federal and state governments in that individuals may shift assets from countable resources into a resource that is not counted, and into a stream of income. 
In some cases, the use of annuities results in individuals qualifying for Medicaid more quickly. Using the estimated cost of annuities to Medicaid from a sample of five states and an examination of policies regarding annuities in all states, the study estimated that annuities cost the Medicaid program almost $200 million annually. Officials from CMS’s regional offices informed us that they provided technical assistance on asset transfer issues to 29 states over the past year. The types of technical assistance provided to these states ranged from confirming existing Medicaid policy to advising them on ways to address specific asset transfer methods. When asked for examples of the specific issues for which states sought technical assistance, officials in seven regional offices said they had responded to states’ questions about annuities. Other issues for which states requested technical assistance included the treatment of trusts, the policy on spousal impoverishment, and promissory notes. Officials from the majority of CMS regional offices noted that the states in their regions could benefit from additional guidance. Additionally, the majority of states we contacted concurred that guidance related to transfers of assets would be helpful. These states and regional office officials indicated a need for more guidance on topics such as annuities, trusts, and the relationship between asset divestment and spousal impoverishment. CMS central office officials said that the agency faces challenges in issuing guidance that would be applicable to all situations given the constantly changing methods individuals use to transfer assets in a manner that avoids the imposition of a penalty period. CMS officials said that states’ efforts to identify and address asset transfer issues are constantly changing, as methods for reducing countable assets are identified, increase in use, and then diminish. 
For example, CMS officials cited the use of personal care agreements, where the individual applying for Medicaid long-term care coverage hires a family member to perform services, as a practice that at one time was frequently used to transfer assets. In some cases, these agreements paid exorbitant fees for the services provided, and CMS officials provided technical assistance to states to help them limit the use of such agreements, at which point the practice diminished in use. CMS officials maintain that blanket guidance from the agency cannot necessarily address all of the issues that states face. We provided CMS and the nine states in our sample an opportunity to comment on a draft of this report. We received written comments from CMS (see app. V). We also received technical comments from CMS and eight of the nine states, which we incorporated as appropriate. CMS noted that the Medicaid program will only be sustainable if its resources are not drained to provide health care assistance to those with substantial ability to contribute to the costs of their own care. CMS acknowledged, however, the difficulty of gathering data on the extent and cost of asset transfers to the Medicaid program. In particular, CMS commented that the law is complex and that the techniques individuals and attorneys devise to divest assets are ever-changing. CMS reiterated the President’s budget proposal to tighten existing rules related to asset transfers, and associated estimated savings, which we had noted in the draft report. CMS further noted one limitation to our analysis that we had disclosed in the draft report—that the HRS only addressed cash transfers provided to relatives or other individuals. CMS commented that it believes that substantial amounts of assets are sheltered by individuals who transfer homes, stocks and bonds, and other noncash property. 
We agree with CMS’s view that information on such noncash transfers would be valuable, but as we noted in the draft report the HRS does not include such data. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies of this report to the Administrator of the Centers for Medicare & Medicaid Services. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7118 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI. The Health and Retirement Study (HRS) is a longitudinal national panel survey of individuals over age 50 sponsored by the National Institute on Aging and conducted by the University of Michigan. HRS includes individuals who were not institutionalized at the time of the initial interview and tracks these individuals over time, regardless of whether they enter an institution. Researchers conducted the initial interviews in 1992 in respondents’ homes and conducted follow-up interviews over the telephone every second year thereafter. HRS questions pertain to physical and mental health status, insurance coverage, financial status, family support systems, employment status, and retirement planning. For this report, we used the most recent available HRS data (2002), for which the data collection period was February 2002 through March 2003. These data include information for over 18,000 Americans over the age of 50. We limited our analysis to data for households with at least one elderly individual, which we defined as an individual aged 65 or older. 
Thus, the data we used were from a sample of 10,942 individuals (8,379 households) that represented a population of 28.1 million households. From these data we estimated the nationwide level of assets held by households with at least one elderly individual, the extent to which these households transferred cash, and the amounts transferred. Our analysis underestimates the extent to which elderly households transferred assets and the amounts of assets transferred because the study data included only cash transfers, not other types of transfers. HRS also did not assess whether the transfers were related to individuals’ attempts to qualify for Medicaid coverage for long-term care services. To assess the reliability of the HRS data, we reviewed related documentation regarding the survey and its method of administration, and we conducted electronic data tests to determine whether there were missing data or obvious errors. On this basis, we determined that the data were sufficiently reliable for our purposes. To select a sample of states to review in more detail regarding their Medicaid eligibility determination practices, including the process for identifying whether applicants had transferred assets, we assessed the prevalence of five factors in each of the 51 states. 1. The percentage of the population aged 65 and over, which we determined using 2000 census data from the Census Bureau. 2. The cost of a nursing home stay for a private room for a private-pay patient based on data from a 2004 survey conducted for the MetLife Company. 3. The proportion of elderly (aged 65 and over) with incomes at or above 250 percent of the U.S. poverty level, which was based on information from the Census Bureau using the 2000 and 2002 Current Population Surveys. 4. Medicaid nursing home expenditures as reported by states to CMS. 5. 
The availability of legal services specifically to meet the needs of the elderly and disabled, based on membership data from the National Academy of Elder Law Attorneys. For each factor, we ranked the states from low to high (1 to 51) and then summed the five rankings for each state. On the basis of these sums, we grouped the states into three clusters (low, medium, and high) using natural breaks in the data as parameters (see table 8). We then selected three states from each cluster using randomly generated numbers, for a total sample of nine states.

Appendix III: Characteristics of Medicaid Long-Term Care Application Processes, by State

New York: The state had a brief application that did not ask about assets. While the state asked applicants to respond to whether they had certain types of assets, the application was limited with respect to the types of assets applicants were required to address. For example, the application may have only asked about cash, bank accounts, life insurance, real property, and “other.” The state required interviews for applicants who the state deemed to have complex assets, including those who reported transferring assets. The state had applicants complete their application during the interview process with eligibility case workers.

Appendix IV: Characteristics of Medicaid Long-Term Care Applications Related to Transfers of Assets, by State

North Carolina: The state’s application had a specific question about trusts that could be used to indicate whether further review for a transfer of assets was necessary. While the state’s application did not include specific questions regarding transfer of assets, it included a separate form for the applicant to report any transfers of assets, including the date of such transfers. The state’s application asked about transfers within 30 months. Prior to the Omnibus Budget Reconciliation Act of 1993, the federally mandated look-back period for transfers of assets was 30 months.
The state’s application did not ask about transfers of assets. The state had applicants complete their application during the interview process with eligibility case workers. The state’s application asked if an applicant had ever transferred assets. In addition to the contact named above, Carolyn Yocom, Assistant Director; JoAnn Martinez-Shriver; Kaycee Misiewicz; Elizabeth T. Morrison; Michelle Rosenberg; Sara Sills; LaShonda Wilson; and Suzanne M. Worth made key contributions to this report.

In fiscal year 2004, the Medicaid program financed about $93 billion for long-term care services. To qualify for Medicaid, individuals' assets (income and resources) must be below certain limits. Because long-term care services can be costly, those who pay privately may quickly deplete their assets and become eligible for Medicaid. In some cases, individuals might transfer assets to spouses or other family members to become financially eligible for Medicaid. Those who transfer assets for less than fair market value may be subject to a penalty period that can delay their eligibility for Medicaid. GAO was asked to provide data on transfers of assets. GAO reviewed (1) the level of assets held and transferred by the elderly, (2) methods used to transfer assets that may result in penalties, (3) how states determined financial eligibility for Medicaid long-term care, and (4) guidance the Centers for Medicare & Medicaid Services (CMS) has provided states regarding the treatment of asset transfers. GAO analyzed data on levels of assets and cash transfers made by the elderly from the 2002 Health and Retirement Study (HRS), a national panel survey; analyzed states' Medicaid applications; and interviewed officials from nine states about their eligibility determination processes.
In 2002, over 80 percent of the approximately 28 million elderly households (those where at least one person was aged 65 or over) had annual incomes of $50,000 or less, and about one-half had nonhousing resources, which excluded the primary residence, of $50,000 or less. About 6 million elderly households (22 percent) reported transferring cash, with amounts that varied depending on the households' income and resource levels. In general, the higher the household's asset level, the more likely it was to have transferred cash during the 2 years prior to the HRS study. Overall, disabled elderly households--who are at higher risk of needing long-term care--were less likely to transfer cash than nondisabled elderly households. Certain methods to reduce assets, such as spending money to pay off debt or make home modifications, do not result in penalty periods. Other methods, such as giving gifts, transferring property ownership, and using certain financial instruments, could result in penalty periods, depending on state policy and the specific arrangements made. None of the nine states GAO contacted tracked or analyzed data on asset transfers or penalties applied. These states required applicants to provide documentation of assets but varied in the amount of documentation required and the extent to which they verified the assets reported. These states generally relied on applicants' self-reporting of transfers of assets, and officials from these states informed GAO that transfers not reported were difficult to identify. To help states comply with requirements related to asset transfers, CMS has issued guidance primarily through the State Medicaid Manual. CMS released a special study in 2005 to help states address the issue of using annuities as a means of sheltering assets. 
Additionally, CMS officials provide ongoing technical assistance in response to state questions, but noted the challenge of issuing guidance applicable to all situations given the constantly changing methods used to transfer assets in an attempt to avoid a penalty period. In commenting on a draft of this report, CMS noted the complexity of the current law and commented that data on the precise extent and cost of asset transfers to the Medicaid program have been difficult to gather.
The U.S. Navy currently operates 288 surface ships and submarines. Four ship classes, with 23 ships under construction or recently completed, make up 96 percent of the Navy’s fiscal year 2005 budget for new construction shipbuilding. (See table 1.) Navy ships are complex defense systems, using advanced designs with state-of-the-art weapons, communications, and navigation technologies. Ships require many years to plan, budget, design, and build. Like other weapon acquisition programs, ship acquisitions begin with developing a system design. For ships, system design is followed by a detail design phase where specific construction plans are developed. Ship construction follows and typically takes 4 to 7 years. Construction time for other defense systems is much shorter—a fighter aircraft takes about 2 years from start of production to roll out from the factory floor; a tank takes about a year. (See fig. 1.) The long construction times increase the uncertainty that ship cost estimates—and budgets—must provide for. Moreover, the total cost for a ship must be budgeted for in its first year of construction. Provisions are made in the event cost growth occurs during construction. The Navy’s budgeting for cost growth has changed over the past 2 decades. During the early 1970s and through most of the 1980s, the Navy used program cost reserves built into ship construction budgets and the Ship Cost Adjustment process to manage cost growth. During the 1980s, the Navy procured an average of 17 ships each year. In fiscal year 1988, the Navy removed program cost reserves from ship construction budgets and began exclusively using the Ship Cost Adjustment process, shifting funding from shipbuilding programs that were underrunning costs to programs that were overrunning costs. Following the end of the Cold War, the Navy decreased the procurement rate of ships to about 6 per year.
Beginning in fiscal year 1999, cost increases could no longer be covered using the Ship Cost Adjustment process because no shipbuilding program was under cost. In 2001, the process was eliminated, which required the Navy to fund cost growth through the current mechanism of prior year completion bills. The cost of building a ship can be broken down into four main components: labor, material, and overhead associated with the shipbuilders’ contract for the basic ship, and Navy-furnished equipment—that is, items purchased by the Navy and provided to the contractor for installation on the ship. (See table 2.) The shipbuilding contract also includes profit (referred to as fee). Two broad categories of contracts are used to procure ships: fixed-price and cost-reimbursement. Fixed-price contracts provide for a firm price or an adjustable price with a ceiling price, a target price, or both. If the ceiling is reached, the shipbuilder is generally responsible for all additional costs. Cost-reimbursement contracts provide for payment of allowable incurred costs, to the extent prescribed in the contract. If the ship cannot be completed within agreed-upon cost limits, the government is responsible for the additional costs to complete. The level of knowledge, or certainty, in the cost estimates for a ship is key to determining which type of contract to use. Contracts for the first ship of a new class are often negotiated as cost-reimbursable contracts because these ships tend to involve a high level of uncertainty and, thus, high cost risks. Cost-reimbursement contracts were used to procure the San Antonio and Virginia class ships we reviewed. More mature shipbuilding programs, where there is greater certainty about costs, typically use fixed-price contracts with an incentive fee (profit). Fixed-price contracts were used to procure the Arleigh Burke and Nimitz class ships we reviewed.
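The incentive arrangement just described—profit adjusted by comparing actual cost to target cost, with a ceiling beyond which the shipbuilder absorbs all additional costs—can be sketched as follows. The figures and the 50/50 share ratio are hypothetical, not drawn from any Navy contract.

```python
# Illustrative sketch of a fixed-price incentive share formula.
# All figures and the share ratio are hypothetical.

def incentive_price(target_cost, target_profit, actual_cost,
                    govt_share=0.5, ceiling_price=None):
    """Return (profit, price_paid) under a simple share-line formula.

    Savings (actual < target) or overruns (actual > target) are split
    between the government and the shipbuilder according to govt_share.
    A ceiling price, if present, caps the government's total payment.
    """
    overrun = actual_cost - target_cost          # negative = underrun
    shipbuilder_share = 1.0 - govt_share
    profit = target_profit - shipbuilder_share * overrun
    price = actual_cost + profit
    if ceiling_price is not None:
        price = min(price, ceiling_price)        # builder absorbs the rest
    return profit, price

# Underrun: the savings are split, so the shipbuilder's profit rises.
profit, price = incentive_price(100.0, 10.0, 90.0)   # profit 15.0, price 105.0
```

With an overrun, the same formula reduces profit; once the ceiling is reached, further cost comes entirely out of the shipbuilder's pocket, which is the cost-control logic behind using fixed-price contracts for mature programs.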
Both cost-reimbursable and fixed-price incentive-fee contracts can include a target cost, a target profit, and a formula that allows the profit to be adjusted by comparing the actual cost to the target cost. Construction contracts for ships generally include provisions for controlling cost growth with incentive fees, whereby the Navy and the shipbuilder split any savings when the contract cost is less than its anticipated target. Conversely, when costs exceed the target, the excess is shared between the Navy and the shipbuilder. Ship cost growth continues to pose additional funding demands on the budget. Budgets for the eight case study ships alone have required increases of $2.1 billion, and Congress has appropriated funds to cover these increases. However, the total projected cost growth on contracts for the eight ships is likely to be higher. Consequently, the Navy will need in excess of $1 billion in additional appropriations to cover the total projected cost growth. Cost growth was more pronounced for the lead ships in the two new classes we looked at—the Virginia class and especially the San Antonio class—than for the more mature Arleigh Burke and Nimitz classes. (Our forecasts for cost growth on all ships that are more than 30 percent complete are shown in appendix VI.) The fiscal year 2005 budget for the eight case study ships was about $20.6 billion—representing cost growth of $2.1 billion above the initial budget request of $18.5 billion for these ships. (See table 3.) Ship construction costs account for the majority of this increase. We were not able to determine how much of this increase was due to changes in the scope of the contract and how much of the growth funded increases in the costs of completing the initial contract scope. Amounts identified by shipbuilders and Navy program offices differed substantially. However, the initial program budgets included funding to support changes in the scope of the construction contract.
These funds amounted to a small share of the initial program budget: 3 percent for DDGs 91 and 92; 5 percent for CVN 76 and CVN 77; 7 percent for LPD 17 and 4 percent for LPD 18; and 3 and 4 percent for SSNs 774 and 775, respectively. While the Congress has appropriated funds to cover a $2.1 billion increase in the ships’ costs, more funds will likely be needed to cover additional cost growth for these eight ships. At the time we completed our analysis in 2004, we calculated a range of the potential growth for the eight case study ships and found that the total projected cost growth would likely exceed $2.8 billion and could reach $3.1 billion. (See table 4.) These cost growth estimates have already proven to be too low. In its fiscal year 2006 budget submission, the Navy recognizes additional cost growth of $223 million for SSN 775 and $908 million for CVN 77 above its fiscal year 2005 request. In addition, our estimates assumed that the shipyards will maintain their current efficiency through the end of their contracts and meet scheduled milestones. Any slips in efficiency and schedules would likely result in added costs. For example, the delivery date for SSN 775 is expected to slip by as much as 9 months, which, according to the fiscal year 2006 President’s budget, has increased the final cost of the ship even more. According to program officials, the delivery date for the LPD 17 has been changed from December 2004 to May 2005, and the delivery date for the CVN 77 is expected to slip into 2009. Cost growth on new ships has a number of implications. Most tangible, perhaps, is the significant portion of the ship construction budget that must be devoted to overruns on ships already under construction. From fiscal years 2001 to 2005, 5 to 14 percent of the Navy’s ship construction budget, which totaled about $52 billion over the 5-year period, went to pay for cost growth for ships funded in prior years.
This reduces the buying power of the budget for current construction and can slow the pace of modernization. The Navy is in the early stages of buying a number of advanced ships, including the Virginia class submarine, DD(X) destroyer, CVN 21 aircraft carrier, and Littoral Combat Ship. The Navy’s ability to buy these ships as scheduled will depend on its ability to control cost growth. Increases in labor hour and material costs account for 78 percent of the cost growth on shipbuilding construction contracts, while overhead and labor rate increases account for 17 percent. Navy-furnished equipment—including radars and weapon systems—represents just 5 percent of the cost growth. (See fig. 2.) Shipbuilders cited a number of direct causes for the labor hour, material, and overhead cost growth in the eight case study ships. The most common causes were related to design modifications, the need for additional and more costly materials, and changes in employee pay and benefits. Labor hour increases for the eight case study ships ranged from 33 percent to 105 percent—for a total of 34 million extra labor hours. For example, the shipbuilders for LPD 17 and CVN 76 each needed 8 million additional labor hours to construct the ships. Cost growth due to increased labor hours totaled more than $1.3 billion. (See table 5.) While the total dollars were the greatest for LPD 17 ($284 million), the labor cost as a percent of total cost growth was the greatest for DDG 91 (105 percent). The lack of design and technology maturity led to rework, increasing the number of labor hours for most of the case study ships. For example, the design of LPD 17 continued to evolve even as construction proceeded. When construction began on DDG 91 and DDG 92—the first ships to incorporate the remote mine hunting system—the technology was still being developed. As a result, workers were required to rebuild completed areas of the ship to accommodate design changes.
Most of the shipbuilders cited a lack of skilled workers as a driver behind labor hour cost growth. According to the shipbuilders we interviewed, many of the tasks needed to build ships are complex and require experienced journeymen to carry them out efficiently. Yet, the majority of the shipbuilders noted that the shipyards have lost a significant portion of their highly skilled and experienced workers. Delays in delivery of materials also resulted in increased labor hours. Table 6 shows the reasons for labor hour increases for each case study ship. For several of the case study ships, the costs of materials increased dramatically above what the shipbuilder had initially planned. (See table 7.) Materials cost was the most significant component of cost growth for three ships: LPD 17, SSN 775, and CVN 76. However, for LPD 17, which experienced over 100-percent growth in material costs, 70 percent of the material cost increases were actually costs for subcontracts to support design of the lead ship. Growth in materials costs was due, in part, to the Navy’s and shipbuilders’ underbudgeting for these costs. For example, the materials budget for the first four Virginia class submarines was $132 million less than quotes received from vendors and subcontractors. The shipbuilder agreed to take on the challenge of achieving lower costs in exchange for a contract provision that the shipbuilder would be reimbursed for cost growth in high-value, specialized materials. In addition, the materials budget for CVN 76 and CVN 77 was based on an incomplete list of materials needed to construct the ship, leading to especially sharp increases in estimated materials costs.
In this case, the Defense Contract Audit Agency criticized the shipbuilder’s estimating system, particularly the system for material and subcontract costs, and stated that the resulting estimates “do not provide an acceptable basis for negotiation of a fair and reasonable price.” Underbudgeting of materials has contributed to cost growth recognized in the fiscal year 2006 budget. Price increases also contributed to the growth in materials costs. For example, the price of array equipment on the Virginia class submarines rose by $33 million above the original price estimate. In addition to inflation, a limited supplier base for highly specialized and unique materials made ship materials susceptible to price increases. According to the shipbuilders, the low rate of ship production has affected the stability of the supplier base—some businesses have closed or merged, leading to reduced competition for the products and services they once provided, which may be a cause of higher prices. In some cases, the Navy lost its position as a preferred customer and the shipbuilder had to wait longer to receive materials. With a declining number of suppliers, more ship materials contracts have gone to single- and sole-source vendors. Over 75 percent of the materials for the Virginia class submarines—which were reduced in number from 14 to 9 ships over a 10-year period—are produced by single-source vendors. Spending on subcontracts and leased labor also increased material costs on some case study ships. On LPD 17, for example, subcontracts to support lead ship design accounted for 70 percent of the increase in material costs. Table 8 highlights the various reasons cited for increased materials costs on case study ships. Program overhead costs, which include increases in labor rates, represented approximately 17 percent of the total cost growth for the eight case study ships. (See table 9.)
While increases in overhead dollars totaled more than $1 billion, almost half of the increase was related to growth in labor hours. (See table 9.) Increases in program overhead were largely due to decreased workload at the shipyards. Six of the eight case study ships experienced increased overhead because there were fewer programs to absorb shipyard operation costs. Increases in benefit costs, such as pensions and medical care costs, and labor rate increases—the result of negotiations with labor unions and inflation—also drove up program overhead costs. Table 10 highlights the various reasons cited for increased overhead costs on case study ships. Navy-furnished equipment covers the costs for the technologies and equipment items—such as ship weapon systems and electronics—purchased by the Navy and provided to the contractor for installation on the ship. While Navy-furnished equipment accounts for 29 percent of the budget for the eight case study ships, such equipment accounted for only 6 percent of the total cost growth. According to Navy officials, much of the Navy-furnished equipment is common among many programs and, therefore, benefits from economies of scale. However, the integration and installation of these systems—especially the warfare systems—contributes to cost growth and is captured in the shipbuilders’ costs rather than in Navy-furnished equipment costs. There was considerable variance from program to program. In addition, in some cases, decreases and increases in Navy-furnished equipment were the result of funds being reallocated. For example, the Integrated Warfare System on CVN 77 was originally funded through the shipbuilder construction contract, but was later deleted from the contract in favor of an existing system furnished by the Navy. Navy practices for estimating costs and for contracting and budgeting for ships have resulted in unrealistic funding of programs, and when unexpected events occur, tracking mechanisms are slow to pick them up.
Tools exist to manage the challenges inherent in shipbuilding, including measuring the probability of cost growth when estimating costs, making full use of design and construction knowledge to negotiate realistic target prices, and tracking and providing timely reporting on program costs to alert managers to potential problems. For the eight case study ships, however, the Navy did not effectively employ these tools to mitigate risk. In developing cost estimates for the eight case study ships, Navy cost analysts did not conduct uncertainty analyses to measure the probability of cost growth, nor were independent estimates conducted for some ships—even in cases where major design changes had occurred. Uncertainty analyses and independent estimates are particularly important given the inherent uncertainties in the ship acquisition process, such as the introduction of new technologies and volatile overhead rates over time, which create a significant challenge for cost analysts in developing credible initial cost estimates. The Navy must develop cost estimates as much as 10 years before ship construction begins—before many program details are known. As a result, cost analysts have to make a number of assumptions about certain ship parameters, such as weight, performance, or software, and about market conditions, such as inflation rates, workforce attrition, and supplier base. In the eight case study ships we examined, cost analysts relied on the actual cost of previously constructed ships without adequately accounting for changes in the industrial base, ship design, or construction methods. Cost data available to Navy cost analysts were based on the higher ship construction rates of the 1980s. As a result, these data reflected lower costs due to economies of scale, which were not representative of the lower procurement rates after 1989.
In addition, in developing cost estimates for DDG 91, DDG 92, LPD 17, and SSN 774, cost analysts relied on actual cost data from previous ships in the same or a similar class that were less technologically advanced. By using data from less complex ships, Navy cost analysts tended to underestimate the costs needed to construct the ships. For CVN 76, cost analysts used proposed costs from CVN 74 with adjustments made for design changes and economic factors. However, CVN 74 and CVN 75 were more economical ships because both were procured in a single year—which resulted in savings from economies of scale. While cost analysts adjusted their estimates to account for the single-ship buy, costs increased far beyond the adjustment. Even in more mature programs—like the Arleigh Burke destroyers and the Nimitz aircraft carriers—improved capabilities and modifications made the costs of previous ships in the class less analogous. Other unknowns also led to uncertain estimates in the case study ships. Labor hour and material costs were based not only on data from previous ships but also on unproven efficiencies in ship construction. We found analysts often factored in savings based on expected efficiencies that never materialized. For example, cost analysts anticipated savings through the implementation of computer-assisted design/computer-assisted manufacturing for LPD 17, but the contractor had not made the requisite research investments to achieve the proposed savings. Similar unproven or unsupported efficiencies were estimated for DDG 92 and CVN 76. Changes in the shipbuilders’ supplier base also created uncertainties in the shipbuilders’ overhead costs. Despite these uncertainties, the Navy did not test the validity of the assumptions made by the cost analysts in estimating the construction costs for the eight case study ships, nor did the Navy identify a confidence level for its estimates.
Specifically, it did not conduct uncertainty analyses, which generate values within specified ranges for parameters that are not precisely known. For example, if the number of hours to integrate a component onto a ship is not precisely known, analysts may specify a low and a high value. The estimate will generate costs for these variables along with other variables such as weight, experience, and degree of rework. The result is a range of estimates that enables cost analysts to make better decisions on likely costs. Instead, the Navy presented its cost estimates as unqualified point estimates, suggesting an element of precision that cannot exist early on and obscuring the investment risk remaining for the programs. While imprecision decreases during the program’s life cycle as more information becomes known about the program, experts emphasize that to be useful, each cost estimate should include an indication of its degree of uncertainty, possibly as an estimated range or qualified by some factor of confidence. Other services qualify their cost estimates by determining a confidence level of 50 percent. The Navy also did not conduct independent cost estimates for some ships, although these are required at certain major acquisition milestones. Independent cost estimates can provide decision makers with additional insight into a program’s potential costs—in part because these estimates frequently use different methodologies and may be less burdened with organizational bias. Independent cost analysts also tend to incorporate cost for risk as they develop their estimates, which the Navy cost analysts did not do. As a result, these independent estimates tend to be more conservative—forecasting higher costs than those forecast by the program office. Department of Defense officials considered the CVN 68 and DDG 51 programs mature programs and, therefore, did not require independent estimates.
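The uncertainty analysis described above can be illustrated with a simple Monte Carlo sketch: imprecisely known parameters are drawn from low/high ranges, and the result is a distribution of cost outcomes that can be reported as a range or confidence level rather than an unqualified point estimate. All parameter ranges below are hypothetical, not actual ship data.

```python
# A minimal Monte Carlo sketch of a cost uncertainty analysis.
# Parameter ranges are hypothetical illustrations only.
import random

def simulate_ship_cost(trials=10_000, seed=1):
    random.seed(seed)
    costs = []
    for _ in range(trials):
        # Draw each imprecisely known parameter from a low/high range.
        labor_hours = random.triangular(14e6, 22e6, 16e6)  # low, high, mode
        labor_rate = random.uniform(45.0, 55.0)            # dollars per hour
        material = random.triangular(600e6, 900e6, 700e6)  # dollars
        costs.append(labor_hours * labor_rate + material)
    costs.sort()
    # Report a range (here the 50th and 80th percentiles) instead of a
    # single point estimate, exposing the degree of uncertainty.
    return costs[len(costs) // 2], costs[int(trials * 0.8)]

p50, p80 = simulate_ship_cost()
```

Presenting both percentiles tells a decision maker not just a likely cost but how much risk remains, which is the qualification the report notes was missing from the Navy's point estimates.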
Yet, an independent cost estimate has never been conducted on a CVN 68 class carrier because the program for this class of ships began prior to the establishment of an independent cost-estimating group in DOD. However, Navy officials noted that every carrier is a new program, different from previous carriers. Although an independent cost estimate was conducted for the DDG 51 program, it was conducted in 1993, and since that time, the DDG ships have undergone four major upgrades. The Navy has begun taking some actions to improve its cost estimating capabilities. For example, future programs will be funded at the DOD independent estimators’ level, which should provide a more conservative estimate and include risk analysis. In addition, Navy officials told us that they are in the process of revising cost estimating guidance to include requirements for risk and uncertainty analysis. The degree to which this guidance will enable the Navy to provide more realistic cost estimates for its shipbuilding programs will depend on how it is implemented on individual programs. Uncertainty about costs is especially high for new classes of ships, since new classes incorporate new designs and new technologies. Yet, the Navy’s approach to negotiating contract target prices for construction of the lead ship and early follow-on ships does not manage this uncertainty sufficiently—as evidenced by substantial increases in the prices of the first several ships. Target prices for detail design and construction of the lead and early follow-on ships are typically negotiated at one time. In these cases, the Navy does not make use of knowledge gained during detail design or during construction of the lead ship to establish more realistic prices. When this approach to negotiating prices was used, it also affected the information that was available to the Congress at the time it funded construction of lead and follow-on ships.
Target prices for all of the case study ships increased, but, as shown in table 11, the increase was greater for the two San Antonio class ships and the two Virginia class ships—both new classes of ships. Increases in the target prices of the LPD 17 and LPD 18 were particularly pronounced, reaching 139 and 95 percent, respectively. The realism of target prices reflects the Navy’s approach to negotiating contract prices—the Navy negotiates target prices for the first several ships at a stage of the program when uncertainty is high and knowledge limited. For example, for the San Antonio class ships, the Navy negotiated prices for the detail design and construction of the lead ship (LPD 17) and the first two follow-on ships (LPD 18 and LPD 19) at the same time. Because the Navy negotiated target prices for these ships before detail design even began, the prices for these three ships did not benefit from information gained during detail design about the materials, equipment, or specific processes that would be used to construct the ships. Target prices for the follow-on ships, LPD 18 and LPD 19, did not benefit from knowledge gained in initial construction of LPD 17. In contrast, for the Virginia class ships, the Navy negotiated detail design separately from construction, benefiting from the knowledge gained from detail design in negotiating prices for construction. However, 2 years after negotiating the detail design contract, the Navy negotiated target prices for the SSN 774 and SSN 775, both considered lead ships for the two shipyards involved in constructing submarines. Target prices for the first two follow-on ships, SSN 776 and SSN 777, were agreed on at this time as well. As a result, target prices for these follow-on ships did not benefit from the knowledge gained from constructing the lead ships.
The practice of setting target prices early on affects not only the realism of the contract target prices, but also the realism of the budgets approved by the Congress to fund these contracts. To fund a contract covering both detail design and lead ship construction, the Congress approves authorization and funding in one budget year, before detail design begins. For example, the Congress funded detail design and construction of LPD 17 in the fiscal year 1996 budget. While the follow-on ships, LPDs 19 and 20, were funded in later years, budgets were still unrealistic because the target prices were used as the basis for the budget requests. The size of the budget and the contract conditions can also affect the realism of target prices. In negotiating the contract for the first four Virginia class ships, program officials stated that the target price they could negotiate was limited to the amount included in approved or planned budgets. The shipbuilders said that they accepted a challenge to design and construct these ships for $748 million less than their estimated costs because the contract limited their financial risk. The contract included a large minimum fee (profit), in addition to the incentive fee that would be reduced in the event of cost growth. Moreover, the contract was structured so that the Navy would pay the full cost of increases for specialized, highly engineered components rather than share the cost increases with the shipbuilder. The Navy also was responsible for the full amount of growth in certain labor costs. Recently, the Navy has supported the preparation of more realistic budget requests. Program managers are encouraged to budget to their own estimate of expected costs rather than at target prices that are not considered realistic. For example, for the LPD 17, an acquisition decision memorandum stated that the program will be budgeted to the Cost Analysis Improvement Group estimate.
Also, in negotiating recent contracts for additional Virginia class and San Antonio class ships, the Navy structured the contracts to encourage more realistic target prices. Beyond target prices, shifting priorities and inflation accounting can have a significant impact on the realism of ship budgets. Specifically, budget requests are susceptible to across-the-board reductions to account for other priorities, such as national security and changes in program assumptions. Competing priorities create additional management challenges for programs that receive a reduced budget without an accompanying reduction in scope. For example, during the budget review cycles of 1996 through 2003, the initial cost estimate for DDGs 89-92 was decreased by $119 million—or 55 percent of the total cost growth for the four DDGs. Had the initial estimate not been reduced, the cost growth would have amounted to only $96 million. Inflation rates can also have a significant impact on ship budgets. Until recently, Navy programs used Office of the Secretary of Defense and Office of Management and Budget inflation rates. Inflation rates experienced by the shipbuilding industry have historically been higher. As a result, contracts were signed and executed using industry-specific inflation rates while budgets were based on the lower inflation rates, creating a risk of cost growth from the outset. For the case study ships, the difference in inflation rates, holding all other factors constant, explains 30 percent of the $2.1 billion in cost growth for these ships. In February 2004, the Navy changed its inflation policy, directing program offices to budget with what the Navy believes are more realistic inflation indices. The Navy anticipates that this policy change should help curtail future requests for prior year completion funds.
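The inflation mismatch described above can be quantified with simple compounding: a budget escalated at a lower government-wide rate falls short of a contract executed at a higher industry-specific rate, building in cost growth from the outset. The rates, base cost, and time horizon below are hypothetical, chosen only to show the mechanism.

```python
# Hypothetical illustration of budgeting with a lower inflation rate than
# the rate a shipbuilding contract actually experiences.

def escalate(base_cost, annual_rate, years):
    """Compound a base-year cost forward at a constant annual rate."""
    return base_cost * (1 + annual_rate) ** years

base = 1_000.0                       # base-year cost, $ millions (hypothetical)
budget = escalate(base, 0.02, 6)     # budgeted at a 2% government-wide rate
contract = escalate(base, 0.04, 6)   # executed at a 4% industry rate
shortfall = contract - budget        # cost growth risk built in at the start
```

Even a 2-percentage-point rate gap, compounded over a multiyear construction period, produces a double-digit-percent shortfall before any execution problem occurs, which is why budgeting with realistic industry indices matters.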
While DOD guidance allows some flexibility in program oversight, we found that reporting on contractor performance was inadequate to alert the Navy to potential cost growth for the eight case study ships. With the significant risk of cost growth in shipbuilding programs, it is important that program managers receive timely and complete cost performance reports from the contractors. However, earned value management—a tool that provides both program managers and the contractor insight into technical, cost, and schedule progress on their contracts—was not used effectively. Cost variance analysis sections of the reports were not useful in some cases because they described problems only at a high level and did not address root causes or the contractor’s plans to mitigate them. Earned value management provides an objective means to measure program schedule and costs incurred. Among other requirements, DOD guidance on earned value management requires that “at least on a monthly basis” schedule and cost variances be generated at levels necessary for management control. Naval Air Systems Command, which is considered a center of excellence for earned value management, recommends that cost performance reports be submitted at a minimum on a monthly basis, in part to help the program manager mitigate risk. Officials from the command stressed that because earned value management acts as an early warning system, the longer the time lapse in receiving the cost performance report, the less valuable the data become. However, shipbuilders for the Nimitz and Virginia class ships we reviewed submitted their official earned value management cost performance reports to the Navy on a quarterly basis instead of monthly, delaying the reports—and corrective action—by 3 to 4 months.
Had the reporting been monthly, negative trends in labor and materials on the Virginia class submarine would have been revealed sooner, enabling corrective action to occur quickly in areas of work that were not getting completed as planned. Earlier reporting would also have alerted managers to cost performance problems on the CVN 76 carrier. Because data on actual cost expenditures for CVN 76 were provided incrementally and late, the program manager did not identify a funding shortage until it was too late to remedy the problem. As a result, a contractwide stop-work order was given. LPD 17 also experienced cost and schedule problems. To allow for better tracking of schedule and costs and more timely response to problems, the program manager changed the cost performance reporting requirement from quarterly to monthly. The quality of the cost performance reports, whether submitted monthly or quarterly, was inadequate in some cases—especially with regard to the variance analysis section, which describes any cost and schedule variances and the reasons for them and serves as an official, written record of the problems and actions taken by the shipbuilder to address them. Both the Virginia class submarine and the Nimitz class aircraft carrier programs’ variance analysis reports discussed the root causes for any cost growth and schedule slippage and described how these variances were affecting the shipbuilders’ projected final costs. However, the remaining case study ship programs generally tended to report only high-level reasons for cost and schedule variances with little to no detail regarding root cause analysis or mitigation efforts—making it difficult for managers to identify risk and take corrective action. Finally, the periodic reassessment of the remaining funding requirements on a program and a good-faith estimate at completion—another part of earned value management—were inadequate to forecast the amount of anticipated cost growth.
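The earned value measures discussed above—cost and schedule variance and the estimate at completion—follow standard formulas, sketched below. The monthly figures are hypothetical, not data from any shipbuilding contract.

```python
# A minimal sketch of standard earned value management indicators.
# All input values are hypothetical.

def evm_metrics(bcws, bcwp, acwp, budget_at_completion):
    """Compute basic earned value indicators.

    bcws: budgeted cost of work scheduled (planned value)
    bcwp: budgeted cost of work performed (earned value)
    acwp: actual cost of work performed
    """
    cost_variance = bcwp - acwp          # negative = overrunning cost
    schedule_variance = bcwp - bcws      # negative = behind schedule
    cpi = bcwp / acwp                    # cost performance index
    # A common estimate at completion: price remaining work at the
    # efficiency demonstrated so far, rather than at the original plan.
    eac = acwp + (budget_at_completion - bcwp) / cpi
    return cost_variance, schedule_variance, eac

cv, sv, eac = evm_metrics(bcws=120.0, bcwp=100.0, acwp=125.0,
                          budget_at_completion=1_000.0)
```

This is why reporting frequency matters: each report surfaces the variances and an updated estimate at completion, and a quarterly cadence delays that early warning by months. An optimistic estimate at completion, such as one that ignores the demonstrated performance index, understates likely final cost.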
Managers are required to evaluate the estimate at completion and report it in the cost performance report, updating it when required. The Defense Contract Audit Agency recently observed the importance of the shipbuilders' developing credible estimates at completion and ensuring that all revisions to those estimates are justified and made in a timely manner. However, the shipbuilders' estimates for the study ships tended to be optimistic—that is, they fell at the low end of our estimated cost growth range. Specifically, shipbuilder estimates for four ships that are still under construction were near our low estimate (see fig. 3), leading management to believe that the ships will cost less than is likely to be the case. See appendix VI for more details on the cost growth forecasts for ships currently under construction. The challenge in accurately estimating and adequately funding the construction of Navy ships is framed by the long construction time that cost estimates must account for and the fact that ships must be fully funded in the first year of their construction. Thus, an underestimation of costs, a budget reduction, or an increase in cost creates a need for additional money that must be requested and appropriated. The fact that such requests have been sizable and routine over the years suggests that the Navy can do better in matching the estimated costs of new ship construction with the money it budgets to pay for them. The goal is not necessarily to eliminate all requests for additional funds, for that could lead to overbudgeting or deferring necessary design changes. Rather, the goal is a better match between budgeted funds and costs so that the true impact of investment decisions is known. Our work shows that currently, the Navy's cost estimating, budgeting, and contracting practices do not do a good enough job of providing for the likely costs of building ships.
This is particularly true for first-of-class ships, for which uncertainty is highest. Moreover, when actual costs begin to diverge from budgeted funds, management tools intended to flag variances and enable managers to act early are not always effectively employed. If these practices are to lead to more realistic results—and reduced overruns—they will have to produce and take advantage of higher levels of knowledge. In some cases, improved techniques, such as performing uncertainty analyses on cost estimates, can raise the level of knowledge. In other cases, such as contracting for detail design and construction on first-of-class ships, contracting in smaller steps can allow necessary knowledge to build before major commitments are made. The Navy has recognized the need for a better match between funding and cost and is providing guidance to achieve this match. The success of this guidance will depend on how well it is implemented on individual programs. There are additional steps the Navy can take, which are detailed in our recommendations. Taking these steps now is especially important for the Navy as it embarks on a number of new, sophisticated shipbuilding programs. If a better match between funding and cost is not made, funds needed to cover cost growth will continue to compete with the funds needed for new investments in ships or other capabilities. Difficult budget choices are ahead, making it essential that priorities be set with a clear understanding of the financial implications of different spending and investment alternatives. To the extent that unplanned demands on the budget can be reduced, better informed decisions can be made. We are recommending that the Secretary of Defense take the following seven actions.
To improve the quality of cost estimates for shipbuilding programs and reduce the magnitude of unbudgeted cost growth, we recommend that the Secretary of Defense (1) conduct independent cost reviews for all follow-on ships when significant changes occur in a program, and establish criteria as to what constitutes significant changes to a shipbuilding program; (2) conduct independent reviews of every acquisition of an aircraft carrier; and (3) direct the Secretary of the Navy to develop a confidence level for all ship cost estimates, based on risk and uncertainty analyses. To assure that realistic prices for ship construction contracts are achieved, we recommend that the Secretary of Defense direct the Secretary of the Navy to (1) negotiate prices for construction of the lead ship separately from the pricing of detail design, and price follow-on ships separately from lead ships; and (2) negotiate prices for early ships in the budget year in which the ship is authorized and funded. To improve management of shipbuilding programs and promote early recognition of cost issues, we recommend that the Secretary of Defense direct the Secretary of the Navy to (1) require shipbuilders to submit monthly cost performance reports and (2) require shipbuilders to prepare variance analysis reports that identify root causes of reported variances, associated mitigation efforts, and future cost impacts. DOD agreed with our recommendations to conduct independent reviews of every aircraft carrier and to develop a confidence level for all ship cost estimates, based on risk and uncertainty analysis. DOD partially agreed with our recommendations about contract pricing and cost performance reporting—areas in which the Navy noted it has taken some measures to improve. While the Navy has taken steps in the right direction, we believe more must be done to reduce ship cost overruns, consistent with our recommendations.
We made a recommendation in our draft report that independent reviews be conducted for all follow-on ships when significant changes to the program occur. DOD responded that it will request additional assessments, if needed, after Milestone B. It is important that criteria be established for determining when additional assessments are needed. Programs may undergo several changes after the required estimate; the Arleigh Burke destroyer, for example, has undergone four major upgrades since its only independent estimate, in 1993. We believe DOD needs to establish criteria concerning what significant changes to a program trigger an independent cost estimate and have modified our recommendations accordingly. DOD could clarify whether these changes include baseline, profile, or major systems upgrades, for instance. DOD stated that it will consider, on a case-by-case basis, negotiating detail design separately from the lead ship and negotiating early follow-on ships separately from the lead ship. We believe that this approach should be the normal policy if overruns are to be reduced. Ships represent a substantial investment—more than $1 billion for each destroyer and amphibious transport, about $2.5 billion for the lead ship in the next class of destroyers, $2.5 billion for submarines, and several billion for carriers. Ships costing substantially less—for example, $220 million for each Littoral Combat Ship—are the exception rather than the norm. A realistic target price is important for structuring contract incentives and providing informed budgets to the Congress. Setting prices for the lead ship and follow-on ships together, before detail design has even begun on the lead ship, is unlikely to yield realistic prices. Insight gained into material costs and labor effort even in the first year of detail design will make realistic pricing of the lead ship more feasible.
Similarly, experience gained in the first years of construction can improve the realism of prices for follow-on ships. DOD noted that the Navy is already requiring shipbuilders to submit cost performance reports monthly, with one exception. With the Nimitz class program beginning monthly reporting in March 2006, the Virginia class will be the only program to submit quarterly rather than monthly cost performance reports. DOD states that the Navy has access to labor hour data in the interim. While informal access to timely data is preferable to delayed access, without written, formal cost reporting there is less visibility and accountability between one formal report and the next cost performance report 3 months later. The Virginia class program has experienced significant cost increases and had one of the largest prior year funding requests of the programs we reviewed. LPD 17 and carrier program officials recognized that more frequent formal reporting and review of cost performance helped them better manage cost growth, and they changed their program reporting requirements from quarterly to monthly. Although variance analysis reporting is required as part of cost performance reporting and is being conducted by the shipbuilders, we observed wide variation in the quality of these reports. DOD rightly observes that these reports are one of many tools used by the shipbuilders and DOD to track performance. To be a useful tool, however, we believe it is important that shipbuilders provide the government with detailed analyses of the root causes and impacts of cost and schedule variances. Cost performance reports that consistently provide thorough analysis of the causes of variances, their associated cost impacts, and mitigation efforts will allow the Navy to more effectively manage, and ultimately reduce, cost growth. DOD's detailed comments are provided in appendix VII.
As agreed with your office, unless you announce its contents, we will not distribute this report further until 30 days from its date. At that time, we will send copies to the Secretary of Defense, the Secretary of the Navy, and interested congressional committees. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-4841 or Karen Zuckerstein, Assistant Director, at (202) 512-6785. Key contributors to the report are identified in appendix VIII. Our methodology for all three objectives included a case study analysis of eight ships. These ships were in four ship classes: Virginia class submarines, LPD 17 amphibious transport dock ships, Arleigh Burke destroyers, and Nimitz class carriers. We selected these ship classes and these ships based on data contained in the “Naval Sea Systems Command Quarterly Progress Report for Shipbuilding and Conversion Status of Shipbuilding Programs,” dated July 1, 2003. This report identifies all ships under construction and the progress, in terms of “percent complete,” for each ship. We looked only at new construction and excluded ship conversions. The report identified eight ship classes with ships under construction. In addition to the four ship classes that we studied, the report identified ships in the Seawolf attack submarine, LHD amphibious assault ship, T-AKE cargo ship, and T-AKR vehicle cargo ship classes. We did not review the Seawolf and T-AKR ship classes because construction of these classes was ending and was unlikely to affect future budgets. We did not include ships from the remaining two classes because we limited the ship selection to ships that were more than 30 percent complete, and none of the ships in those two classes met that criterion. We selected two ships per class for the four classes we reviewed.
Where possible, we chose a lead and a follow-on ship. We also considered which shipyards were building these ships in order to get coverage of the major shipyards. We limited the selection to ships more than 30 percent complete so that we had sufficient information on program performance. Three Virginia class submarines, three amphibious ships, two carriers, and 12 destroyers met this criterion. For the Virginia class program, we initially chose SSN 774 and SSN 776, both built and integrated at the Electric Boat shipyard in Connecticut. As we gained knowledge of the program and of Newport News' role in constructing and launching half of the submarines in this class, we substituted SSN 775 for SSN 776. Characteristics of the ships we selected are summarized in table 12. Because a large percentage of the ship construction budget is allocated to fund the shipbuilding contracts, we assessed the shipbuilders' cost performance for the four classes of ships in our study. To make these assessments, we applied earned value analysis techniques to data captured in shipbuilder cost performance reports. We also developed a forecast of future cost growth. For ships currently under construction (and more than 30 percent complete), we compared the initial target costs with the likely costs at the completion of the contracts using established earned value formulas. We based the lower end of our cost forecast range on the costs spent to date plus the forecast cost of work remaining, with the remaining work forecast using the cumulative cost performance index as an efficiency factor. Studies have shown that this method provides a reasonable estimate of the lower bound of the final cost. For the upper end of our cost range, we relied on either the actual costs spent to date plus a forecast of remaining work based on an average monthly cost and schedule performance index, or a cost/percent complete trend analysis, whichever was higher.
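The earned value formulas described above can be sketched as follows. This is an illustrative reading of the methodology, not a reproduction of it: the lower bound applies the cumulative cost performance index (CPI) to remaining work, while the upper bound shown here is one common composite-index variant standing in for the average monthly index and cost/percent-complete trend analysis actually used. All inputs are hypothetical.

```python
# Illustrative sketch of earned value formulas used to bound a contract's
# estimate at completion (EAC). The lower bound follows the cumulative
# CPI method described above; the upper bound is one common
# composite-index variant, shown here as a stand-in for the average
# monthly index / trend analysis. All figures are hypothetical, in
# millions of dollars.

def eac_low(acwp: float, bcwp: float, bac: float) -> float:
    """Actual costs to date plus remaining work priced at the
    cumulative cost efficiency (CPI = BCWP / ACWP)."""
    cpi = bcwp / acwp
    return acwp + (bac - bcwp) / cpi

def eac_high(acwp: float, bcwp: float, bcws: float, bac: float) -> float:
    """Remaining work discounted by both cost and schedule efficiency
    (CPI * SPI), a harsher and therefore higher forecast."""
    cpi = bcwp / acwp
    spi = bcwp / bcws
    return acwp + (bac - bcwp) / (cpi * spi)

# Hypothetical contract: $1,000M budget at completion, $400M of work
# earned, $500M spent, $450M of work scheduled to date.
print(eac_low(500.0, 400.0, 1000.0))          # ~1250: low end of the range
print(eac_high(500.0, 400.0, 450.0, 1000.0))  # ~1344: high end of the range
```

The spread between the two values is the kind of cost growth range the report compares against the shipbuilders' own, typically lower, estimates.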
In order to understand the components of cost growth, we used cost data provided by the shipbuilders for each of the case study ships. In most cases we compared the initial target cost to the current target cost. As a result, some of the increases in target cost could have resulted from additional contract modifications initiated by the Navy, cost overruns due to the shipbuilder, or unanticipated events. Most shipbuilders allocate contract costs into three categories: material costs, labor costs, and overhead costs. We, however, used these data to allocate costs into the following categories: labor hours, material costs, and labor and overhead rates. Since labor costs and overhead costs can change because of both labor hours and labor and overhead rates, we separated the program overhead cost associated with an increase in labor and overhead rates from the program overhead cost associated with an increase in labor hours. This was accomplished by holding each component constant to isolate its impact. After we isolated the program overhead cost associated only with additional labor hours, we added this to the shipbuilders' reported labor cost growth and subtracted it from the shipbuilders' reported overhead cost growth. Our analysis thus captures all costs associated only with overhead and labor rate changes; increases in overhead related to growth in labor hours are captured only in our analysis of labor hour increases. We used the latest cost performance data available to us in July 2004. The latest available cost performance reports for the case study ships were as follows: DDG 91, June 2004; DDG 92, May 2004; CVN 76, July 2003; CVN 77, March 2004; LPD 17 and LPD 18, May 2004; and SSN 774 and SSN 775, July 2004. In order to understand the funding and management practices that contribute to cost growth, we reviewed Navy acquisition guidance and best practices literature for weapons systems construction.
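The "hold each component constant" decomposition described above can be illustrated with a small sketch. The function name and figures are hypothetical, and assigning the cross term (extra hours priced at the rate increase) to the rate piece is one convention among several; the report does not specify which was used.

```python
# Sketch of the "hold each component constant" decomposition described
# above, splitting overhead cost growth into a labor-hour piece and a
# rate piece. Function name and figures are hypothetical; folding the
# cross term into the rate piece is an assumed convention.

def split_overhead_growth(hours0: float, hours1: float,
                          rate0: float, rate1: float):
    """Overhead cost = labor hours * overhead rate.
    Hour effect: the extra hours priced at the original rate.
    Rate effect: the rate increase applied to the new hour total."""
    hour_effect = (hours1 - hours0) * rate0
    rate_effect = hours1 * (rate1 - rate0)
    # Sanity check: the two pieces reconstruct total overhead growth.
    total_growth = hours1 * rate1 - hours0 * rate0
    assert abs((hour_effect + rate_effect) - total_growth) < 1e-6
    return hour_effect, rate_effect

# Hypothetical program: hours grow from 1.0M to 1.2M while the
# overhead rate rises from $40 to $45 per hour.
hour_effect, rate_effect = split_overhead_growth(1_000_000, 1_200_000, 40.0, 45.0)
print(hour_effect)  # 8000000.0 -> attributed to labor hour growth
print(rate_effect)  # 6000000.0 -> attributed to rate increases
```

Under the methodology described above, the hour effect would then be added to the shipbuilder's reported labor cost growth and subtracted from its reported overhead cost growth.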
To better understand the budgeting of ships and the acquisition process, we met with officials at the Navy and the Office of the Secretary of Defense, Comptroller. Based on indicators from our case study analysis that cost estimating practices may contribute to cost growth, we met with cost estimators, including those from Naval Sea Systems Command, the Cost Analysis Improvement Group, and the Navy Cost Analysis Division. We reviewed DOD and Navy cost estimating policies, procedures, and guidance. Additionally, we met with cost estimators from the Naval Air Systems Command, the Air Force, and the Army to compare how Naval Sea Systems Command estimating practices vary from other military cost estimating practices. We interviewed program officials, contracting officers, and shipbuilders and reviewed shipbuilder reports, which included explanations for cost growth. We met with officials at the Supervisor of Shipbuilding offices and the Defense Contract Audit Agency, both at the shipyards and at headquarters, to review their oversight policies, procedures, and practices. We met with Naval Audit Service officials to gain information on earned value management reviews at shipyards. We also reviewed contract documentation and audit reports. Our analysis relied on shipbuilders' earned value data. To establish the reliability of the data, we examined the integrated baseline reviews that are conducted at the beginning of a contract. We also confirmed that the shipbuilders had validated earned value systems. We performed our review from July 2003 to December 2004 in accordance with generally accepted government auditing standards. The Arleigh Burke class destroyer (DDG 51) provides multimission offensive and defensive capabilities and can operate independently or as part of carrier strike groups, surface action groups, and expeditionary strike groups. The DDG 51 class, which is intended to replace earlier surface combatant classes, was the first U.S.
Navy ship designed with a reduced radar cross-section, lowering its detectability and the likelihood of being targeted by enemy sensors and weapons. Originally designed to defend against Soviet aircraft, cruise missiles, and nuclear attack submarines, the ship is to be used in high-threat areas to conduct antiair, antisubmarine, antisurface, and strike operations. As of May 2004, 43 Arleigh Burke destroyers had been delivered to the Navy, with a total of 62 to be delivered by the end of production. Funding for the lead ship (DDG 51) was provided in fiscal year 1985. The lead ship construction contract was awarded to Bath Iron Works in April 1985. With the award of the follow-on ship, DDG 52, to Ingalls Shipbuilding Incorporated, a second shipbuilder was established. The DDG 91 and DDG 92, which are covered in this report, include a number of upgrades, such as the most current Aegis weapon system, installation of a remote mine-hunting system capability, and the introduction of commercially built switchboards. DDG 91 and DDG 92 cost $135 million more than budgeted. (See table 14.) The Congress has appropriated almost $100 million to cover these increases. Construction costs—especially the costs associated with the number of labor hours needed to build the ships—were the major source of cost growth. Navy-furnished equipment, including the Aegis weapon system, was also a significant source of cost growth for the two DDGs, representing 21 percent of the cost growth. Increases in the number of labor hours account for 67 percent of the cost growth on the shipbuilding construction contracts. We found that ship overhead—such as employee benefits and shipyard support costs—and labor rate increases accounted for 21 percent of cost growth. The two DDGs actually underran material costs, due to DDG 91 material cost savings. Labor hour increases account for the majority of the cost growth on DDG 91 and DDG 92. (See fig. 5.)
DDG 91 required almost 1 million additional labor hours and DDG 92 required an additional 2 million hours above the original contract proposal. DDG 91 and DDG 92 incorporated a number of new technologies in their design, including the remote mine-hunting system, which consists of a remotely operated vehicle and a launch and recovery system stored within the ship. To accommodate this system, designers had to make significant structural changes to 26 of the ship's 90 design zones. When construction began on DDG 91 and DDG 92, the remote mine-hunting system's design was not mature. As a result, significant details of the design could not be captured in the shipbuilders' planned contract costs. Moreover, the shipbuilders anticipated that the system's design would be completed in July 1999—several months before the start of ship fabrication in November 1999. However, it was not completed until November 2001, with additional revisions to the design occurring through March 2003. Because the design was changing as installation of the system began, laborers had to reinstall parts of the system, increasing engineering and production hours. As the number of hours to construct the ship increased, total labor costs grew, with the shipbuilder paying for additional employee wages and overhead costs. As table 15 shows, we separated the overhead and labor rate costs associated with the additional hours and added them to the shipbuilders' reported labor cost growth. Our analysis thus captures all costs associated with labor hour growth—including overhead and labor rates. The methodology we used to separate the overhead costs associated with rate increases from those associated with labor hour increases is discussed in appendix I.
According to the shipbuilder, additional labor hours were also needed to complete DDG 91 because many experienced workers had left the trade in favor of higher paying jobs in the area; as a result, less experienced workers took longer to finish tasks and made mistakes that required rework. For DDG 92, workers encountered challenges in building the ship because of a new transfer facility that enabled the shipyard to construct a greater proportion of the ship on land. The ship was constructed using larger subsections, or units. While the shipbuilder expects that the facility will improve efficiency, on DDG 92 workers had to learn new processes and had difficulties aligning the larger units of the ship to one another. Labor hours increased as workers spent additional time realigning and combining the units to make larger sections of the ship. About $38 million of Navy-furnished equipment cost growth is associated with the Aegis weapon system, specifically the purchase of an additional SPY-1D radar used in system testing. The Navy originally planned to move the developmental radar from the engineering and development site to the final testing and certification center. However, the increased complexity involved with the introduction of a new radar and new computing plant required more development time than was originally planned. In order to ensure timely delivery of DDG 91, the Navy procured a second radar for the testing facility, allowing the Navy to finish final development of the radar while beginning testing and certification of the Aegis weapon system computer program. Our analysis shows that program overhead costs and increases in labor rates accounted for approximately 21 percent of the cost growth on the DDG 91 and 92 contracts. Table 15 includes overhead increases that were a consequence of labor hour increases. Table 16 isolates the remaining portion of overhead increases, which was due to increases in rates.
Despite savings from the consolidation of Ingalls Shipyard into Northrop Grumman, overhead rates were about 13 percent higher than anticipated in 2001. According to the shipbuilders, increases in overhead rates can be attributed largely to changes in the shipyards' workloads and employee benefit costs. After the cancellation of the construction contract for a commercial cruise ship due to the company's bankruptcy and the delay in signing the contract for the next generation destroyer, overhead costs had to be absorbed by the remaining contracts at the yard, including the DDG 91. Similarly, on DDG 92, the shipbuilder based its overhead rates on anticipated work from the construction of the next generation destroyer and the San Antonio class ships. When these programs did not materialize as expected, the other programs at the yard absorbed the overhead costs. At both shipyards, health and dental care costs increased. For example, at one shipyard, the shipbuilder negotiated a favorable medical insurance contract, but the insurance company went bankrupt, forcing the shipbuilder to become self-insured—at a higher cost. Both shipbuilders were also affected by labor rate increases. Following a strike at Bath Iron Works, the union negotiated a $1.12 increase in labor rates, a $6 million increase above the costs projected in the contract. For Northrop Grumman, between the initial proposal and the latest estimate, the labor rate increased by $1.50 per hour, for a total impact on DDG 91 of $7 million. As shown in table 17, material cost increases did not represent a major source of cost increases for DDG 91 and DDG 92—largely because the materials were purchased for four ships at one time. However, DDG 92 overran its material budget by $30 million—73 percent of which was due to information technology, small tooling, and other material costs.
Although these costs make up only 17 percent of the material cost budget, they are driven by labor hour usage—as additional labor hours were needed to construct DDG 92, additional tools were needed, raising material costs. Material costs also increased because the shipbuilder began allocating information technology costs to materials, not to overhead, as it had initially done. DDG 91 experienced a $22 million underrun of material costs. According to the shipbuilder, the underrun was due to efficiencies gained through the consolidation of Ingalls Shipyard with nearby Avondale Shipyard—also owned by Northrop Grumman Ship Systems. With the consolidation, Northrop Grumman stated, it could purchase materials for both shipyards—creating cost savings that were not anticipated in DDG 91's original material cost budget. The mission of the Nimitz class nuclear powered aircraft carriers—which are intended to replace the Navy's conventionally powered carriers—is to provide a sustained presence and conventional deterrence in peacetime; act as the cornerstone of joint allied maritime expeditionary forces in crises; and support aircraft attacks on enemies, protect friendly forces, and engage in sustained independent operations in war. Nine Nimitz class nuclear carriers—CVN 68 through CVN 76—have been delivered since acquisition of the first ship in October 1967. CVN 77, the tenth and final ship of the class, is a modified version of CVN 76 and will serve as a transition ship to the next generation of aircraft carriers. Both CVN 76 and CVN 77 incorporated several significant design changes, including a bulbous bow, larger air-conditioning plants, a redesigned island, weapons elevator modifications, and an integrated communications network. The Fiscal Year 2005 President's Budget showed that budgets for the CVN case study ships had increased by $173 million, and the Congress has appropriated funds to cover these increases.
However, based on March 2004 data, we projected that additional cost growth on contracts for the carriers is likely to reach $485 million and could be higher. Therefore, the Navy will need additional appropriations to cover this cost growth. The fiscal year 2005 budget for the carriers is about $9.6 billion—$173 million more than the initial budget request for these ships. (See table 19.) As a result, the Navy has requested $275.4 million through both the prior year completion bill and other financial transfers to fund cost increases on the CVN program. Ship construction costs make up the majority of this increase. On CVN 76, ship construction costs grew by $252 million above the initial budget. As a result of cost growth, CVN 76 was in danger of running out of funding. The program office issued over 75 stop-work orders—including one contractwide stop-work order—to temporarily save funding. Lower priority work was cancelled or halted to avoid further cost growth. While stop-work orders saved money in the short term, they resulted in significant costs later. On CVN 76, some work had to be completed under a post-delivery contract—at a higher cost. We calculated a range of the potential growth for CVN 77 and found that the total projected cost growth is likely to exceed $485 million and could reach $637 million. (See table 20.) Our cost growth estimates have proven to be understated: the Fiscal Year 2006 President's Budget recognizes cost growth of $908 million for ship construction above the prior year's budget request. In addition, our estimates assume that the shipbuilder will maintain its current efficiency through the end of the contracts and meet scheduled milestones. For example, Navy officials told us that delivery of CVN 77 is likely to slip to January 2009, further increasing the final cost of the ship.
Based on 2004 data, increases in labor hour and material costs account for 80 percent of the cost growth on CVN 76 and CVN 77, while the costs for Navy-furnished equipment—including propulsion and weapon systems—declined. (See fig. 7.) Increases in overhead costs accounted for another 23 percent of the cost growth. The shipbuilder cited a number of direct causes for the labor hour, material, and overhead cost growth on the case study ships. The most common causes were related to demands for labor on other programs at the shipyard, the need for additional and more costly materials, and changes in employee pay and benefits. Material costs increased on CVN 76 by $294 million and on CVN 77 by $134 million since the contracts were first awarded. On both CVN 76 and CVN 77, material costs grew in part because the shipbuilder underestimated the original budget for materials. In April 2002—7 years after construction began on CVN 76—about $32 million in errors in material purchase estimates were revealed. CVN 77 has also experienced a significant increase in material costs due to underbudgeting. According to the shipbuilder, a compressed construction schedule on CVN 77 resulted in the budget for materials being established prior to the completion of the carrier's design, and even prior to the completion of design work on certain systems on CVN 76. As a result, the true magnitude of the carrier's material costs was not known at the time of the contract negotiation. Early in CVN 77 construction, however, the shipbuilder reassessed the materials needed for construction in order to have a more realistic estimate of final material costs. The Navy and the Defense Contract Audit Agency recognized the absence of needed information on materials during their reviews of the shipbuilder's proposal and expressed concerns about the adequacy of the cost estimating system. According to Newport News officials, the shipyard and the Defense Contract Audit Agency are working to resolve these concerns.
The shipbuilder is estimating $200 million in material cost increases, and additional funds are being requested to cover this increase. According to the shipbuilder, material cost increases on both CVN 76 and CVN 77 can be attributed to a declining supplier base and commodity price increases. Both carriers' material costs have been affected by a more than 15 percent increase in metals costs, which, in turn, increases the costs of associated components used in ship construction. Moreover, many of the materials used in the construction of aircraft carriers are highly specialized and unique—often produced by only one manufacturer. With fewer manufacturers competing in the market, the materials are highly susceptible to cost increases. Other reasons for material cost increases include the following: expenses of about $20 million for non-nuclear engineering effort subcontracted for in late 1997 and of about $50 million for information services were transferred from overhead to material in the middle of the project, and the expanded use of commercial-off-the-shelf equipment in CVN 77 resulted in additional costs to test the materials to make sure military specifications were met. Costs on both carriers also grew because of additional labor hours required to construct the ships. At delivery, CVN 76 had required 8 million additional labor hours to construct, while CVN 77 has required 4 million additional hours. As the number of hours to construct the ship increased, total labor costs grew, with the shipbuilder paying for additional employee wages and overhead costs. Increases in labor hours were due in part to an underestimation of the labor hours necessary to construct the carriers. The shipbuilder negotiated CVN 76 at approximately 39 million labor hours—only 2.7 million more labor hours than the previous ship, CVN 75. However, CVN 75 was constructed more efficiently because it was the fourth ship of two concurrent ship procurements. (See table 23.)
CVN 76 and CVN 77, in contrast, were procured as single ships. As table 23 shows, single-ship procurement is historically less efficient than two-ship procurement, requiring more labor hours. The shipbuilder and the Navy budgeted the same number of hours to construct CVN 77 as to construct CVN 76, despite forecasts showing that, at 55 percent complete, CVN 76 would need almost 2 million hours above the negotiated hours to complete the ship. To date, CVN 77 is expected to incur over 4 million labor hours more than negotiated. Some of the labor hour increase on CVN 76 occurred as a result of demands for labor on other programs at the shipyard. During construction of CVN 76, 1 million hours of labor were shifted from the construction of the carrier to work on the refueling and overhaul of CVN 68. The Navy deemed the carrier refueling and overhaul effort a higher priority than new ship construction because carriers were needed back in the fleet to meet warfighting requirements. Many of the most skilled laborers were moved to the refueling effort, leaving fewer workers to construct CVN 76. Without many of the necessary laborers, the CVN 76 construction schedule was delayed. In order to meet construction schedule deadlines, employees were tasked to work significant overtime hours. Studies have shown, however, that workers perform less efficiently under sustained high overtime. Problems with late material delivery also led to labor hour increases on both CVN 76 and CVN 77. When material did not arrive on time, the shipbuilder tried to work around the missing item in order to remain on schedule—which is less efficient than if the material had been available when planned. On CVN 77, for example, parts for a critical piping system were delivered over a year late, necessitating work-arounds and resequencing of work and driving labor costs up.
Other reasons for labor hour increases on CVN 76 and CVN 77 include the following:

A 4-month strike in 1999 led to employee shortages in key trades and contributed to a loss of learning because many employees did not return to the shipyard. According to Navy officials, the shipbuilder was given $51 million to offset the strike's impact.

The program schedule required concurrent design, planning, material procurement, and production activities. Additional labor hours were spent responding to design changes, which ultimately affected CVN 77 cost and schedule.

Due to the unavailability of large-sized steel plates, the shipbuilder had to replan the ship's structure so it could be constructed with smaller-sized plates. This required not only extensive redesign but also additional production hours because laborers needed additional time to fit and weld the smaller plates together.

While the total overhead and labor rate costs on both CVN 76 and CVN 77 grew by $232 million over the life of the contract, labor hour increases accounted for over half of that amount. (See table 6.) According to Navy officials, some of the overhead cost growth on CVN 76 can be attributed to three major accounting changes since the contract was awarded in late 1994. While these accounting changes increased overhead costs, they resulted in a reduction of material costs. According to the shipbuilder, overhead cost increases on CVN 77 can be attributed to increases in pension and health care costs. Changes in the shipyard's workload and employee benefit costs also led to overhead cost increases on CVN 77. After delays in signing contracts for a carrier overhaul and the next generation aircraft carrier, overhead costs had to be absorbed by the CVN 77 program. According to the shipbuilder, labor rate increases on CVN 76 resulted from union negotiations following a strike at the shipyard, as well as significant use of overtime labor, which is more expensive than normal hourly wages.
According to Navy officials, between 30 and 40 percent of the work on CVN 76 was done on overtime in 2003. Navy-furnished equipment did not represent an area of cost growth on CVN 76 and CVN 77. On CVN 76, the costs for propulsion equipment decreased by close to $145 million—driving down the overall cost of Navy-furnished equipment. Since 2001, however, costs for Navy-furnished equipment on CVN 77 have grown by $100 million. This growth can be attributed to increases in the cost of the Integrated Warfare System—the carrier's combat system. The Integrated Warfare System included a new phased array radar that was being developed by the next generation destroyer program. However, when the radar technology did not become available as planned, the Navy decided to install a legacy system on the ship. Because the shipbuilder was supposed to buy and install the Integrated Warfare System as part of the original contract scope, the costs for the system were removed from the contract and used by the Navy to procure a legacy system as Navy-furnished equipment. The San Antonio class amphibious transport dock ship is designed to transport Marines and their equipment and allow them to land using helicopters, landing craft, and amphibious vehicles. The class is expected to increase operational flexibility and survivability over each ship's 40-year lifespan and to operate at lower cost than previous amphibious transport ship classes. The new class is also designed to reduce crew size and provide significant improvements in command, control, communications, computers, and intelligence, as well as in quality of life. In acquiring LPD 17, the lead ship in the class, a three-dimensional computer-aided design tool and a shared data tracking system have been used.
The shared data tracking system was intended to provide significant savings within the San Antonio class program through the reuse of critical data in future design, construction, and operational activities. We focused our review on LPD 17 and LPD 18. Budgets for the two LPD case study ships have grown by $1 billion, and funds have been appropriated to cover these increases. However, the Navy could need additional appropriations of $200 million to $300 million to fund projected cost growth. For detail design and construction of LPD 17, the Congress initially appropriated $953.7 million to fund the construction contract (the basic contract plus a budget for future changes) and acquisition of Navy-furnished equipment. The Congress later appropriated $762 million to fund LPD 18 construction. (See table 26.) Since that time, the Congress has appropriated $1 billion to cover the increases in the ships' costs. However, more funds will likely be needed to cover additional cost growth for these two ships. We project that, if the current schedule is maintained, total cost growth for LPD 17 and LPD 18 will exceed $1.2 billion and could possibly reach $1.4 billion. (See table 27.) Our cost growth estimates—both low and high—are likely understated because we assumed that the shipyards will maintain their current efficiency through the end of their contracts and meet scheduled milestones. LPD 17 did not meet the planned December 2004 delivery date. Delivery is now scheduled for May 2005, increasing the final cost of the ship. Increases in labor hour and material costs account for 76 percent of the cost growth on the LPD 17 and LPD 18 construction contracts. Navy-furnished equipment—including radars, propulsion equipment, and weapon systems—represents just 2 percent of the cost growth. The remaining 22 percent was due to increases in overhead and labor rates. (See fig. 9.)
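The percentage attribution just described (76 percent labor and material, 2 percent Navy-furnished equipment, 22 percent overhead and labor rates) is a straightforward share calculation. A minimal sketch, using hypothetical dollar amounts chosen only to illustrate the arithmetic, not the actual contract figures:

```python
# Illustrative breakdown of contract cost growth into component shares.
# The dollar amounts below are hypothetical placeholders picked so the
# shares match the report's attribution for LPD 17 and LPD 18.

def growth_shares(components: dict[str, float]) -> dict[str, int]:
    """Return each component's share of total cost growth, in whole percent."""
    total = sum(components.values())
    return {name: round(100 * amount / total) for name, amount in components.items()}

# Hypothetical growth amounts, in millions of dollars.
growth = {
    "labor hours and material": 912.0,
    "Navy-furnished equipment": 24.0,
    "overhead and labor rates": 264.0,
}

shares = growth_shares(growth)  # 76, 2, and 22 percent respectively
```

The same helper applies to any of the ship classes in this report; only the component amounts change.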
The shipbuilder cited a number of direct causes for the labor hour, material, and overhead cost growth in the two case study ships. The most common causes were related to the concurrent development of a new and unproven design tool and design of the lead ship, an initial focus on controlling total lifetime costs, and changes in employee pay and benefits. Engineering costs (classified as material costs) associated with use of a three-dimensional product model to design LPD 17 were a key contributor to material cost growth. The design tool was not fully developed, and subsequent problems affected all aspects of the design. Subcontracts for engineering design doubled, accounting for $215 million in cost growth on LPD 17. Development of an integrated production data environment, originally assumed to be funded by the state, has instead been shifted to the contract, representing an additional $35 million in cost spread across the LPD program. (See table 28.) Labor hours, the second largest component of cost growth, increased significantly for LPD 17 and LPD 18. For example, engineering labor hours for LPD 17 increased by over 100 percent from the original proposal. As the number of hours to construct the ship increased, total labor costs grew, with the shipbuilder paying for additional overhead costs and employee wages. We separated the overhead and labor rates associated only with the additional hours and added this to the shipbuilder's reported labor cost growth. (See table 29.) Our analysis captures all cost growth associated with labor—including labor hours, overhead, and labor rates. Factory inefficiencies and the loss of skilled laborers, including significant employee attrition (35 percent annually), contributed substantially to labor hour increases. Difficulties with the design tool and turnover in engineering staff led to increases in engineering labor hours and delayed achieving a stable design.
Without a stable design, work was often delayed from early in the building cycle to later, during integration of the hull. Shipbuilders stated that doing the work at this stage could cost up to five times the original cost. On LPD 17, 1.3 million labor hours were moved from the build phase to the integration phase. Consequently, LPD 17 took much longer to construct than originally estimated. Moreover, a diminished workforce at Avondale required the busing of shipyard workers from Ingalls Shipyard in Pascagoula, Mississippi, to Avondale in New Orleans, Louisiana, and the subcontracting of skilled labor. While the total overhead costs on both LPD 17 and LPD 18 grew by $0.5 billion over the life of the contract, labor hour increases contributed about half of that amount. (See table 30.) According to Northrop Grumman, increases in overhead costs not related to labor hour growth can be attributed largely to changes in the shipyard's workload and employee benefit costs. Beginning in 2001, the shipyard experienced a rise in overhead rates. For example, the overhead rates in Northrop Grumman's latest 2004 estimate are 39 percent higher than what was originally proposed for LPD 17 in 1996. Several factors helped to increase overhead. For example, due to the loss of the bulk military cargo T-AKE ship, the cancellation of the construction of a commercial cruise ship (American Classic Voyages), and the delay in signing the contract for the next generation destroyer, overhead costs had to be absorbed by the remaining contracts at the yard, including the LPD contracts. This accounted for 36 percent of the increase in overhead rates—24 percent for the T-AKE and cruise ship and 12 percent for DD(X). According to the shipyard, changes in the financial market that affected the pension fund, together with the rise in medical care costs, were responsible for 16 percent of the increase in the shipyard's overhead rates.
Labor rates rose due to the inflationary impact of a more than 2-year delay in lead ship delivery, subsequent changes in the procurement schedule, and wage rates negotiated with labor unions. According to program officials, cost growth for Navy-furnished equipment on LPD 17 was due to increased costs for a shock wave test that was not anticipated in the original cost estimate. This cost was a one-time increase, affecting only LPD 17 costs. The Virginia class attack submarine, the newest class of nuclear submarines, is designed to combat enemy submarines and surface ships, fire cruise missiles at land targets, and provide improved surveillance and special operations support to enhance littoral warfare. Because the Virginia class is designed to be smaller than the Seawolf class and slightly larger than the Los Angeles class submarines—ships the new class will eventually replace—the Virginia class is better suited for conducting shallow-water operations. Major features of this new class of submarine include new acoustic, visual, and electronic systems for enhanced stealth. An objective of the Virginia class is to reduce life-cycle costs through better design and engineering, resulting in one-third fewer labor hours than were needed to construct Seawolf (SSN 21), the lead ship in the previous class of attack submarines. The first ship, SSN 774, was delivered in October 2004. Our review focused on SSN 774 and SSN 775. The Fiscal Year 2005 President's Budget showed that budgets for the two Virginia class case study ships have increased by $734 million. However, based on data as of July 2004, we projected that additional cost growth on contracts for the two ships is likely to reach $840 million and could be higher. In its fiscal year 2006 budget, the Navy has requested funds to cover cost increases that are now expected to reach approximately $1 billion.
The fiscal year 2005 budget for SSN 774 and SSN 775 is about $6.2 billion, compared with the initial fiscal year 1998 budget request of $5.5 billion. (See table 32.) Ship construction costs comprise the majority of this increase. While the Congress has appropriated funds to cover the increases in the ships' costs, more funds will be needed to cover additional cost growth for these two ships. In its fiscal year 2006 budget submission, the Navy is requesting an additional $125 million in prior year completion funding between fiscal years 2006 and 2007 for the case study ships. We calculated a range of the potential growth for the two case study ships and found that the total projected cost growth is likely to exceed $724 million and could reach $840 million or higher. (See table 33.) Our cost growth estimates—both low and high—may be understated because we assumed that the shipbuilders will maintain their current efficiency through the end of their contracts and meet scheduled milestones. Any slips in efficiency and schedules would likely result in added costs. For example, the delivery date for SSN 775 is expected to slip by as many as 8.5 months, which could increase the final cost of the ship. Our analysis shows that the submarine contract costs have grown because initial construction costs were underestimated—especially the cost of materials and the number of labor hours needed to build the ships. For the two case study ships we examined, we found that increases in the number of labor hours and material costs to build the submarines accounted for 83 percent of the cost growth on shipbuilding construction contracts. Navy-furnished equipment, including radars, propulsion equipment, and weapon systems, caused 14 percent of the cost growth. We found that ship overhead—such as employee benefits and shipyard support costs—and labor rate increases accounted for 3 percent of cost growth.
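The low-end projections here rest on the assumption that the shipbuilders hold their current efficiency through contract completion. One standard way to express that assumption is an earned-value estimate at completion (EAC), in which the cost performance index (CPI) observed to date is applied to the remaining work. The sketch below illustrates the general method with hypothetical numbers; it is not GAO's actual model or the real contract data.

```python
# Earned-value estimate at completion (EAC) under the assumption that
# the cost efficiency observed so far (CPI) holds for the remaining work.
# All inputs are hypothetical; this sketches the method only.

def estimate_at_completion(budget_at_completion: float,
                           earned_value: float,
                           actual_cost: float) -> float:
    """EAC = actual cost to date + (remaining budgeted work / CPI)."""
    cpi = earned_value / actual_cost              # cost performance index
    remaining_work = budget_at_completion - earned_value
    return actual_cost + remaining_work / cpi

# Hypothetical ship contract: $2,000M budget, $1,200M of work earned,
# $1,500M actually spent so far (CPI = 0.8, i.e., 80 cents of work per dollar).
eac = estimate_at_completion(2000.0, 1200.0, 1500.0)
projected_growth = eac - 2000.0
```

If efficiency slips further, the CPI falls and the projected cost at completion rises, which is why the report characterizes its estimates as likely understated.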
In negotiating the contract for the first four Virginia class ships, program officials stated that they were constrained to a target price equal to the amount funded for the program, thereby risking cost growth at the outset. The shipbuilders said that they accepted a challenge to design and construct these ships for $748 million less than their estimated costs because the contract protected them from financial risk. Despite the significant risk of cost growth, the Navy, based on guidance at the time, did not identify any funding for probable cost growth. We analyzed shipbuilder contract costs to identify the sources of cost growth. Using shipbuilder cost data, we allocated the sources of shipbuilder cost growth on the contract into three categories: labor hours, material costs, and labor and overhead rates. Since labor costs and overhead costs can change due to labor hours, labor rates, and rates associated with individual elements of overhead—or a combination of these—we examined each in isolation by separating the program overhead cost associated with an increase in labor hours from costs that resulted from an increase in overhead rates, such as an increase in health care costs. Due to the high risk that specialized material could not be procured for the amount budgeted, the Navy agreed to purchase this material as a cost-plus-fixed-fee item. This agreement protected the shipbuilder from having to fund any resulting cost increases for highly specialized material. Indeed, material costs increased by $350 million for the two Virginia class submarines we examined.
The Navy and shipbuilders attribute material cost growth to several factors, including unrealistic budgets not supported by current vendor costs, a diminished supplier base for highly specialized materials, nonrecurring costs for computer data integration between shipbuilder teams, a lack of design maturity for certain electronic components, and full funding of ships in the year of authorization. The shipbuilders stated that they based more than 70 percent of their estimate for major material costs on updated vendor quotes, while the Navy relied on historical costs that were not analogous to the low number of submarines being planned for construction. While the Navy knew there would be a price penalty for a 6-year gap in submarine production, there were no studies or actual data to support what the overall effect would be. Thus, Navy cost estimators assumed that costs for major material items would increase by 20 percent. When the Navy negotiated the costs for Virginia class high-value, specialized material, the shipbuilder agreed to take on the challenge of achieving lower costs in exchange for funding these materials on a cost-plus-fixed-fee basis. By the time the lead ship was delivered 8 years later, the true cost increase for highly specialized material was closer to 60 percent more than historical costs. Following the cancellation of the prior submarine program—Seawolf—and a decrease in submarine production from three to four submarines per year to one over a period of 6 years, many vendors left the nuclear submarine business and focused instead on more lucrative commercial product development. As a result, prices for highly specialized material increased due to less competition and a lack of business.
For example, many vendors were reluctant to support the Virginia class submarine contract because the costs associated with producing small quantities of highly specialized materials were not considered worth the investment—especially for equipment with no other military or commercial applications. Material costs also increased due to nonrecurring costs for integrating computer data so that the shipbuilders could work from a common design. In addition, costs to develop high-risk systems like the array and exterior communication system were underestimated. Recognizing the significant cost risk involved, the Navy procured these systems under a separate contract line item that guaranteed the shipbuilders a fixed fee and made the Navy responsible for funding all cost growth. Finally, the Navy believes that the block-buy contract has contributed to increased material costs. Under a block-buy contract, subcontracts for submarine materials are for single ships spread over several years. According to the Navy, this type of acquisition approach does not take advantage of bulk-buy savings and incurs the risk that funding will not be available in time to order the material when needed. In addition, since ships are funded individually, the Navy believes suppliers are unwilling to risk investing in technology improvements due to uncertainty about whether future ships will be purchased. To stabilize the vendor base, the Navy awarded a multiyear contract that commits the Navy to purchasing additional submarines. While a multiyear contract can provide savings, a program must meet criteria demonstrating a sufficient level of stability to justify such a contract. In June 2003, we noted several aspects of the Virginia class program that indicated instability. Another factor to be considered in using multiyear contracts is the budget flexibility the government gives up in exchange for the commitment of funds for the future years of the contract.
Labor cost increases have led to $339 million in cost growth for SSN 774 and SSN 775 combined. Problems with mastering state-of-the-art design tools, first-in-class technical and teaming issues, and material availability all contributed to the labor cost growth. We found that SSN 774 required almost 3 million more labor hours than planned, reflecting a growth of 25 percent. (See fig. 12.) In addition, we found that SSN 775 required almost 4 million more labor hours than planned. Approximately 3.4 million nonrecurring labor hours for SSN 774 were procured on a separate contract line item and therefore not included in our analysis, while some SSN 775 nonrecurring labor hours are embedded in the labor hours for that ship. Technical issues commonly associated with first-in-class ships also contributed to the overall labor cost growth. For example, shipbuilders experienced problems with crossed hydraulic lines on the lead ship. In addition, torpedo tube and weapons handling design issues also contributed to labor hour growth in both ships. Labor hours also increased as quality problems discovered in a component made by one shipyard were reworked by the shipyard integrating the components. Because the shipyard doing the integration was not as familiar with the effort, the work was not completed as efficiently. Late material deliveries also disrupted the work-flow sequence. Because many vendors either went out of business or focused on developing new commercial products in response to low demand, the Navy was no longer considered a preferred customer. In cases where there was no ready supplier, the shipbuilder had to ask former subcontractors to supply the highly specialized material. This caused delays in material deliveries as well as quality problems arising from strict inspection processes with which subcontractors were no longer familiar.
Although the shipbuilders tried to work around late material deliveries when they could, this caused workers to perform less efficiently than they would have if the material had been available when scheduled. Moreover, when the material did arrive, the shipbuilders had to work overtime to make up the schedule, causing additional growth in labor costs. According to Navy program officials, radar costs increased due to additional design effort needed to fix problems associated with the Seawolf program. Other cost increases were driven by changes in how certain items were purchased. For example, the advanced display system was recently established as a line item in the budget, whereas in the past it was paid for as part of the shipbuilder's construction contract. Moreover, the Navy initially planned to use research and development funds to cover costs for the propulsor but switched to ship construction funds instead, leading to an increase in the program's budget for Navy-furnished equipment. Our analysis shows that program overhead costs and increases in labor rates were not significant sources of cost growth—causing approximately 3 percent of the cost growth. To isolate true increases in overhead rates from increases that were a consequence of labor hour increases, we separated the two in table 36. Costs associated with growth in labor hours are shown in the table 35 calculations. According to the shipbuilder, overhead and labor rate increases were related to pension, workers' compensation, and health care costs rising beyond what was expected. Furthermore, when other ship acquisitions did not materialize, shipyard overhead costs were spread over fewer contracts, causing an increase in Virginia class overhead costs. Similarly, the loss of business caused the shipbuilders to lay off skilled workers. According to the shipbuilders, many of the experienced workers did not return to the shipyard. Hiring and training new workers increased costs.
We found that one shipbuilder was affected by labor rate increases. Following a strike at the shipyard, union negotiations resulted in four pay increases totaling an average of $3.10 per hour. This appendix discusses GAO's forecast of future cost growth for all ships in construction that are more than 30 percent complete. The forecast is also compared with the shipbuilders' forecasts of estimated costs at completion.

CVN 76 and CVN 77: CVN 76 was delivered to the Navy in 2003. While we forecast an overrun of up to $586 million over the initial target price for CVN 77, the fiscal year 2006 budget request indicates a need for $870 million in prior year funding.

SSN 774-SSN 777: SSN 774 was delivered to the Navy in October 2004. We found that the contractors' forecasts are unlikely to be achieved, based on continuing cost growth on the remaining three ships. In addition, SSN 776 and SSN 777 are the follow-on ships of a new class and still may experience production problems that could lead to future cost growth.

DDG 91-DDG 101: The DDGs have experienced cost growth at both shipyards. All of the DDGs under construction at Bath Iron Works that are more than 30 percent complete have experienced cost growth. Similarly, cost growth is also expected on the DDGs built by Northrop Grumman.

LPD 17-LPD 20: The LPDs currently under construction are likely to experience significant cost overruns. On all of the LPDs, with the exception of LPD 18, the shipbuilder is estimating overall cost growth at the lower end of our predicted range. Hence, we believe the shipbuilder's forecasts of cost growth are optimistic.

T-AKE: Major cost growth is being predicted for T-AKE 1. We estimate that costs could grow more than $70 million beyond the initial contract price. The shipbuilder believes that escalating material costs resulting from rising commodity prices and unfinalized vendor subcontracts are driving contract cost growth.
LHD 8: LHD 8 also has the potential for significant cost growth—as much as $177 million more than what was anticipated. Cost growth thus far is attributed to increases in overhead and general and administrative costs.

In addition to the contacts named above, Margaret B. McDavid, Christina Connelly, Diana Dinkelacker, Christopher R. Durbin, Jennifer Echard, R. Gaines Hensley, Ricardo Marquez, Christopher R. Miller, Madhav Panwar, Karen Richey, Karen Sloan, Lily Chin, and Marie Ahearn made key contributions to this report.

The U.S. Navy invests significantly to maintain the technological superiority of its warships. In 2005 alone, $7.6 billion was devoted to new ship construction in six ship classes—96 percent of which was allocated to four classes: the Arleigh Burke class destroyer, the Nimitz class aircraft carrier, the San Antonio class amphibious transport dock ship, and the Virginia class submarine. Cost growth in the Navy's shipbuilding programs has been a long-standing problem. Over the past few years, the Navy has used "prior year completion" funding—additional appropriations for ships already under contract—to pay for cost overruns. This report (1) estimates the current and projected cost growth on construction contracts for eight case study ships, (2) breaks down and examines the components of the cost growth, and (3) identifies any funding and management practices that contributed to cost growth. For the eight ships GAO assessed, the Congress has appropriated $2.1 billion to cover the increases in the ships' budgets. GAO's analysis indicates that total cost growth on these ships could reach $3.1 billion or even more if shipyards do not maintain current efficiency and meet schedules. Cost growth for the CVN 77 aircraft carrier and the San Antonio lead ship (LPD 17) has been particularly pronounced. Increases in labor hour and material costs together account for 77 percent of the cost growth on the eight ships.
Shipbuilders frequently cited design modifications, the need for additional and more costly materials, and changes in employee pay and benefits as the key causes of this growth. For example, the San Antonio lead ship's systems design continued to evolve even as construction began, which required rebuilding of completed areas to accommodate the design changes. Material costs were often underbudgeted, as was the case with the Virginia class submarines and Nimitz class aircraft carriers. For the CVN 77 carrier, the shipbuilder is estimating a substantial increase in material costs. Navy practices for estimating costs, contracting, and budgeting for ships have resulted in unrealistic funding of programs, increasing the likelihood of cost growth. Despite inherent uncertainties in the ship acquisition process, the Navy does not account for the probability of cost growth when estimating costs. Moreover, the Navy did not conduct an independent cost estimate for carriers or when substantial changes occurred in a ship class, which could have provided decision makers with additional knowledge about a program's potential costs. In addition, contract prices were negotiated and budgets established without sufficient design and construction knowledge. When unexpected events did occur, incomplete and untimely reporting on program progress delayed the identification of problems and hampered the Navy's ability to correct them.
Over the long term, the imbalance between spending and revenue that is built into current law and policy is projected to lead to continued growth of the deficit and of debt held by the public as a share of GDP. This situation—in which debt grows faster than GDP—means the current federal fiscal path is unsustainable. Projections from the 2016 Financial Report of the United States and the Congressional Budget Office (CBO), and simulations from GAO, all show that, absent policy changes, the federal government's fiscal path is unsustainable. According to the 2016 Financial Report, the federal deficit in fiscal year 2016 increased to $587 billion—up from $439 billion in fiscal year 2015. This marked a change from 6 years of declining deficits. The federal government's receipts (taxes and other collections) increased by $18.0 billion (0.6 percent), from $3,248.7 billion to $3,266.7 billion, but that increase was outweighed by a $166.5 billion increase in spending, from $3,687.6 billion to $3,854.1 billion. Spending increases in 2016 were driven by Social Security (the Old-Age and Survivors Insurance and Disability Insurance programs), Medicare, Medicaid, and interest on debt held by the public (net interest). Debt held by the public was 77 percent of GDP at the end of fiscal year 2016—an increase from 74 percent at the end of fiscal year 2015. Although the federal government has carried debt throughout virtually all of U.S. history, the 2016 Financial Report shows that the current fiscal position is unusual in the nation's history and that debt as a share of the economy is the highest it has been since 1950. As shown in figure 1, debt as a share of GDP peaked at 106 percent just after World War II but then fell rapidly. Since 1946 the debt-to-GDP ratio has averaged 44 percent.
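The fiscal year 2016 figures above are internally consistent: the deficit is total spending less total receipts. A quick check of the arithmetic, using the report's own totals (all amounts in billions of dollars):

```python
# Fiscal year 2015 and 2016 receipts and spending from the
# 2016 Financial Report, in $ billions.
receipts_2015, receipts_2016 = 3248.7, 3266.7
spending_2015, spending_2016 = 3687.6, 3854.1

deficit_2016 = spending_2016 - receipts_2016                  # ~587, the reported deficit
receipts_growth = receipts_2016 - receipts_2015               # 18.0
receipts_growth_pct = 100 * receipts_growth / receipts_2015   # ~0.6 percent
spending_growth = spending_2016 - spending_2015               # 166.5
```

The spending increase ($166.5 billion) exceeds the receipts increase ($18.0 billion) by roughly the amount the deficit widened from 2015 to 2016.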
The long-term fiscal projections in the federal government's 2016 Financial Report and those prepared annually by CBO and GAO each use somewhat different assumptions, but their results are the same: absent policy changes, the federal government's fiscal path is unsustainable, with debt held by the public as a share of GDP projected to grow continuously. Projections show that under current law it will grow to exceed the historical high of 106 percent in 15 to 25 years. (See figure 2.) Both the timing and pace of this growth depend on the underlying assumptions made, especially about health care costs. Under GAO's alternative simulation, debt held by the public as a share of GDP would surpass its historical high of 106 percent by 2032. CBO's extended baseline shows debt held by the public surpassing that level by 2035, and the 2016 Financial Report projections show debt held by the public surpassing 106 percent by 2041. Of further concern is the fact that none of these long-term projections include certain fiscal risks that create fiscal exposures that could affect the government's financial condition in the future. Fiscal exposures are responsibilities, programs, and activities that may legally commit or create expectations for future federal spending based on current policy, past practices, or other factors. Some examples of such fiscal risks include the following: The Pension Benefit Guaranty Corporation's (PBGC) financial future is uncertain because of long-term challenges related to PBGC's governance and funding structure. PBGC's liabilities exceeded its assets by over $79 billion as of the end of fiscal year 2016—an increase of over $3 billion from the end of fiscal year 2015 and of about $44 billion since 2013. PBGC reported that it is subject to potential further losses of $243 billion if plan terminations that are considered reasonably possible occur. The U.S.
Postal Service (USPS) continues to be in a serious financial crisis as it has reached its borrowing limit of $15 billion and finished fiscal year 2016 with a reported net loss of $5.6 billion. USPS’s business model is not viable and cannot fund its current level of services, operations, and obligations. USPS’s liabilities exceeded its assets by $56 billion as of the end of fiscal year 2016 and USPS reported an additional $39.5 billion in unfunded liabilities at that time for its retiree health and pension funds. USPS reported a total unfunded liability for its retiree health and pension funds of $73.4 billion, $33.9 billion of which relates to required prefunding payments for postal retirees’ health benefits that have not been made and is included in the liabilities reported on its balance sheet. Some government insurance programs such as the National Flood Insurance Program do not have sufficient dedicated resources to cover expected costs. The Federal Emergency Management Agency (FEMA), which administers the National Flood Insurance Program, owed $24.6 billion as of March 2017 to the Department of the Treasury (Treasury) for money borrowed to pay claims and other expenses, including $1.6 billion borrowed following a series of floods in 2016. FEMA is unlikely to collect enough in premiums to repay this debt. Citizens also look to the federal government for assistance when crises happen and immediate federal action is expected. This can take the form of expectations for additional and large amounts of federal spending. These crises often cannot be predicted and are very difficult to budget for. 
According to the Congressional Research Service, the federal budget does contain some funds for disaster response through the Disaster Relief Fund; however, this fund often is insufficient to respond to the number and scope of natural disasters, and it is not typically used as a funding source for other types of unforeseen events such as wars, financial crises, cyberattacks, or health pandemics. The growing gap between revenues and spending reflects three main trends: significant growth in spending for retirement and health care programs, rising interest payments on the government’s debt, and modest growth in revenues. The size of the gap is such that both the spending and revenue sides of the budget must be examined. The 2016 Financial Report’s long-term fiscal projections, CBO’s long-term projection, and GAO’s long-term simulations all show that the key drivers on the spending side are health care programs and interest on debt held by the public (net interest). Social Security also poses significant financial challenges. Total health care spending (public and private) in the United States continues to grow faster than the economy. As figure 3 shows, growth in federal spending for health care programs has exceeded the growth of GDP historically and is projected to continue to do so. These health care programs include Medicare, Medicaid, and the Children’s Health Insurance Program, along with federal subsidies for health insurance purchased through the marketplaces established by the Patient Protection and Affordable Care Act (ACA) and related spending. According to GAO’s alternative simulation, federal spending on major health care programs is projected to increase from $993 billion in fiscal year 2016 to $2 trillion in fiscal year 2045 in 2016 dollars. Growth in federal spending on health care is driven, in part, by increasing enrollment in federal health care programs, stemming from both the aging of the population and the expansion of federal programs.
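The pace implied by the two simulation data points above can be back-calculated; the annualized rate below is derived from those figures, not a number stated in the simulation itself:

```python
# Implied average annual real growth rate between the two GAO alternative
# simulation data points cited above ($993 billion in 2016 to $2 trillion
# in 2045, both in 2016 dollars). The rate is back-calculated, not reported.
start, end = 993.0, 2000.0   # billions of 2016 dollars
years = 2045 - 2016          # 29 years

implied_rate = (end / start) ** (1 / years) - 1
print(f"{implied_rate:.2%}")  # roughly 2.4 percent per year in real terms
```

Because these figures are in constant 2016 dollars, that growth is on top of inflation, which is what "faster than the economy" means in practice when real GDP growth is slower.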
As many members of the baby-boom generation age and as life expectancy continues to generally increase, the number of people 65 or older is expected to rise by more than one-third, thereby increasing the number of Medicare beneficiaries. (See figure 4.) According to CBO, outlays for Medicaid in fiscal year 2016 rose by $18 billion (or 5.3 percent) compared with outlays in fiscal year 2015. The decision of more than half the states to expand eligibility for their Medicaid programs as provided by the ACA was the primary reason for this growth. The growth in federal spending on health care can also be attributed to increases in health care spending per enrollee. Per beneficiary health care spending has historically risen faster than per capita economic output and is projected to do so in the future. While health care spending is a key programmatic and policy driver of the long-term outlook on the spending side of the budget, eventually, spending on net interest becomes the largest category of spending in both the 2016 Financial Report’s long-term fiscal projections and GAO’s simulations. Specifically, in GAO’s alternative simulation, net interest increases from $248 billion in fiscal year 2016 to $1.4 trillion in fiscal year 2045 in 2016 dollars. Growth in interest payments occurs for two main reasons:

Growing debt: Even without any increase in interest rates, the cost of financing the debt grows as debt held by the public grows, resulting in greater interest payments than would otherwise exist with less debt. Spending on interest can absorb resources that could be used instead for other priorities.

Growth in interest rates: In recent years interest rates on Treasury securities have remained low, lowering interest costs. However, CBO and others project that those interest rates will rise in the long term, increasing the net interest costs on the debt.

Marketable U.S. Treasury securities consist of bills, notes, and bonds.
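The two channels described above reduce to a simple cost identity: annual interest cost is roughly the debt stock times the average rate paid on it. The debt levels and rates below are hypothetical round numbers, not figures from the projections:

```python
# Minimal sketch of the two interest-cost channels described above:
# growing debt raises costs even at flat rates, and rising rates compound
# the effect. Debt levels and rates here are hypothetical, not projections.
def net_interest(debt_trillions, avg_rate):
    """Annual interest cost, in trillions of dollars."""
    return debt_trillions * avg_rate

flat = net_interest(14.0, 0.02)          # $14T at 2% -> about $0.28T per year
more_debt = net_interest(20.0, 0.02)     # same 2% rate, larger debt -> $0.40T
higher_rate = net_interest(20.0, 0.035)  # larger debt and 3.5% rate -> $0.70T

print(flat, more_debt, higher_rate)
```

The compounding of the two channels is the key point: in this hypothetical, moving from the first case to the third more than doubles the annual cost.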
Treasury seeks to accomplish “lowest cost financing over time” in the way it manages debt issuance. Net interest costs will depend in part on the outstanding mix of Treasury securities. Treasury issues securities in a wide range of maturities to appeal to the broadest range of investors. Longer-term securities typically carry higher interest rates but offer the government the ability to “lock in” fixed interest payments over a longer period and reduce the amount of debt that Treasury needs to refinance in the short term. In contrast, shorter-term securities generally carry lower interest rates. They also play an important role in financial markets. For example, investors use Treasury bills to meet requirements to buy financial assets maturing in a year or less. However, shorter-term securities add uncertainty to the government’s interest costs and require Treasury to conduct more frequent auctions to refinance maturing debt. As of September 30, 2016, 58 percent of marketable Treasury securities held by the public were scheduled to mature and need to be refinanced in the next 4 years—potentially at higher interest rates. As the 2016 Financial Report notes, each year trillions of dollars of debt mature and new debt is issued in its place. In fiscal year 2016, new borrowings were $8.4 trillion, and repayments of maturing debt held by the public were $7.3 trillion. Social Security also poses significant financial challenges. It provides individuals with benefits that can help offset the loss of income due to retirement, death, or disability, and paid more than $905 billion in Old-Age and Survivors Insurance (OASI) and Disability Insurance (DI) program benefits in fiscal year 2016. However, demographic factors, such as an aging population and slower labor force growth, are straining Social Security programs and contributing to a gap between program costs and revenues. 
Absent any changes, it is projected that the Social Security trust funds will deplete their assets and that incoming revenues will not be sufficient to pay benefits in full on a timely basis. To change the long-term fiscal path, policymakers will need to consider policy changes to the entire range of federal activities: entitlement programs, other mandatory spending, discretionary spending, and revenue. The 2016 Financial Report, CBO, and GAO all make the point that the longer action is delayed, the greater and more drastic the changes will have to be. Medicare’s Hospital Insurance trust fund and Social Security’s OASI and DI trust funds face financial challenges that add to the importance of beginning action soon. (See figure 5.) It is important to develop and begin to implement a long-term fiscal plan for returning to a sustainable path. As currently structured, the debt limit—a legal limit on the amount of federal debt that can be outstanding at one time—does not restrict Congress and the President’s ability to enact spending and revenue legislation that affects the level of debt; nor does it otherwise constrain fiscal policy. The debt limit is an after-the-fact measure: the spending and tax laws that result in debt have already been enacted. In other words, the debt limit restricts Treasury’s authority to borrow to finance the decisions already enacted by Congress and the President. I cannot overstate the importance of preserving confidence in “the full faith and credit” of the United States. Failure to increase (or suspend) the debt limit in a timely manner could have serious negative consequences for the Treasury market and increase borrowing costs. For those Treasury securities issued during the 2013 debt limit impasse, we estimated that the additional borrowing costs incurred through fiscal year 2014 were between $38 million and $70 million, depending on the assumptions used.
When delays in raising the debt limit occur, Treasury often must deviate from its normal debt management operations and take a number of extraordinary actions to avoid exceeding the debt limit. The Bipartisan Budget Act of 2015 temporarily suspended the debt limit from November 2, 2015, through March 15, 2017. Following the expiration of the debt limit suspension period, on March 16, 2017, Treasury began taking extraordinary actions to avoid exceeding the debt limit. These extraordinary actions included suspending investments to certain federal government accounts. During the 2013 impasse, investors reported taking the unprecedented action of systematically avoiding certain Treasury securities—that is, those that would mature around the dates when Treasury projected it would exhaust the extraordinary actions it used to manage debt as it approached the debt limit. For these securities, the actions resulted in both a dramatic increase in interest rates and a decline in liquidity in the secondary market where securities are traded among investors. To minimize disruptions to the Treasury market and to help inform the fiscal policy debate in a timely way, we recommended that decisions about giving Treasury the authority to borrow be made when decisions about spending and revenues are made. In 2015, we conducted a forum with experts in the field to help identify options for Congress to delegate its borrowing authority and better align decisions about the level of debt with decisions on spending and revenue. All of the options maintain congressional control and oversight of federal borrowing. Our report described the benefits and challenges presented by each of the options described below:

Option 1: Link Action on the Debt Limit to the Budget Resolution.
This is a variation of a previously used approach under which legislation raising the debt limit to the level envisioned in the Congressional Budget Resolution would be spun off and either be deemed to have passed or be voted on immediately thereafter.

Option 2: Provide the Administration with the Authority to Increase the Debt Limit, Subject to a Congressional Motion of Disapproval. This is a variation of an approach contained in the Budget Control Act of 2011. Congress would give the administration the authority to propose a change in the debt limit, which would take effect absent enactment of a joint resolution of disapproval within a specified time frame.

Option 3: Delegate Broad Authority to the Administration to Borrow as Necessary to Fund Enacted Laws. This is an approach used in some other countries: delegate to the administration the authority to borrow such sums as necessary to fund implementation of the laws duly enacted by Congress and the President. Because the laws that affect federal spending and revenue, and thus create the need for debt, already require adoption by Congress, Congress would still maintain control over the amount of federal borrowing.

We did not endorse a specific option, but we did recommend that Congress consider alternative approaches that better link decisions about the debt limit with decisions about spending and revenue at the time those decisions are made. Some of the experts also supported replacing the debt limit with a fiscal rule imposed on spending and revenue decisions. The federal government has enacted such fiscal rules in the past. For example, the Budget Control Act of 2011 enacted limits on discretionary spending, which are enforced by additional spending cuts if those limits are breached (known as a sequester). Congress could consider additional fiscal rules to frame and control the overall results of spending and revenue decisions.
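A fiscal rule of the sequester type just described reduces to a simple threshold check: if total spending exceeds the statutory cap, apply a uniform percentage cut. The cap and account figures below are hypothetical, used only to illustrate the mechanism:

```python
# Hedged sketch of a sequester-style fiscal rule like the Budget Control Act
# mechanism described above: if spending exceeds the statutory cap, apply an
# across-the-board percentage cut. All dollar figures here are hypothetical.
def sequester(accounts, cap):
    """Return accounts scaled down proportionally so the total meets the cap."""
    total = sum(accounts.values())
    if total <= cap:
        return dict(accounts)  # within the limit; no cuts needed
    factor = cap / total       # uniform cut applied to every account
    return {name: amount * factor for name, amount in accounts.items()}

budget = {"defense": 550.0, "nondefense": 520.0}  # billions, hypothetical
capped = sequester(budget, cap=1040.0)

print(capped)                       # each account cut by the same ~2.8 percent
print(round(sum(capped.values())))  # total now equals the 1040 cap
```

The across-the-board design is what makes the rule self-enforcing: no further legislative action is needed once the cap is breached.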
Such rules could limit spending or affect other areas of the budget such as overall debt or annual deficits. Other countries have also operated under such fiscal rules. For example, the European Union’s (EU) Stability and Growth Pact allows for sanctions against member states that exceed certain target levels of debt or deficits defined as “excessive” by the EU. The pact is a set of rules designed to ensure that countries in the EU pursue sound public finances and coordinate their fiscal policies. The EU defines an excessive budget deficit as one greater than 3 percent of GDP. Public debt is considered excessive if it exceeds 60 percent of GDP without diminishing at an adequate rate (defined as a decrease of the excess debt by 5 percent per year on average over 3 years). That said, several nations have struggled to meet these targets in recent years. In general, budget experts and other observers have noted that the success of fiscal rules depends on effective enforcement and a sustained commitment by policymakers and the public. Achieving long-term fiscal sustainability will require examining revenues and the drivers of spending and enacting legislation to narrow the growing gap between spending and revenues. However, in our prior work we have also identified numerous actions Congress and agencies can take now to help improve the fiscal situation. It is important for agencies to act as stewards of federal resources. Although these actions alone cannot put the U.S. government on a sustainable fiscal path, they would improve both the fiscal situation and the federal government’s operations. Improper payments remain a significant and pervasive government-wide issue. For several years, we have reported improper payments as a material weakness in our audit reports on the consolidated financial statements of the U.S. government.
Since fiscal year 2003—when certain agencies began reporting improper payments as required by the Improper Payments Information Act of 2002 (IPIA)—cumulative reported improper payment estimates have totaled over $1.2 trillion, as shown in figure 6. For fiscal year 2016, agencies reported improper payment estimates totaling $144.3 billion, an increase of over $7 billion from the prior year’s estimate of $136.7 billion. The reported estimated government-wide improper payment error rate was 5.1 percent of related program outlays. These figures do not include the Department of Defense’s (DOD) Defense Finance and Accounting Service (DFAS) Commercial Pay program because of concerns regarding the reliability of the program’s estimate, which I will discuss later in this statement. As shown in figures 7 and 8, the reported improper payment estimates—both dollar estimates and error rates—have been increasing over the past 3 years, largely because of increases in Medicaid’s reported improper payment estimates. For fiscal year 2016, overpayments accounted for approximately 93 percent of the improper payment estimate, according to www.paymentaccuracy.gov, with underpayments accounting for the remaining 7 percent. Although primarily concentrated in three areas (Medicare, Medicaid, and the Earned Income Tax Credit), the reported estimated improper payments for fiscal year 2016 were attributable to 112 programs spread among 22 agencies. (See figure 9.) Agencies reported improper payment estimates exceeding $1 billion for 14 programs, as shown in table 1, and error rates exceeding 10 percent for 11 programs. (See table 2.) In our audit report on the fiscal year 2016 consolidated financial statements of the U.S. 
government, we continued to report a material weakness in internal control related to improper payments because the federal government is unable to determine the full extent to which improper payments occur and reasonably assure that appropriate actions are taken to reduce them. Challenges include potentially inaccurate risk assessments, programs that do not report any improper payment estimates or report unreliable or understated estimates, and noncompliance issues. Agencies conduct risk assessments to determine which programs need to develop improper payment estimates. However, in Improper Payments Elimination and Recovery Act (IPERA) compliance reports for fiscal year 2015—the most current reports available—various inspectors general (IG) reported issues related to agencies’ improper payment risk assessments. For example: The IG for the General Services Administration reported that the agency’s risk assessment was flawed because, among other things, the questionnaires in the assessment did not ask if programs actually experience improper payments and were distributed to individuals who did not have direct or specific knowledge of improper payments. Further, the IG found that the agency did not evaluate relevant reports—such as IG or GAO reports—to identify relevant findings, and two of the six questionnaires that the IG reviewed included incomplete information. The IG for the Department of Housing and Urban Development found that the agency did not assess all of its programs on a 3-year cycle and did not consider all nine of the required risk factors in conducting its risk assessment. The IG also noted instances in which the agency did not rate risk factors in accordance with the agency’s own policy. 
It is also important to note that nine of the Chief Financial Officer (CFO) Act agencies either reported no improper payment estimates or reported estimates for only disaster relief programs funded through the Disaster Relief Appropriations Act, 2013 for fiscal year 2016. The nine agencies were:

U.S. Agency for International Development
Department of Commerce (disaster relief only)
Department of the Interior (disaster relief only)
Department of Justice (disaster relief only)
National Aeronautics and Space Administration (disaster relief only)

Programs That Do Not Report Improper Payment Estimates

We found that not all agencies had developed improper payment estimates for all of the programs and activities they identified as susceptible to significant improper payments. Eight agencies did not report improper payment estimates for 18 risk-susceptible programs. (See table 3.) Because agencies did not report improper payment estimates for these risk-susceptible programs, the government-wide improper payment estimate is understated and agencies are hindered in their efforts to reduce improper payments in these programs. For example, the Department of Health and Human Services (HHS) did not report an improper payment estimate for Temporary Assistance for Needy Families, a program with outlays of over $15 billion for fiscal year 2016. HHS cited statutory limitations prohibiting the agency from requiring states to participate in an improper payment measurement for the program. Another example is the U.S. Department of Agriculture’s (USDA) Supplemental Nutrition Assistance Program. Although USDA has reported improper payment estimates for this program in prior years, the agency did not report an estimate for fiscal year 2016. In its fiscal year 2016 agency financial report, USDA stated that it was unable to validate data provided by 42 of the 53 state agencies that administer the program.
USDA stated that it could not adjust for this unreliability and calculate a national error rate.

Potentially Unreliable or Understated Estimates

Improper payment estimates for certain programs may be unreliable or understated. For example, in May 2013 we reported that DOD had major deficiencies in its process for estimating fiscal year 2012 improper payments in the Defense Finance and Accounting Service (DFAS) Commercial Pay program, including deficiencies in identifying a complete and accurate population of payments. The foundation of reliable statistical sampling estimates is a complete, accurate, and valid population from which to sample. As of October 2016, DOD was still developing key quality assurance procedures to ensure the completeness and accuracy of sampled populations. Therefore, DOD’s fiscal year 2016 improper payment estimates, including its estimate for the DFAS Commercial Pay program, may not be reliable. DFAS Commercial Pay’s reported program outlays are significant—approximately $249 billion for fiscal year 2016. Consequently, a small change in the program’s estimated error rate could result in a significant change in the dollar value of its improper payment estimate. Further, flexibility in how agencies are permitted to implement improper payment estimation requirements can contribute to inconsistent or understated estimates. For example, in February 2015, we reported that DOD uses a methodology for estimating TRICARE improper payments that is less comprehensive than the methodology the Centers for Medicare & Medicaid Services (CMS) used for Medicare. Though the programs are similar in that they pay providers on a fee-for-service basis and depend on contractors to process and pay claims, TRICARE’s methodology does not examine the underlying medical record documentation to discern whether each sampled payment was supported or whether the services provided were medically necessary.
On the other hand, Medicare’s methodology more completely identifies improper payments beyond those resulting from claim processing errors, such as those related to provider noncompliance with coding, billing, and payment rules. As a result, the estimated improper payment error rates for TRICARE and Medicare are not comparable, and TRICARE’s error rate is likely understated. In addition, corrective actions for TRICARE improper payments do not address issues related to medical necessity errors—a significant contributor to Medicare improper payments. We recommended that DOD implement a more comprehensive TRICARE improper payment methodology and develop more robust corrective action plans that address the underlying causes of improper payments. In October 2016, DOD requested proposals for claim record reviews—including medical record reviews—to begin the process of incorporating medical record reviews in its methodology for calculating improper payment rates. Since fiscal year 2011, IPERA has required agencies’ IGs to annually report on their respective agencies’ compliance under the act. IGs at 15 of the 24 CFO Act agencies found their respective agencies to be noncompliant under IPERA for fiscal years 2014 and 2015, the highest total since IGs began their annual compliance reviews. Although noncompliance has occurred across all six of the criteria listed in IPERA, the most common issues relate to failures to meet improper payment reduction targets or to report an error rate below 10 percent. Continued noncompliance further highlights the need for additional efforts to reduce improper payments. Agencies can use detailed root cause analysis and related corrective actions to implement preventive and detective controls to reduce improper payments. Collaboration with other relevant entities can also assist federal agencies in reducing improper payments.
Root cause analysis is key to understanding why improper payments occur and developing effective corrective actions to prevent them. In 2014, the Office of Management and Budget (OMB) established new guidance to assist agencies in better identifying the root causes of improper payments and assessing their relevant internal controls. Agencies across the federal government began reporting improper payments using these more detailed root cause categories for the first time in their fiscal year 2015 financial reports. Further identification of the true root causes of improper payments can help to determine the potential for fraud. Figure 10 shows the root causes of government-wide improper payments for fiscal year 2016, as reported by OMB. We will continue to focus on agencies’ efforts to both identify the root causes and take appropriate actions to reduce improper payments. Implementing strong preventive controls can serve as the frontline defense against improper payments. When agencies proactively prevent improper payments, they increase public confidence in program administration and they avoid the difficulties associated with the “pay and chase” aspects of recovering overpayments. Examples of preventive controls include up-front eligibility validation through data sharing, predictive analytic technologies, and program design review and refinement. For example, we have made the following recommendations and matters for congressional consideration to improve preventive controls in various programs. Use of the Do Not Pay (DNP) working system. Established by OMB and hosted by Treasury, the DNP working system is a web-based, centralized data-matching service that allows agencies to review multiple databases—such as data on deceased individuals and entities barred from receiving federal awards—before making payments. 
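Data matching of the kind the DNP working system provides can be sketched as a simple prepayment screen; the list names, records, and fields below are hypothetical stand-ins, not Treasury's actual schema or data sources:

```python
# Illustrative prepayment screen in the spirit of the Do Not Pay concept
# described above: check each proposed payment against exclusion lists
# before disbursing. All names, identifiers, and fields are hypothetical.
deceased_ids = {"111-22-3333"}          # stand-in for death records
debarred_payees = {"ACME SUPPLY CO"}    # stand-in for a debarment list

def screen(payment):
    """Return match reasons for a payment; an empty list means clear to pay."""
    reasons = []
    if payment["tin"] in deceased_ids:
        reasons.append("matched death records")
    if payment["payee"] in debarred_payees:
        reasons.append("matched debarment list")
    return reasons

payments = [
    {"payee": "ACME SUPPLY CO", "tin": "999-00-1111", "amount": 12500},
    {"payee": "VALID VENDOR LLC", "tin": "555-66-7777", "amount": 8200},
]
for p in payments:
    flags = screen(p)
    print(p["payee"], "->", "HOLD: " + "; ".join(flags) if flags else "clear")
```

Running the check before disbursement, rather than after, is what distinguishes this preventive control from the "pay and chase" recovery model described above.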
In October 2016, we found that the 10 agencies we reviewed used the DNP working system in limited ways, in part because OMB had not provided a clear strategy and guidance. Only 2 of these 10 agencies used the DNP working system on a preaward or prepayment basis for certain types of payments. Because the DNP working system offers a single point of access to multiple databases, agencies may be able to streamline their existing data matching processes. Among other things, we recommended that OMB develop a strategy—and communicate it through guidance—for whether and how agencies should use the DNP working system to complement or streamline existing data matching processes. OMB generally agreed with the concept of developing a strategy and said it would explore the concept further. Further, we found that the death records offered through the DNP working system do not include state-reported death data. Social Security Administration (SSA) officials stated that sharing SSA’s full death file—which includes state-reported death data—would require an amendment to the Social Security Act. We suggested that Congress amend the Social Security Act to explicitly allow SSA to share its full death file with Treasury for use through the DNP working system. Sharing the full death file through the DNP working system would enhance efforts to identify and prevent improper payments. Expanded error correction authority. IRS has the authority to correct some calculation errors and check for other obvious noncompliance such as claims for a deduction or credit that exceed statutory limits. We have suggested to Congress that such authority be authorized on a broader basis rather than on a piecemeal basis and that controls may be needed to help ensure that this authority is used properly.
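The obvious-noncompliance check described above reduces to comparing a claimed amount against a statutory limit and correcting it automatically rather than opening an audit. The credit and its $2,000 limit below are hypothetical, used only to show the mechanism:

```python
# Sketch of a math-error-style check as described above: a claimed credit
# that exceeds its statutory limit is corrected automatically rather than
# triggering an audit. The credit and its $2,000 limit are hypothetical.
STATUTORY_LIMIT = 2000.0  # hypothetical per-return cap for an example credit

def correct_claim(claimed):
    """Return (allowed_amount, adjustment_notice_or_None)."""
    if claimed <= STATUTORY_LIMIT:
        return claimed, None
    notice = f"claim reduced from {claimed:.2f} to statutory limit {STATUTORY_LIMIT:.2f}"
    return STATUTORY_LIMIT, notice

print(correct_claim(1500.0))  # (1500.0, None)
print(correct_claim(3500.0))  # capped at 2000.0, with an adjustment notice
```

The correction is cheap because it needs only the return itself plus a known limit; the "correctable error" proposal discussed next would extend the same pattern to mismatches against government databases.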
Also, Treasury has proposed expanding IRS’s “math error” authority to “correctible error” authority to permit it to correct errors in cases where information provided by the taxpayer does not match information in government databases, among other things. Providing these authorities could help IRS correct additional errors— including some errors with Earned Income Tax Credit claims—and avoid burdensome audits and taxpayer penalties. Additional prepayment reviews in Medicare fee-for-service. In April 2016, we found that CMS could improve its claim review programs by conducting additional prepayment reviews. Using prepayment reviews to deny improper claims and prevent overpayments is consistent with CMS's goal to pay claims correctly the first time. It can also better protect Medicare funds because not all overpayments can be collected. A recovery auditor (RA) is one type of claim review contractor that CMS uses, and in 2013 and 2014, 85 percent of RA claim reviews were postpayment. Because CMS is required by law to pay RAs contingency fees from recovered overpayments, the RAs can only conduct prepayment reviews under a demonstration. From 2012 through 2014, CMS conducted a demonstration in which the RAs conducted prepayment reviews and were paid contingency fees based on claim denial amounts. CMS officials considered the demonstration a success. However, CMS has not requested legislation that would allow for RA prepayment reviews by amending existing payment requirements and thus may be missing an opportunity to better protect Medicare funds. We recommended that CMS seek legislative authority to allow RAs to conduct prepayment claim reviews. HHS did not concur with this recommendation, stating that CMS has implemented other programs as part of its efforts to move away from the "pay and chase" process of recovering overpayments. 
We continue to believe that seeking authority to allow RAs to conduct prepayment reviews is consistent with CMS's strategy to pay claims properly the first time. Although preventive controls remain the frontline defense against improper payments, effective detection techniques can help to quickly identify and recover those overpayments that do occur. Detective controls play a significant role not only in identifying improper payments but also in providing information on why these improper payments were made, highlighting areas that need stronger preventive controls. Examples of detective controls include data mining and recovery auditing. The following are examples of recommendations we have made to improve detective controls in various programs. Improvements to recovery efforts in Medicare Advantage. In April 2016, we reported that CMS needs to fundamentally improve its efforts to recover substantial amounts of improper payments in the Medicare Advantage program. CMS conducts two types of risk adjustment data validation (RADV) audits to identify and correct Medicare Advantage improper payments: national RADV activities and contract-level RADV audits. Both types of audits determine whether the diagnosis codes submitted by Medicare Advantage organizations are supported by a beneficiary’s medical record documentation. Contract-level RADV audits seek to identify and recover improper payments from Medicare Advantage organizations and thus to deter them from submitting inaccurate beneficiary diagnoses. However, we found that CMS does not focus its RADV audits on the contracts with the highest potential for improper payments and has not developed specific plans or a timetable for including recovery auditor contractors in the contract-level RADV audit process. 
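Targeting audits at the contracts most likely to have high improper payment rates, as discussed above, can be sketched as ranking by an estimated risk score and auditing the top few; the contract names and scores below are hypothetical:

```python
# Sketch of risk-based audit selection in the spirit of the RADV discussion
# above: rank contracts by an estimated improper-payment risk score and
# audit the highest-risk ones first. Names and scores are hypothetical.
contracts = {
    "contract_a": 0.12,  # estimated improper payment rate (hypothetical)
    "contract_b": 0.03,
    "contract_c": 0.09,
    "contract_d": 0.01,
}

def select_for_audit(risk_scores, k):
    """Return the k contracts with the highest estimated risk."""
    ranked = sorted(risk_scores, key=risk_scores.get, reverse=True)
    return ranked[:k]

print(select_for_audit(contracts, k=2))  # ['contract_a', 'contract_c']
```

With a fixed audit budget, concentrating reviews where estimated error rates are highest maximizes expected recoveries per audit, which is the logic behind the recommendation that follows.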
We made several recommendations, including that CMS modify the selection of contracts for contract-level RADV audits to focus on those most likely to have high rates of improper payments and that CMS develop specific plans and a timetable for incorporating a recovery audit contractor in the Medicare Advantage program. In response to our report, HHS concurred with the recommendations and reaffirmed its commitment to identifying and correcting Medicare Advantage improper payments. By implementing our recommendations, CMS could recover hundreds of millions of dollars in improper payments by improving its processes for auditing payments to Medicare Advantage organizations. Review of federal determinations of Medicaid eligibility. In October 2015, we reported that additional efforts were needed to ensure that state spending is appropriately matched with federal funds in Medicaid. States and the federal government share in the financing of the Medicaid program, with the federal government matching most state expenditures for Medicaid services on the basis of a statutory formula. CMS has implemented interim measures to review the accuracy of state eligibility determinations and examine states’ expenditures for different eligibility groups, for which states may receive multiple federal matching rates. However, some states have delegated authority to the federal government to make Medicaid eligibility determinations through the federally facilitated exchange. CMS has excluded these states from the reviews. This creates a gap in efforts to ensure that only eligible individuals are enrolled into Medicaid and that state expenditures are correctly matched by the federal government. We recommended that CMS conduct reviews of federal Medicaid eligibility determinations to ascertain the accuracy of these determinations and institute corrective action plans where necessary. 
HHS has taken some steps to improve the accuracy of Medicaid eligibility determinations, as we recommended, but has not conducted a systematic review of federal eligibility determinations. For example, in March 2017, HHS reported that it is reviewing federal determinations of Medicaid eligibility in two of the nine states that have delegated eligibility determination authority to the federal marketplace. Although the actions HHS has taken have value, they are not sufficient to identify erroneous eligibility determinations. Specifically, without a systematic review of federal eligibility determinations, the department lacks a mechanism to identify and correct errors and associated payments. While federal agencies are responsible for reducing improper payments, agencies may consider collaboration with relevant entities—such as OMB, states, state auditors, and the IG community—to expand efforts to reduce improper payments. In November 2016, we held a discussion with various state auditors and federal agencies to identify potential opportunities to strengthen collaboration, focusing on federal and state initiatives related to improper payments. Further, in September 2015, we reported on the Recovery Operations Center’s (ROC) significant analytical services, provided primarily to IGs to support antifraud and other activities. While funding for the ROC ended in September 2015, officials from some small- and medium-sized IGs stated that they do not have the capabilities to develop independent data analytics or pay for a similar service and thus must forgo the kinds of capabilities the ROC provided. We suggested that Congress may wish to consider directing the Council of the Inspectors General on Integrity and Efficiency to develop a legislative proposal to reconstitute the essential capabilities of the ROC to help ensure federal spending accountability. Finally, I recently met with the Director of OMB to discuss improper payments, among other issues.
This spring we are providing OMB a letter highlighting open priority recommendations related to important issues, including improper payments. Strengthened efforts and collaboration among relevant entities are important to reducing improper payments across the federal government. For the last 7 years, we have annually presented actions Congress or executive branch agencies could take to reduce, eliminate, or better manage fragmentation, overlap, or duplication; achieve cost savings; or enhance revenue. We also maintain our High-Risk List to bring attention to government operations that are at high risk of fraud, waste, abuse, and mismanagement, or that need broad-based transformation to address economy, efficiency, or effectiveness challenges. Combined, these efforts have led to hundreds of billions of dollars in financial benefits over the last decade. Fully addressing the issues we raise in those reports could yield additional benefits, such as increased savings, better services to the public, and improved federal programs. For example, we estimate tens of billions more dollars could be saved by fully implementing our remaining open recommendations to address fragmentation, overlap, and duplication. While these issues span the government, a substantial number of them involve five agencies that made up 69 percent—$3.0 trillion—of federal outlays in fiscal year 2016: the Departments of Defense, Health and Human Services, and Veterans Affairs; the Social Security Administration; and the Office of Management and Budget. DOD represented about 15 percent of federal spending in fiscal year 2016, with outlays totaling about $637.6 billion. In our 2011 to 2017 annual duplication reports, we directed 168 actions to DOD in areas that contribute to DOD’s effectiveness. As of March 2017, 95 of these 168 actions remained open. DOD also bears responsibility, in whole or part, for half (17 of 34) of the areas we have designated as high risk.
Our work suggests that effectively taking actions to address these issues would yield significant financial benefits, as discussed below. DOD weapon systems acquisition. DOD’s portfolio of 78 major acquisition programs has a total estimated cost of $1.46 trillion. Over the past 4 fiscal years, our analyses of DOD’s weapon system acquisitions have resulted in nearly $30 billion in savings. We have six open priority recommendations to improve DOD’s management of three of its most expensive programs, each of which is facing significant cost, schedule, and performance challenges—the F-35 Joint Strike Fighter, the Littoral Combat Ship (LCS), and the Ford Class Aircraft Carrier. We continue to encourage DOD and Congress to hold programs accountable by ensuring that they attain the required knowledge at key decision points—such as completing systems engineering reviews, making sure technologies are fully mature before product development begins, and successfully completing testing—before committing resources to production. By acting on our open recommendations for the F-35, LCS, and Ford Class, and applying the same knowledge-based approach across its portfolio, DOD could potentially achieve tens of billions of dollars more in cost savings or cost avoidance over the life of these programs. DOD contract management. DOD obligated $273.5 billion in fiscal year 2015 on contracts for goods and services, including major weapon systems, support for military bases, information technology, consulting services, and commercial items. As the federal government’s largest procurement agency, DOD has opportunities to leverage its buying power to reduce prices, improve quality, and otherwise enhance supplier management and performance. We have found that leading commercial companies often manage 90 percent of their spending using strategic sourcing and generate 10 to 20 percent savings in doing so.
In contrast, we have reported that DOD components (Navy, Air Force, and Army) managed between 10 and 27 percent of their $8.1 billion in spending on information technology services through their preferred strategic sourcing contracts in fiscal year 2013. By awarding hundreds of potentially duplicative contracts, these components diminished the department’s buying power. Further, the low utilization rate of federal strategic sourcing initiatives contracts by DOD and other federal agencies resulted in missed opportunities to leverage buying power. In this case, the federal strategic sourcing initiatives reported an estimated savings of $470 million between fiscal years 2011 and 2015, an overall savings rate of about 25 percent. In fiscal year 2015, however, the seven large agencies that comprised the Leadership Council—a cohort of large federal agencies responsible for federal strategic sourcing initiatives—directed less than 10 percent of their spending to the types of goods and services offered under federal strategic sourcing initiatives, resulting in a missed opportunity to save potentially more than $1 billion. DOD headquarters reductions. Since 2014, and in part to respond to congressional direction, DOD has undertaken initiatives intended to improve the efficiency of headquarters organizations and identify related cost savings, but it is unclear to what extent these initiatives will help the department achieve the potential savings it has identified. DOD has many organizations with multiple layers of headquarters management, and at times these organizations possess complex and overlapping relationships.
To improve the management of DOD’s headquarters-reduction efforts, we recommended that the Secretary of Defense conduct systematic determinations of personnel requirements for the Office of the Secretary of Defense, Joint Staff, and military service secretariats and staffs; set a clearly defined and consistently applied starting point as a baseline for headquarters-reduction efforts and track reductions against the baselines to provide reliable accounting of savings and reporting to Congress; and conduct comprehensive, periodic evaluations of whether the combatant commands are sized and structured to efficiently meet assigned missions. By implementing these recommendations, DOD could yield billions in savings. DOD commissaries. DOD operates 238 commissaries worldwide to provide groceries and household goods at reduced prices as a benefit to military personnel, retirees, and their dependents. In our November 2016 and March 2017 reports, we found that DOD can more efficiently manage its commissaries and potentially achieve cost savings. DOD could better position itself to meet its $2 billion target from fiscal years 2017 through 2021 by implementing our recommendation to develop a plan with assumptions, a methodology, cost estimates, and specific time frames for achieving alternative reductions to appropriations, to support its efforts to ensure that its cost savings target is feasible and accurate. DOD generally agreed with our recommendations. DOD leases and use of underutilized spaces at military installations. Overreliance on costly leasing is one of the major reasons that federal real property management remains on our high-risk list. Our prior work has shown that owning buildings often costs less than operating leases, especially where there are long-term needs for space.
We analyzed all 5,566 lease records in DOD’s real property database for fiscal year 2013 (the most recent year for which data were available) and found that there were 407 records for general administrative space. The total annual rent plus other costs for these leases was approximately $326 million for about 17.6 million square feet of leased space. We recommended that DOD look for opportunities to relocate DOD organizations in leased space to installations that may have underutilized space because of force structure reductions or other indicators of potentially available space, where such relocation is cost-effective and does not interfere with the installation’s ongoing military mission. DOD did not agree with the recommendation and had not taken action as of October 2016. These actions could potentially save millions of dollars each year in reduced or avoided rental costs. We have identified numerous opportunities within the Department of Health and Human Services (HHS) to achieve cost savings. HHS represented about 28 percent of the fiscal year 2016 federal budget, with outlays totaling about $1.2 trillion. HHS’s largest mandatory programs are Medicare, which in fiscal year 2016 financed health services for over 57 million beneficiaries at an estimated cost of $696 billion, and Medicaid, which covered an estimated 72.2 million people in fiscal year 2016 at a cost of $575.9 billion. Our work suggests that effectively implementing these actions could yield substantial financial benefits. Our work has identified opportunities for billions of dollars of savings and the need for improved federal oversight in multiple areas of traditional Medicare—also known as Medicare fee-for-service (FFS)—and Medicare Advantage (MA), which provides health care coverage to Medicare beneficiaries through private health plans. Payments and provider incentives in traditional Medicare.
Medicare spending on hospital outpatient department services has grown rapidly in recent years—nearly $58 billion spent in 2015. In December 2015, we reported that some of this growth is because services that were typically performed in physician offices have shifted to hospital outpatient departments, resulting in higher reimbursement rates. We recommended that Congress consider directing HHS to equalize payment rates between settings for certain services and return the associated savings to the Medicare program. Congress passed legislation to exclude services furnished by off-campus hospital outpatient departments from higher payment beginning in 2017; however, this exclusion does not apply to services furnished by providers that were billing as hospital outpatient departments prior to November 2, 2015, or by those meeting certain mid-build requirements. We maintain that Medicare could save billions of dollars annually if Congress were to equalize the rates for certain health care services, which often vary depending on where the service is performed. The federal government spends about $50 billion annually to help hospitals with billions of dollars in costs incurred for uncompensated care—services hospitals provide to uninsured and low-income patients for which they are not fully compensated. Both Medicare and Medicaid make multiple types of payments that help offset hospital uncompensated care costs. In June 2016, we reported that Medicare Uncompensated Care payments are not well aligned with hospital uncompensated care costs, potentially resulting in relatively large amounts of available funding being distributed to hospitals where uncompensated care costs are likely declining. We recommended that the Centers for Medicare & Medicaid Services (CMS) instead base those payments on actual hospital uncompensated care costs and account for Medicaid payments made when making Medicare Uncompensated Care payments to individual hospitals.
HHS concurred with the recommendations and indicated that the agency planned to implement them beginning in fiscal year 2021 to allow time for hospitals to collect and report reliable data. Implementing our recommendations could prevent more than $1 billion annually from going to hospitals that may not have any uncompensated care. The Medicare prospective payment system (PPS) introduced better control over program spending and provided hospitals with an incentive for efficient resource use. Yet for decades, as required by law, Medicare has paid 11 cancer hospitals differently than PPS hospitals—specifically, these cancer hospitals are reimbursed largely based on their reported costs and as such have little incentive for containing costs. To help HHS better control Medicare spending and encourage efficient delivery of care, and to generate cost savings from any reductions in payments to cancer hospitals that are exempted from the PPS, we recommended that Congress consider requiring Medicare to pay these PPS-exempt cancer hospitals as it pays PPS teaching hospitals, or provide the Secretary of HHS with the authority to otherwise modify how Medicare pays PPS-exempt cancer hospitals, and provide that all forgone outpatient payment adjustment amounts be returned to the Supplementary Medical Insurance Trust Fund. The 21st Century Cures Act, enacted in December 2016, slightly reduces the additional payments cancer hospitals receive for outpatient services. However, the law keeps in place the payment system for outpatient services that differs from how Medicare pays PPS teaching hospitals. Moreover, the law does not change how PPS-exempt cancer hospitals are paid for inpatient services. Until Medicare pays PPS-exempt cancer hospitals in a way that encourages efficiency, rather than largely on the basis of reported costs, Medicare remains at risk for overspending almost $500 million per year. Medicare Advantage and other Medicare health plans. 
The number and percentage of Medicare beneficiaries enrolled in MA have grown steadily over the past several years, increasing from 8.1 million (20 percent of all Medicare beneficiaries) in 2007 to 17.5 million (32 percent of all Medicare beneficiaries) in 2015. We have identified opportunities for CMS to improve the accuracy of MA payments by better accounting for diagnostic coding differences between MA and FFS. We previously reported that shortcomings in CMS’s adjustment resulted in excess payments to MA plans totaling an estimated $3.2 billion to $5.1 billion over a 3-year period from 2010 through 2012. In January 2012, we recommended that CMS take steps to improve the accuracy of the adjustment made for differences in diagnostic coding practices by, for example, accounting for additional beneficiary characteristics such as sex, health status, and Medicaid enrollment status, as well as including the most recent data available. Although CMS has taken steps to improve the accuracy of the risk adjustment model and Congress has taken steps to increase the adjustment, CMS has not improved its methodology for calculating the diagnostic coding adjustment. Until CMS shows the sufficiency of the diagnostic coding adjustment or implements an adjustment based on analysis using an updated methodology, payments to MA plans may not accurately account for differences in diagnostic coding between these plans and traditional Medicare providers. CMS could achieve billions of dollars in additional savings by better adjusting for differences between MA plans and traditional Medicare providers in the reporting of beneficiary diagnoses. We have also found that improved federal oversight is needed in multiple areas of Medicaid, including financing transparency and oversight, as well as oversight of Medicaid demonstrations. Growing expenditures for and oversight of large Medicaid demonstrations.
Medicaid demonstrations have become a significant proportion of Medicaid expenditures, growing steadily from about $50 billion, or about 14 percent of total Medicaid expenditures in fiscal year 2005, to $165 billion, or close to one-third of total Medicaid expenditures in fiscal year 2015. Between 2002 and 2014, we reviewed several states’ approved comprehensive demonstrations and found that HHS had not ensured that all of the demonstrations would be budget neutral to the federal government. We recommended that HHS improve the process for reviewing and approving Medicaid demonstrations and, in January 2008, we elevated this matter for consideration by Congress. Legislation that would require HHS to improve the Medicaid demonstration review process consistent with our recommendations was introduced in the 114th Congress but was not enacted. In October 2016, CMS officials told us that they had established new budget neutrality policies to reduce demonstration spending limits and they are implementing the policies over time. However, these new policies do not address all of the problematic budget neutrality methodologies that we identified. We maintain that improving the process for reviewing, approving, and making transparent the basis for spending limits approved for Medicaid demonstrations could potentially save billions of dollars. Financing and provider payment transparency and oversight. To effectively oversee state Medicaid programs, CMS needs complete and accurate information on payments to individual providers. We have raised concerns about states making large Medicaid supplemental payments—payments in addition to the regular, claims-based payments made to providers for services they provided—to institutional providers, such as hospitals and nursing facilities. In fiscal year 2015, these payments totaled about $55 billion. In April 2015, we concluded that federal oversight of Medicaid payments is limited in part by insufficient federal information on payments.
Oversight is also limited because CMS does not have a policy and process for determining that payments are economical and efficient. As a result, CMS may not identify or examine excessive payments states make to individual providers. We recommended that CMS ensure that states report accurate provider-specific payment data for all payments, develop a policy establishing criteria to determine when provider-specific payments are economical and efficient, and develop a process for identifying and reviewing payments to individual providers to determine if they meet the established criteria. CMS planned to publish a proposed rule for public comment in fall 2016 to improve the oversight of supplemental payments made to individual providers, but as of March 2017, the proposed rule had not been published. CMS could save hundreds of millions of dollars by taking steps to implement our recommendations. We have identified numerous opportunities for the Department of Veterans Affairs (VA) to more effectively and efficiently achieve its mission to promote the health, welfare, and dignity of all veterans by ensuring that they receive medical care, benefits, and social services. In fiscal year 2016, VA spent about $179.6 billion—about 4 percent of federal outlays—for veterans’ benefits and services. Our work suggests that effectively implementing these actions could yield cost savings and efficiencies that would improve the delivery of services. VA health care. Since designating VA health care as a high-risk area in 2015, we continue to be concerned about VA’s ability to ensure its resources are being used cost-effectively and efficiently to improve veterans’ timely access to health care, and to ensure the quality and safety of that care. VA operates one of the largest health care delivery systems in the nation, with 168 medical centers and more than 1,000 outpatient facilities organized into regional networks. 
VA has faced a growing demand by veterans for its health care services. To help address veterans’ health care needs, VA’s budgetary resources have more than doubled since 2006 to $91.2 billion in fiscal year 2016. Despite these increased resources, there have been numerous reports in this same period—by us, VA’s Office of the Inspector General, and others—of VA facilities failing to provide timely health care. In some cases, veterans have reportedly been harmed by the delays in care or VA’s failure to provide care at all. Among the concerns we have raised in these reports is the lack of reliability, transparency, and consistency of VA’s budget estimates and tracking of obligations. These concerns were evident in June 2015, when VA requested additional funds from Congress because agency officials projected a funding gap in fiscal year 2015 of about $3 billion in its medical services appropriation account. The projected funding gap was largely due to administrative weaknesses, which slowed the utilization of the Veterans Choice Program in fiscal year 2015 and resulted in higher-than-expected demand for VA’s previously established community care programs. To better align cost estimates for community care services with associated obligations, VA was examining options for replacing its outdated financial information technology systems, as we reported in June 2016, and VA has since established a projected completion date of fiscal year 2020 for that effort. However, VA continues to underestimate the resources it needs to provide health care services efficiently and effectively. For example, in February 2017, a VA official told us that VA would need to request additional funding for fiscal year 2018 above already appropriated funding for that year. VA benefits. VA provides billions of dollars in monthly disability compensation to veterans with disabling conditions caused or aggravated by their military service.
In recognition of cases where the benefit does not adequately compensate veterans who are unable to maintain substantially gainful employment, VA may provide supplemental compensation through its Total Disability Individual Unemployability (TDIU) benefit. We found that 54 percent of disabled veterans receiving TDIU benefits in fiscal year 2013 were 65 years or older. By comparison, other benefit programs, such as Social Security Disability Insurance, consider retirement age a cause for ineligibility and convert benefits for those reaching their full retirement age to a Social Security retirement benefit. We recommended that VA develop a plan to study whether age should be considered when deciding if veterans are unemployable. VA concurred with our recommendation and began reviewing disability eligibility policies and procedures in April 2015, including consideration of age in claim decisions. The review was ongoing as of February 2017. If it were determined that TDIU benefits should only be provided to those veterans younger than their full Social Security retirement age, VA could achieve significant cost savings—$15 billion from 2015 through 2023, according to a CBO estimate. In fiscal year 2016, the Social Security Administration (SSA) spent about $979.7 billion, roughly 23 percent of federal outlays. We have identified a number of opportunities for SSA to improve the integrity of its programs and achieve cost savings. Its two largest programs—Old-Age and Survivors Insurance (OASI), which provides retirement benefits, and Disability Insurance (DI), which provides benefits to individuals who cannot work because of a disability—together paid out more than $905 billion in fiscal year 2016. Benefits provided under these programs are subject to several provisions that offset benefits for individuals who receive both Social Security benefits and similar benefits under another program, such as state and local pensions or workers’ compensation.
In some of these cases, SSA is required to offset or reduce the amount it pays to account for these other benefits. We have reported that SSA could take additional steps to better enforce these rules and avoid paying duplicative benefits. Social Security offsets. SSA needs accurate information from state and local governments on retirees who receive pensions from employment not covered under Social Security. SSA needs this information to fairly and accurately apply the Government Pension Offset (GPO), which generally applies to spouse and survivor benefits, and the Windfall Elimination Provision (WEP), which applies to retirement and disability benefits. Congress could consider giving IRS the authority to collect the information that SSA needs on government pension income to administer the GPO and the WEP accurately and fairly. Implementing this action could save $2.4 billion to $7.9 billion over 10 years, if enforced both retrospectively and prospectively, based on estimates from CBO and SSA. The estimated savings would be less if SSA only enforced the offsets prospectively, as it would not reduce benefits already received. Disability and unemployment benefits. Current law does not preclude the receipt of overlapping DI and Unemployment Insurance (UI) benefits. We previously found that, in fiscal year 2010, 117,000 individuals received concurrent cash benefit payments from these programs totaling more than $850 million. In 2014, we reported that Congress should consider passing legislation to require SSA to offset DI benefits for any UI benefits received in the same period. As of March 2017, legislation had not been enacted. Several bills that would have prevented concurrent receipt of SSA DI and UI benefits, as we suggested in our 2014 report, were introduced in the 114th Congress, including the Social Security Disability Insurance and Unemployment Benefits Double Dip Elimination Act.
If new legislation is introduced in the 115th Congress and enacted, the change could save $1.9 billion over 10 years in the DI program, according to CBO. SSA’s DI program requires beneficiaries to meet certain medical and financial requirements in order to maintain eligibility for benefits. We have identified a number of opportunities for SSA to save money by improving its ability to determine whether beneficiaries have regained the ability to work and, if they are working, to gather information on wages to avoid improper payments to beneficiaries earning above program limits. Disability Insurance overpayments. DI overpayments often result when a beneficiary returns to work and starts earning income above a certain level, but the earnings activity is not properly reported to or processed by SSA. We estimated that SSA overpaid individuals $11.5 billion during fiscal years 2005 through 2014 because their work activity resulted in earnings that exceeded program limits. SSA may waive overpayments under some circumstances, in which case collection of the debt is terminated, and allows flexibility to administratively waive low dollar amounts. In October 2015, we identified several weaknesses in SSA’s process for handling work reports and waivers, and we made several recommendations—including that SSA study the costs and benefits of automated reporting options to enhance the ease and integrity of the work reporting process and take additional steps to ensure compliance with waiver policies, including updating its Debt Management System to ensure that debts over $1,000 are not improperly waived. SSA agreed with this recommendation. Regarding work reporting, SSA was drafting business processes as of March 2017 to (1) build an Internet and telephone wage reporting system for DI beneficiaries and (2) contract with third-party payroll providers to receive monthly earnings data that will allow SSA to automatically make benefit adjustments.
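The earnings-based check underlying those planned processes can be sketched simply: compare each month's reported wages against a program limit and flag the months that exceed it. The threshold value, function name, and record layout below are illustrative assumptions for this sketch, not actual SSA rules or systems.

```python
# Minimal sketch of an automated earnings check for work-activity reporting.
# The monthly limit is a hypothetical placeholder, not SSA's actual
# substantial-gainful-activity threshold, which changes from year to year.
MONTHLY_EARNINGS_LIMIT = 1_170

def flag_overpayment_months(monthly_earnings):
    """Return the months in which reported earnings exceed the program limit."""
    return [month for month, amount in monthly_earnings.items()
            if amount > MONTHLY_EARNINGS_LIMIT]

# Example: earnings data as it might arrive from a payroll provider feed.
reported = {"2017-01": 900, "2017-02": 1_500, "2017-03": 1_250}
print(flag_overpayment_months(reported))  # ['2017-02', '2017-03']
```

In practice, monthly feeds from third-party payroll providers would let a check like this run automatically, so that benefit adjustments could be made before overpayments accumulate.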
Until these new processes are implemented, the incidence of overpayments will likely remain high due to the lack of convenient reporting options for beneficiaries, failure of beneficiaries to self-report, and SSA processing errors. Regarding waivers, SSA had not updated its Debt Management System as of March 2017, and commented that it lacks the funds to do so. Fully implementing these recommendations would help prevent the loss of billions of dollars by preventing overpayments in the first place, as well as improper waivers of overpayments once they occur. Disability reviews. SSA is generally required to conduct continuing disability reviews (CDR) to determine whether DI and Supplemental Security Income recipients remain eligible for benefits based on their medical condition and ability to work. In February 2016, we reported that SSA’s process for targeting CDRs does not maximize potential savings for the government. We recommended that SSA further consider cost savings when prioritizing reviews. SSA partially agreed with our recommendation, stating that, although it could do more to increase the return on its CDRs, the agency’s statistical models and prioritization process already do much of what was recommended. However, we believe that SSA could refine its prioritization process by factoring in actuarial considerations in addition to its existing statistical models. SSA had not taken action as of February 2017. If SSA further incorporates cost savings into its process for prioritizing CDRs to conduct, the agency could realize greater savings by targeting cases with the highest average potential savings among those with the highest likelihood of benefit cessation. Many of the results the federal government seeks to achieve require the coordinated effort of more than one federal agency, level of government, or sector.
OMB manages and coordinates many government-wide efforts, and its involvement is critical to continued progress in improving the efficiency and effectiveness of government programs. OMB also plays a critical role in the management of improper payments, tax expenditures, and the Digital Accountability and Transparency Act of 2014 (DATA Act). Reducing acquisition costs. Between fiscal years 2011 and 2015, federal agencies spent almost $2 billion through OMB’s federal strategic sourcing initiatives and achieved an estimated $470 million in savings. Implementing our recommendations related to federal acquisitions would help agencies achieve significant savings. In 2016, we found that OMB and the General Services Administration needed to take actions to hold federal agencies more accountable for the results of federal strategic sourcing initiatives. For example, the seven largest federal agencies that comprised the Leadership Council—a cohort of large federal agencies responsible for federal strategic sourcing initiatives governance—directed less than 10 percent of their spending to the types of goods and services offered under the federal strategic sourcing initiatives in fiscal year 2015. As a result, they missed an opportunity to potentially save $1 billion. OMB generally agreed with these recommendations. It is important that OMB continue to expand this approach to other high-spend categories in a timely fashion to help agencies reap billions of dollars in potential savings. Information technology investment portfolio management. Federal agencies spend billions of dollars each year to meet their increasing demand for information technology (IT). In March 2012, OMB launched an initiative, referred to as PortfolioStat, to maximize the return on IT investments across the government’s portfolio.
PortfolioStat is designed to assist agencies in assessing the current maturity of their IT investment management process, making decisions on eliminating duplicative investments, and moving to shared solutions (such as cloud computing) within and across agencies. In 2013, we made several recommendations to OMB regarding the PortfolioStat initiative. For example, we recommended that OMB direct the Federal Chief Information Officer to improve transparency of and accountability for PortfolioStat by publicly disclosing planned and actual data consolidation efforts and related cost savings by agency. While OMB disagreed with the recommendation, as of March 2017 it had taken steps to improve transparency of and accountability for PortfolioStat by displaying actual data consolidation savings on the federal information technology dashboard. However, OMB stated that it does not track planned cost savings and cost avoidance figures and did not provide any plans to do so. OMB’s continued attention to this recommendation and our government-wide high-risk area Improving the Management of IT Acquisitions and Operations is essential to enabling agencies to demonstrate progress in improving their portfolios of IT investments. Improving the transparency of and accountability for PortfolioStat by publicly disclosing both planned and actual data consolidation efforts and related cost savings by agency would give stakeholders, including Congress and the public, a means to monitor agencies’ progress and hold them accountable for reducing duplication and achieving cost savings. Fully implementing the actions in this area could result in billions of dollars in additional savings. Federal data center consolidation. 
Over time, the federal government's increasing demand for IT has led to a dramatic rise in the number of federal data centers (defined as data processing and storage facilities over 500 square feet with strict availability requirements) and a corresponding increase in operational costs. In 2011, we identified the need for OMB to work with agencies to establish goals and targets for consolidation (both in terms of cost savings and reduced data centers), maintain strong oversight of the agencies' efforts, and look for consolidation opportunities across agencies. Since 2011, OMB has taken steps to look for data center consolidation opportunities across agencies; however, continued evidence of agencies not fully reporting their savings demonstrates the importance of OMB's continued oversight. As of March 2017, agencies collectively reported having 10,058 data centers, of which 4,679 were reported closed. Agencies also reported that they planned to close another 1,358 data centers—for a total of 6,037 closed—by the end of fiscal year 2019. The agencies reported achieving approximately $2.8 billion in cost savings or avoidances from their data center consolidation and optimization efforts from fiscal years 2012 through 2016. Further, as of December 2016, agencies were planning a total of approximately $378 million in cost savings between fiscal years 2016 and 2018—significantly less than OMB's $2.7 billion cost savings goal for agencies to achieve by the end of fiscal year 2018. All of the recommendations that we made to 10 agencies in March 2016 to complete their planned data center cost savings targets for fiscal years 2016 through 2018 remain open. Going forward, it will be important for OMB to continue its oversight of agencies' data center consolidation efforts to better ensure that the consolidation and optimization efforts are meeting their established objectives. Geospatial investments. 
The federal government collects, maintains, and uses geospatial information linked to specific geographic locations to help in decision making and to support many functions, including national security, law enforcement, health care, and environmental protection. Many activities, such as maintaining roads and responding to natural disasters, can depend on critical analysis of geospatial information. Further, multiple federal agencies may provide services at the same geographic locations and may independently collect similar geospatial information about those locations. In 2012, we recommended that OMB develop a mechanism, or modify existing mechanisms, to identify and report annually on all geospatial-related investments, including dollars invested and the nature of the investment. In responding to the recommendation at the time of the report, OMB noted that it had developed new analysis tools and updated its models to improve its ability to identify and report on geospatial-related investments. As of March 2017, OMB had made progress in developing a way to identify and report annually on all geospatial-related investments but had not completed its efforts. Better coordination by agencies and better oversight by OMB could help to reduce duplication of geospatial investments, providing the opportunity for potential savings of millions of dollars on the estimated billions of dollars spent annually on geospatial information technology. Ensuring the security of federal information systems and cyber critical infrastructure and protecting the security of personally identifiable information. Federal agencies and our nation’s critical infrastructures—such as energy, transportation systems, communications, and financial services—are dependent on computerized (cyber) information systems and electronic data to carry out operations and to process, maintain, and report essential information. 
The security of these systems and data is vital to public confidence and the nation’s safety, prosperity, and well-being. Protecting the privacy of personally identifiable information (PII) that is collected, maintained, and shared by both federal and nonfederal entities is also critical. Regarding PII, advancements in technology, such as new search technology and data analytics software for searching and collecting information, lower data storage costs, and ubiquitous Internet and cellular connectivity have made it easier for individuals and organizations to correlate data and track it across large and numerous databases. These advances—combined with the increasing sophistication of hackers and others with malicious intent, and the extent to which both federal agencies and private companies collect sensitive information about individuals—have increased the risk of PII being exposed and compromised. Actions initiated by OMB and the Federal Chief Information Officer, such as the 30-Day Cybersecurity Sprint and the October 30, 2015, cybersecurity strategy and implementation plan, reflect an increased level of attention by OMB to the security of federal networks, systems, and data at civilian agencies. Consistent with our 2015 recommendations for developing a federal cybersecurity strategy, OMB’s strategy identifies key actions, responsibilities, and timeframes for implementation as well as mechanisms for tracking progress and holding individuals accountable. These actions should help federal agencies stem the rising tide of information security incidents. In addition, OMB should continue to focus its attention on implementing our recommendations to (1) address agency cyber incident response practices in its oversight of agency information security programs and (2) collaborate with stakeholders to enhance reporting guidance for the inspector general community. 
Doing so will enable federal agencies to better respond to cyber attacks and will provide for more consistent and useful reporting to the Congress. Better coordination among programs that support employment for people with disabilities. In 2010, an estimated one in six working-age Americans reported having a disability, and the federal government obligated more than $4 billion in fiscal year 2010 for employment-related supports for people with disabilities. Lack of coordination is, in part, why federal disability programs have remained on our high-risk list since 2003. Meanwhile, SSA paid out almost $196 billion in fiscal year 2015 in income supports for people with disabilities who cannot work, and historically, people with disabilities have experienced higher unemployment and poverty rates than those without disabilities. In 2012, we found overlap and limited coordination among 45 programs in nine federal agencies that support employment for people with disabilities—programs that have been created or have evolved over time to address barriers in employment for people with disabilities, resulting in a fragmented system of supports. To improve coordination and spur more efficient and economical service delivery in overlapping program areas, OMB should consider establishing measurable, governmentwide goals for employment of people with disabilities, and agencies should establish related measures and indicators and collect additional data to ensure goals are being met. Establishing such goals and related measures could further enhance coordination and help improve employment outcomes for people with disabilities, including finding or maintaining employment outside of the federal government. The tax gap—the difference between taxes owed to the government and total taxes paid on time—has been a persistent problem for decades despite the Internal Revenue Service’s (IRS) efforts to improve voluntary compliance. 
In 2016, IRS estimated that for tax years 2008 to 2010, the voluntary compliance rate averaged 81.7 percent of taxes owed, resulting in an average annual gross tax gap of $458 billion. After accounting for an estimated $52 billion in late payments and payments resulting from IRS enforcement actions, the net compliance rate averaged 83.7 percent of taxes owed, resulting in an average annual net tax gap of $406 billion for those years. The largest part of the tax gap is from underreporting, when taxpayers inaccurately report tax liabilities on tax returns. (See figure 11.) Other forms of noncompliance are underpayment, when taxpayers fail to pay taxes due from filed returns, and nonfiling, when they fail to file a required tax return on time or at all. We have identified actions IRS and Congress can take to reduce the tax gap. For example, we recommended that IRS collect more data on noncompliance and determine resource allocation strategies for its enforcement efforts, such as for partnerships; strengthen referral programs so whistleblowers can more easily submit information to IRS about tax noncompliance; and enhance taxpayer services, such as by developing a long-term strategy for providing web-based services to taxpayers. Likewise, Congress could help address the tax gap by expanding third-party information reporting requirements, requiring additional taxpayers to file tax and information returns electronically, regulating paid tax return preparers, and, as previously discussed, providing IRS with broad authority to correct errors where there are inconsistencies within a taxpayer’s tax return. In many cases, agencies also need to take action to provide decision makers with additional or improved information on the performance and costs of policies or programs. 
In particular, decision making could be improved by strengthening internal controls over financial reporting to ensure the statements are fully auditable, increasing attention to tax expenditures, and effectively implementing the DATA Act. Ensuring the federal government’s financial statements are fully auditable. The U.S. government’s consolidated financial statements are intended to present the results of operations and the financial position of the federal government as if the government were a single enterprise. Since the federal government began preparing consolidated financial statements 20 years ago, three major impediments have prevented us from rendering an opinion on the federal government’s accrual-based consolidated financial statements: (1) serious financial management problems at DOD that have prevented its financial statements from being auditable, (2) the federal government’s inability to adequately account for and reconcile intragovernmental activity and balances between federal entities, and (3) the federal government’s ineffective process for preparing the consolidated financial statements. Eliminating the weaknesses that underlie these impediments would improve the reliability of financial information and strengthen financial decision making. Over the years, we have made a number of recommendations to OMB, Treasury, and DOD to address these issues. Generally, these entities have taken or plan to take actions to address these recommendations. 
The material weaknesses in internal control underlying these three major impediments continued to (1) hamper the federal government’s ability to reliably report a significant portion of its assets, liabilities, costs, and other related information; (2) affect the federal government’s ability to reliably measure the full cost, as well as the financial and nonfinancial performance, of certain programs and activities; (3) impair the federal government’s ability to adequately safeguard significant assets and properly record various transactions; and (4) hinder the federal government from having reliable financial information to operate in an efficient and effective manner. Increased attention to tax expenditures. Tax expenditures are sometimes used to provide economic relief to selected groups of taxpayers, to encourage certain behavior, or to accomplish other goals. The goals they seek to advance may be similar to the goals of mandatory or discretionary spending programs. According to Treasury, in fiscal year 2016 there were 167 tax expenditures, representing an estimated total of $1.4 trillion in forgone tax revenue. However, despite their use as a policy tool, tax expenditures are not regularly reviewed, and their outcomes are not measured as closely as those of spending programs. We recommended that OMB take actions to develop a framework for evaluating tax expenditure performance and to regularly review tax expenditures in executive branch budget and performance review processes. However, OMB has not developed a systematic approach for conducting such reviews and has not reported progress on addressing data availability and analytical challenges in evaluating tax expenditures since the President’s fiscal year 2012 budget. In July 2016, we recommended that OMB work with agencies to identify which tax expenditures contribute to agency goals, and OMB generally agreed with the recommendation. 
Absent such analysis, policymakers have little way of knowing whether these tax provisions support achieving the intended federal outcomes, and they lack information to compare the cost and efficacy of tax expenditures with those of other policy tools. Effective implementation of the DATA Act. We have reported that the DATA Act holds great promise for improving the transparency and accountability of federal spending data. Full and effective implementation of the act would enable—for the first time—the federal government as a whole to report on funds at multiple points in the federal spending lifecycle and would significantly increase the types and transparency of data available to Congress, agencies, and the general public. OMB and Treasury have taken significant steps toward implementing the DATA Act’s various requirements, but agencies have reported that they continue to face challenges, including issues involving systems integration, lack of resources, evolving and complex reporting requirements, and inadequate guidance. As agencies begin to report data required by the act in May 2017, attention will increasingly focus on the quality of the data being produced. Prior agency financial audits and inspector general reviews have identified material weaknesses and significant deficiencies that present risks to agencies’ ability to submit quality data. We also identified challenges with guidance that will affect data quality, as well as limitations in the processes used to provide and communicate needed quality assurances to users. Moving forward, OMB and Treasury need to continue to address the issues that we identified in our previous work, as well as our open recommendations related to implementation of the act and data transparency. The government must act soon to change the long-term fiscal path or risk significant disruption to individuals and the economy. 
Congress will need to discuss the entire range of federal activities and spending—entitlement programs, other mandatory spending, discretionary spending, and revenue. Moving forward, the federal government will need to make tough choices in setting priorities and ensuring that spending leads to positive results. Having a broader fiscal plan to put the federal government on a more sustainable long-term path would help with these tough decisions. Thank you, Chairman Black, Ranking Member Yarmuth, and Members of the Committee. This concludes my prepared statement. I would be pleased to answer questions. For further information on this testimony, please contact Susan J. Irving, Director of Federal Budget Analysis, Strategic Issues, who may be reached at (202) 512-6806 or [email protected], and J. Christopher Mihm, Managing Director, Strategic Issues, who may be reached at (202) 512-6806 or [email protected]. Contact points for the individual areas listed in our 2017 Fragmentation, Overlap, and Duplication annual report can be found on the first page of each area in GAO-17-491SP. Contact points for the individual high-risk areas are listed in GAO-17-317 and on our high-risk website. Contact points for our Congressional Relations and Public Affairs offices may be found on the last page of this statement. The Nation’s Fiscal Health: Action is Needed to Address the Federal Government’s Fiscal Future. GAO-17-237SP. Washington, D.C.: January 17, 2017. GAO, Fiscal Outlook & The Debt Key Issues Page, accessed April 28, 2017, http://www.gao.gov/fiscal_outlook/overview. Fiscal Outlook: Addressing Improper Payments and the Tax Gap Would Improve the Government’s Fiscal Position. GAO-16-92T. Washington, D.C.: October 1, 2015. Social Security’s Future: Answers to Key Questions. GAO-16-75SP. Washington, D.C.: October 27, 2015. Improper Payments: CFO Act Agencies Need to Improve Efforts to Address Compliance Issues. GAO-16-55. Washington, D.C.: June 30, 2016. 
Improper Payments: Government-Wide Estimates and Use of Death Data to Help Prevent Payments to Deceased Individuals. GAO-15-482T. Washington, D.C.: March 16, 2015. Disaster Relief: Agencies Need to Improve Policies and Procedures for Estimating Improper Payments. GAO-15-209. Washington, D.C.: February 27, 2015. Improper Payments: TRICARE Measurement and Reduction Efforts Could Benefit from Adopting Medical Record Reviews. GAO-15-269. Washington, D.C.: February 18, 2015. Improper Payments: DOE’s Risk Assessments Should Be Strengthened. GAO-15-36. Washington, D.C.: December 23, 2014. Improper Payments: Inspector General Reporting of Agency Compliance under the Improper Payments Elimination and Recovery Act. GAO-15-87R. Washington, D.C.: December 9, 2014. Improper Payments: Government-Wide Estimates and Reduction Strategies. GAO-14-737T. Washington, D.C.: July 9, 2014. Partnerships and S Corporations: IRS Needs to Improve Information to Address Tax Noncompliance. GAO-14-453. Washington, D.C.: May 14, 2014. Paid Tax Return Preparers: In a Limited Study, Preparers Made Significant Errors. GAO-14-467T. Washington, D.C.: April 8, 2014. Tax Gap: IRS Could Significantly Increase Revenues by Better Targeting Enforcement Resources. GAO-13-151. Washington, D.C.: December 5, 2012. Tax Gap: Sources of Noncompliance and Strategies to Reduce It. GAO-12-651T. Washington, D.C.: April 19, 2012. Debt Limit: Market Response to Recent Impasses Underscores Need to Consider Alternative Approaches. GAO-15-476. Washington, D.C.: July 9, 2015. Debt Limit: Analysis of 2011-2012 Actions Taken and Effect of Delayed Increase on Borrowing Costs. GAO-12-701. Washington, D.C.: July 23, 2012. The Effects of Delays in Increasing the Debt Limit Podcast, accessed April 28, 2017, http://www.gao.gov/multimedia/podcasts/592827. Debt Limit Alternative Approaches Podcast, accessed April 28, 2017, http://www.gao.gov/multimedia/podcasts/670669. 
Financial Audit: Fiscal Years 2016 and 2015 Consolidated Financial Statements of the U.S. Government. GAO-17-283R. Washington, D.C.: January 12, 2017. Understanding the Primary Components of the Annual Financial Report of the United States Government. GAO-09-946SP. Washington, D.C.: September 25, 2009. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. The Congress and administration face serious economic, security, and social challenges that will require difficult policy choices in the short term about the level of federal spending and investments as well as ways to obtain needed resources. At the same time, the federal government is highly leveraged in debt by historical norms. In addition to near-term financing decisions, a broader fiscal plan is needed to put the government on a more sustainable long-term path. In January 2017, GAO reported on the need for such a plan by outlining the fiscal condition of the U.S. government and its future path based on current fiscal policies. This statement summarizes GAO’s work on this issue and also discusses how Congress and executive branch agencies can help in the near term by taking action to address improper payments; duplication, overlap, or fragmentation; high-risk areas; and the tax gap. The Federal Government Is on an Unsustainable Fiscal Path. According to the 2016 Financial Report, the federal deficit in fiscal year 2016 increased to $587 billion—up from $439 billion in fiscal year 2015. 
Federal receipts grew a modest $18.0 billion due primarily to extensions of tax preferences, but that growth was outweighed by a $166.5 billion increase in spending, driven by Social Security, Medicare, Medicaid, and interest on debt held by the public (net interest). Debt held by the public rose as a share of gross domestic product (GDP), from 74 percent at the end of fiscal year 2015 to 77 percent at the end of fiscal year 2016. This compares to an average of 44 percent of GDP since 1946. The 2016 Financial Report, the Congressional Budget Office (CBO), and GAO projections all show that, absent policy changes, the federal government’s fiscal path is unsustainable and that the debt-to-GDP ratio would surpass its historical high of 106 percent within 15 to 25 years (see figure). Of further concern is the fact that none of the long-term projections include certain other fiscal risks that could affect the federal government’s financial condition in the future. Some examples of such fiscal risks are the Pension Benefit Guaranty Corporation’s funding and governance structure, the U.S. Postal Service’s retiree health and pension funds, government insurance programs such as the National Flood Insurance Program, and military, economic, financial, or weather-related crises. Importance of Early Action: The 2016 Financial Report, CBO, and GAO all make the point that the longer action is delayed, the greater and more drastic the changes will have to be. As shown in the timeline, the trust funds face financial challenges that add to the importance of beginning action. It is important to develop and begin to implement a long-term fiscal plan for returning to a sustainable path. Debt Limit Is Not a Control on Debt: The current debt limit is not a control on debt, but rather an after-the-fact measure that restricts the Department of the Treasury’s authority to borrow to finance the decisions already enacted by Congress and the President. 
GAO has suggested Congress consider alternative approaches that would better link decisions about borrowing to finance the debt with decisions about spending and revenue at the time those decisions are made. Opportunities to Begin to Address the Government’s Fiscal Health. GAO has identified actions Congress and agencies can take now to help improve the fiscal situation. GAO highlighted five agencies—the Departments of Defense, Health and Human Services, and Veterans Affairs; the Social Security Administration; and the Office of Management and Budget. These agencies made up 69 percent—$3.0 trillion—of federal outlays in fiscal year 2016. Although these actions alone cannot put the federal government on a sustainable fiscal path, they would improve both the fiscal situation and the federal government’s operations. Actions Needed to Reduce Improper Payments: Reducing payments that should not have been made or that were made in an incorrect amount could yield significant savings. The improper payments estimate in fiscal year 2016 was over $144 billion. Since fiscal year 2003, cumulative estimates have totaled over $1.2 trillion. Opportunities Exist to Improve the Efficiency and Effectiveness of Government Operations: GAO has identified government operations that are at high risk of fraud, waste, abuse, and mismanagement and has presented numerous areas to reduce, eliminate, or better manage fragmentation, overlap, or duplication; achieve cost savings; or enhance revenue. Fully addressing the issues raised could yield increased savings, better services to the public, and improved federal programs. Multiple Strategies Needed to Address the Persistent Tax Gap: Reducing the gap between taxes owed and those paid on time could increase tax collections by billions. Most recently, the annual gross tax gap was estimated to be $458 billion. 
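As an illustration, the tax gap estimates cited in this statement (a gross gap of $458 billion, $52 billion recovered late or through enforcement, and voluntary and net compliance rates of 81.7 and 83.7 percent) fit together arithmetically. The sketch below is not part of the GAO analysis; it simply reconciles the reported figures, and the taxes-owed base it computes is implied by those figures rather than separately reported.

```python
# Illustrative reconciliation of IRS's average annual tax gap estimates
# for tax years 2008-2010, as cited in this statement. The taxes-owed
# base is implied by the reported figures, not separately reported.

VOLUNTARY_COMPLIANCE_RATE = 0.817  # share of taxes owed paid voluntarily and on time
GROSS_TAX_GAP = 458e9              # dollars per year, on average
LATE_AND_ENFORCED_PAYMENTS = 52e9  # late payments plus IRS enforcement collections

# Gross gap = taxes owed * (1 - voluntary compliance rate),
# so the implied annual taxes-owed base is:
taxes_owed = GROSS_TAX_GAP / (1 - VOLUNTARY_COMPLIANCE_RATE)

# The net gap nets out what IRS eventually recovers.
net_tax_gap = GROSS_TAX_GAP - LATE_AND_ENFORCED_PAYMENTS

# Net compliance rate implied by the figures above; rounding of the
# published inputs accounts for the small difference from the reported
# 83.7 percent.
net_compliance_rate = 1 - net_tax_gap / taxes_owed

print(f"Implied taxes owed:  ${taxes_owed / 1e9:,.0f} billion per year")
print(f"Net tax gap:         ${net_tax_gap / 1e9:,.0f} billion per year")
print(f"Net compliance rate: {net_compliance_rate:.1%}")
```

The $406 billion net gap matches the reported figure exactly, and the implied net compliance rate of roughly 83.8 percent is within rounding of the reported 83.7 percent.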
Action Needed to Improve Information on Programs and Fiscal Operations: Decision making could be improved by ensuring the government’s financial statements are fully auditable, increasing attention to tax expenditures, and effectively implementing the Digital Accountability and Transparency Act of 2014.
The U.S. election system is highly decentralized and based upon a complex interaction of people (election officials and voters), processes, and technology. Voters, local election jurisdictions, states, and the federal government all play important roles in ensuring that ballots are successfully cast in an election. The elections process within the United States is primarily the responsibility of the individual states and their election jurisdictions. States have considerable discretion in how they organize the elections process, and this is reflected in the diversity of processes and deadlines that states have for voter registration and absentee voting, including the processes and deadlines that apply to military voters. Each of the 50 states, the District of Columbia, and the 4 U.S. territories has its own election system with a somewhat distinct approach. Within each of these 55 systems, the guidelines and procedures established for local election jurisdictions can be very general or specific. Even when imposing requirements on the states, such as the statewide voter registration systems and provisional voting required by the Help America Vote Act of 2002, Congress left states discretion in how to implement those requirements and did not require uniformity. Executive Order 12642, dated June 8, 1988, designated the Secretary of Defense or his designee as responsible for carrying out the federal functions under the Uniformed and Overseas Citizens Absentee Voting Act (UOCAVA). UOCAVA requires the presidential designee to (1) compile and distribute information on state absentee voting procedures, (2) design absentee registration and voting materials, (3) work with state and local election officials in carrying out the act, and (4) report to Congress and the President after each presidential election on the effectiveness of the program’s activities, including a statistical analysis of UOCAVA voter participation. 
DOD Directive 1000.4, dated April 14, 2004, is DOD’s implementing guidance for the federal voting assistance program, and it designated the Under Secretary of Defense for Personnel and Readiness (USD P&R) as responsible for administering and overseeing the program. For 2004, FVAP had a full-time staff of 13 and a fiscal year budget of approximately $6 million. FVAP’s mission is to (1) inform and educate U.S. citizens worldwide of their right to vote, (2) foster voting participation, and (3) protect the integrity of and enhance the electoral process at the federal, state, and local levels. DOD Directive 1000.4 also sets forth DOD and service roles and responsibilities in providing voting education and assistance. In accordance with the directive, FVAP relies heavily upon the military services for distribution of absentee voting materials to military servicemembers. According to the DOD directive, each military service is to appoint a senior service voting representative, assisted by a service voting action officer, to oversee the implementation of the service’s voting assistance program. The directive also states that the military services are to designate trained VAOs at every level of command to provide voting education and assistance to servicemembers and their eligible dependents. One VAO on each military installation should be assigned to coordinate voting efforts conducted by VAOs in subordinate units and tenant commands. Where possible, installation VAOs should be of the civilian rank GS-12 or higher, or officer pay grade O-4 or higher. In accordance with the DOD directive, commanders designate persons to serve as VAOs. Serving as a VAO is a collateral duty, to be performed along with the servicemember’s other duties. For the 2004 presidential election, FVAP expanded its efforts beyond those taken for the 2000 election to provide military personnel tools needed to vote by absentee ballot. 
FVAP distributed more absentee voting materials and improved the accessibility of its Web site, which includes voting information. Also, FVAP conducted 102 more voting training workshops for its VAOs than it did for the 2000 election. FVAP also provided an online training course for them. FVAP also designed an electronic version of the Federal Write-in Absentee Ballot—an emergency ballot accepted by all states and territories—although its availability was not announced until a few weeks before the election. In assessing its efforts for the 2004 election, using data from its postelection surveys, FVAP attributed increased voter participation rates to an effective voter information and education program. However, in light of low survey response rates, FVAP’s estimates and conclusions should be interpreted with caution. In preparing for the 2004 election, FVAP distributed more absentee voting materials and improved the accessibility of its Web site. For the 2000 election, we reported that voting materials such as the Federal Post Card Application (FPCA)—the registration and absentee ballot request form for UOCAVA citizens—were not always available when needed. DOD officials stated that they had enough 2004 election materials for their potential absentee voters. Each service reported meeting the DOD requirement of 100 percent in-hand delivery of FPCAs to each servicemember by January 15. After the 2000 presidential election, FVAP took steps to make its Web site more accessible to UOCAVA citizens worldwide by changing security parameters surrounding the site. According to FVAP, prior to the 2004 election, its Web site was within the existing DOD “.mil” domain, which includes built-in security firewalls. Some overseas Internet service providers were consequently blocked from accessing this site because hackers were attempting to get into the DOD system. As a result, FVAP moved the site out of the DOD “.mil” domain to a less secure domain. 
In September 2004, FVAP issued a news release announcing this change and provided a list of Web site addresses that would allow access to the site. FVAP also added more election-related links to its Web site to assist UOCAVA citizens in the voting process. The Web site (which FVAP considers one of its primary vehicles for disseminating voting information and materials) provides downloadable voting forms and links to all of FVAP’s informational materials, such as the Voting Assistance Guide, Web sites of federal elected officials, and state election sites. It also contains contact information for FVAP and the military departments’ voting assistance programs. Although FVAP provided more resources to UOCAVA citizens concerning absentee voting, it is ultimately the responsibility of the voter to be aware of and understand these resources, and to take the actions needed to participate in the absentee voting process. For the 2004 election, FVAP increased the number of VAO training workshops it conducted to 164. The workshops were conducted at military installations around the world, including installations where units were preparing to deploy. In contrast, only 62 training workshops were conducted for the 2000 election. FVAP conducts workshops during years of federal elections to train VAOs in providing voting assistance. As an alternative to its in-person voting workshops, in March 2004 FVAP added an online training course to its Web site. This course was also available on CD-ROM. According to FVAP, completion of the workshop or the online course meets a DOD requirement that VAOs receive training every 2 years. Installation VAOs are responsible for monitoring completion of training. The training gives VAOs instructions for completing voting forms, discusses their responsibilities, and informs them about the resources available to conduct a successful voting assistance program. 
On October 21, 2004, just a few weeks prior to the election, FVAP issued a news release announcing an electronic version of the Federal Write-in Absentee Ballot, an emergency ballot accepted by all states and territories. UOCAVA citizens who do not receive their requested state absentee ballots in time to meet state deadlines for receipt of voted ballots can use the Federal Write-in Absentee Ballot. The National Defense Authorization Act for Fiscal Year 2005 amended the eligibility criteria for using the Federal Write-in Absentee Ballot. Prior to the change, a UOCAVA citizen had to be outside of the United States, have applied for a regular absentee ballot early enough to meet state election deadlines, and not have received the requested absentee ballot from the state. Under the new criteria, the Federal Write-in Absentee Ballot can also be used by military servicemembers stationed in the United States, as well as overseas. On the basis of its 2004 postelection survey, FVAP reported higher voter participation rates among uniformed servicemembers in its quadrennial report to Congress and the President on the effectiveness of its 2004 voting assistance efforts. The report included a statistical analysis of voter participation and discussed experiences of uniformed servicemembers during the election, as well as a description of state and federal cooperation in carrying out the requirements of UOCAVA. However, the low survey response rate raises concerns about FVAP's ability to project increased voter participation rates among military servicemembers. We reported in 2001 that some absentee ballots were disqualified for various reasons, including improperly completed ballot return envelopes, failure to provide a signature, or lack of a valid residential address in the local jurisdiction. 
We recommended that FVAP develop a methodology, in conjunction with state and local election jurisdictions, to gather nationally projectable data on disqualified military absentee ballots and reasons for their disqualification. In anticipation of gathering nationally projectable data, prior to the election, FVAP randomly selected approximately 1,000 local election officials to receive an advance copy of the postelection survey so they would know what information to collect during the election to complete the survey. The survey solicited a variety of information concerning the election process and absentee voting, such as the number of ballots issued, received, and counted, as well as reasons for ballot disqualification. In its 2005 report, FVAP cited the top two reasons for disqualification: ballots received too late and ballots returned as undeliverable. FVAP reported higher participation rates for military servicemembers in the 2004 presidential election as compared with the rate reported for the 2000 election. FVAP attributed the higher voting participation rate to an effective voter information and education program that included command support and agency emphasis. State progress in simplifying absentee voting procedures and increased interest in the election were also cited as reasons for increased voting participation. However, a low survey response rate raises concerns about FVAP's ability to project participation rate changes among uniformed servicemembers. According to FVAP, while the 2004 postelection survey was designed to provide national estimates, the survey experienced a low response rate of 27 percent. FVAP did not perform any analysis comparing those who responded to the survey with those who did not. Such an analysis would allow researchers to determine whether respondents differed in some way from nonrespondents. 
If respondents differ from nonrespondents, the results cannot be generalized across the entire population of potential survey participants. In addition, FVAP performed no analysis to account for sampling error. Sampling error occurs when a survey is sent to a sample of a population rather than to the entire population. While techniques exist to measure sampling error, FVAP did not use these techniques in its report. The practical difficulties in conducting surveys of this type may introduce other types of errors as well, commonly known as nonsampling errors. For example, errors can be introduced if (1) respondents have difficulty interpreting a particular question, (2) respondents have access to different information when answering a question, or (3) those entering raw survey data make keypunching errors. DOD has taken actions in response to our prior recommendations regarding voting assistance to servicemembers. In 2001, we recommended that DOD revise its voting guidance, improve program oversight, and increase command emphasis to reduce the variance in voting assistance to military servicemembers. In 2001, we reported that implementation of the federal voting assistance program by DOD was uneven due to incomplete service guidance, lack of oversight, and insufficient command support. To address our recommendations, DOD implemented corrective actions prior to the 2004 presidential election, such as revising voting guidance and increasing emphasis on voting education at top command levels. However, the level of assistance continued to vary at the installations we visited. Because the VAO role is a collateral duty and VAOs' understanding of and interest in the voting process differ, some variance in voting assistance may always exist. DOD plans to continue its efforts to improve absentee voting assistance. 
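To make the sampling-error concept discussed above concrete, the standard margin-of-error formula for an estimated proportion can be sketched briefly. This is only an illustration under simple-random-sampling assumptions; the participation rate and sample sizes below are hypothetical and are not FVAP's actual figures.

```python
import math

def margin_of_error(p_hat, n, z=1.96):
    """Approximate 95 percent margin of error for an estimated
    proportion p_hat from a simple random sample of size n."""
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

# Hypothetical: a 73 percent estimated participation rate measured
# from samples of different sizes (illustrative values only).
for n in (100, 1000, 10000):
    moe = margin_of_error(0.73, n)
    print(f"n={n:>6}: 73% +/- {moe * 100:.1f} percentage points")
```

The smaller the pool of respondents, the wider the uncertainty band around any reported participation rate, and even this calculation understates the problem when the response rate is low, because it assumes respondents are representative of nonrespondents.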
In response to our recommendations in 2001, the services revised their voting guidance and enhanced oversight of the military’s voting assistance program. In 2001, we reported that the services had not incorporated all of the key requirements of DOD Directive 1000.4 into their own voting policies, and that DOD exercised very little oversight of the military’s voting assistance programs. These factors contributed to some installations not providing effective voting assistance. We recommended that the Secretary of Defense direct the services to revise their voting guidance to be in compliance with DOD’s voting requirements, and provide for more voting program oversight through inspector general reviews and a lessons-learned program. Subsequent to DOD’s revision of Directive 1000.4, the services revised their guidance to reflect DOD’s voting requirements. In the 2002–03 Voting Action Plan, FVAP implemented a best practices program to support the development and sharing of best practices used among VAOs in operating voting assistance programs. FVAP included guidance on its Web site and in its Voting Assistance Guide on how VAOs could identify and submit a best practice. Identified best practices for all the services are published on the FVAP Web site and in the Voting Information News—FVAP’s monthly newsletter to VAOs. For the 2004 election, emphasis on voting education and awareness increased throughout the top levels of command within DOD. In 2001, we reported that lack of DOD command support contributed to the mixed success of the services’ voting programs and recommended that the Senior Service Voting Representatives monitor and periodically report to FVAP on the level of installation command support. To ensure command awareness and involvement in implementing the voting assistance program, in late 2003, the USD P&R began holding monthly meetings with FVAP and the Senior Service Voting Representatives and discussed the status of service voting assistance programs. 
In 2001, we also reported that some installations and units did not appoint VAOs as required by DOD Directive 1000.4. In March 2004, the Secretary of Defense and Deputy Secretary of Defense issued memorandums to the Secretaries of the military departments, the Chairman of the Joint Chiefs of Staff, and Commanders of the Combatant Commands, directing them to support voting at all levels of command. These memorandums were issued to ensure that voting materials were made available to all units and that VAOs were assigned and available to assist voters. The Chairman of the Joint Chiefs of Staff also recorded a DOD-wide message regarding the opportunity to vote and ways in which VAOs could provide assistance. This message was used by FVAP in its training presentations and was distributed to military installations worldwide. During our review, we found that each service reported to DOD that it assigned VAOs at all levels of command. Voting representatives from each service used a variety of servicewide communications to disseminate voting information and stressed the importance of voting. For example, the Marine Corps produced a videotaped interview stressing the importance of voting that was distributed servicewide. The Army included absentee voting information in a pop-up message displayed on every soldier's e-mail account. In each service, the Voting Action Officer sent periodic messages to unit VAOs, reminding them of key voting dates and areas to focus on as the election drew closer. Throughout the organizational structure, these VAOs contacted servicemembers through servicewide e-mail messages, which contained information on how to get voting assistance and reminders of voting deadlines. According to service voting representatives, some components put together media campaigns that included reminders in base newspapers, billboards, and radio and closed circuit television programs. 
They also displayed posters in areas frequented by servicemembers (such as exchanges, fitness centers, commissaries, and food court areas). Despite the efforts of DOD and the states, our April 2006 report identified two major challenges that remain in providing voting assistance to military personnel: (1) simplifying and standardizing the time-consuming, multistep absentee voting process, which includes different requirements and time frames for each state, and (2) developing and implementing a secure electronic registration and voting system. FVAP attempted to make the absentee voting process easier by encouraging states, through its Legislative Initiatives program, to simplify the multistep process and standardize their absentee voting requirements. Many military personnel we spoke to after the 2000 and 2004 general elections expressed concerns about the varied state and local requirements for absentee voting and the short time frame provided by many states and local jurisdictions for sending and returning ballots. FVAP's Legislative Initiatives program encouraged states to adopt changes to improve the absentee voting process for military personnel. However, the majority of states have not agreed to any new initiatives since FVAP's 2001 report to Congress and the President on the effectiveness of its efforts during the 2000 election. FVAP is limited in its ability to affect state voting procedures because it lacks the authority to require states to take action on absentee voting initiatives. In the 1980s, FVAP began its Legislative Initiatives program with 11 initiatives, and as of December 2005 it had not added any others. 
Two of the 11 initiatives—(1) accepting one FPCA as an absentee ballot request for all elections during the calendar year and (2) removing the not-earlier-than restrictions for registration and absentee ballot requests—were made mandatory for all states by the National Defense Authorization Act for Fiscal Year 2002 and the Help America Vote Act of 2002, respectively. According to FVAP, this action was the result of state election officials working with congressional lawmakers to improve the absentee voting process. Between FVAP's 2001 and 2005 reports to Congress and the President, the majority of the states had not agreed to any of the remaining nine initiatives. Since FVAP's 2001 report, 21 states agreed to one or more of the nine legislative initiatives, totaling 28 agreements. Table 1 shows the number of agreements with the initiatives since the 2001 report. According to FVAP records, one state withdrew its support for the 40- to 45-day ballot transit time initiative. Initiatives with the most state support were (1) removing the notary requirement on election materials and (2) allowing electronic transmission of election materials. We also found a disparity in the number of initiatives that states have adopted. For example, Iowa is the only state to have adopted all nine initiatives, while Vermont, American Samoa, and Guam have adopted only one initiative each. The absentee voting process requires the potential voter to take the following five steps: (1) register to vote, (2) request an absentee ballot, (3) receive the ballot from the local election office, (4) correctly complete the ballot, and (5) return it (generally through the mail) in time to be counted for the election. (See fig. 1.) There are several ways for military servicemembers to accomplish these steps. Military voters must plan ahead, particularly when deployed during elections. Moreover, military voters require more time to transmit voting materials because of distance. 
Military servicemembers are encouraged to use the Federal Post Card Application (FPCA) to register to vote and to request an absentee ballot. Servicemembers can obtain the FPCA from several sources, including the unit VAO, FVAP's Web site, or their local election office. DOD Directive 1000.4, Federal Voting Assistance Program, requires the in-hand delivery of an FPCA to eligible voters and their voting-age dependents by January 15 of each year. DOD encourages potential voters to complete and mail the FPCA early, in order to receive absentee ballots for all upcoming federal elections during the year. Military mail and the U.S. Postal Service are the primary means for transmitting voting materials, according to servicemembers with whom we spoke. Knowing when to complete the first step of the election process can be challenging since each state has its own deadlines for receipt of FPCAs, and the deadline is different depending on whether or not the voter is already registered. For example, according to the Voting Assistance Guide, Montana required a voter who had not previously registered to submit an FPCA at least 30 days prior to the election. A voter who was already registered had to ensure that the FPCA was received by the County Election Administrator by noon on the day before the election. For Idaho voters, the FPCA had to be postmarked by the 25th day before the election, if they were not registered. If they were registered, the County Clerk had to receive the FPCA by 5:00 p.m. on the 6th day before the election. For Virginia uniformed services voters, the FPCA had to arrive not later than 5 days before the election, whether already registered or not. Using different deadlines for newly registered and previously registered voters to return their absentee ballots may have some administrative logic and basis. 
For example, the process of verifying the eligibility of a newly registered voter might take longer than the process for previously registered voters, and if there was some question about the registration information provided, the earlier deadlines provide some time to contact the voter and get it corrected. For the November 2004 general election, according to our site survey, nine states reported having absentee ballot deadlines for voters outside the United States that were more lenient than the ballot deadlines for voters inside the United States. Table 2 lists these nine states and the difference between the mail-in ballot deadline from inside the United States and the mail-in absentee ballot deadline from outside the United States. Another challenge for military servicemembers in completing the FPCA is to know where they will be located when the ballots are mailed by the local election official. If the voter changes locations after submitting the FPCA and does not notify the local election official, the ballot will be sent to the address on the FPCA and not the voter's new location. This can be further complicated by a 2002 amendment to UOCAVA, which allowed military personnel to apply for absentee ballots for the next two federal elections. If servicemembers request ballots for the next two federal elections, they must project where they will be located over a period of up to 4 years when the ballots are mailed. DOD recommended that military servicemembers complete an FPCA annually in order to maintain registration and receive ballots for upcoming elections. After a valid FPCA has been received by the local election official, the next step for the voter is to receive the absentee ballot. Prior to mailing the ballot, the local election jurisdiction must process the FPCA. According to one of our recent reports, local election jurisdictions encountered problems in processing FPCAs. 
For example, an estimated 39 percent of jurisdictions received FPCAs too late to process—a problem also encountered with other state-provided absentee ballot applications. An estimated 19 percent of local jurisdictions encountered the problem of receiving the FPCA too late to process more frequently than any other problem. Other reported problems with FPCAs included (1) a missing or inadequate voting residence address, (2) applications sent to the wrong jurisdiction, (3) a missing or inadequate voting mailing address, (4) a missing or illegible signature, (5) applications not witnessed, attested, or notarized, and (6) excuses for absence that did not meet state law requirements. The determination of when the state mails its ballots sometimes depends on when the state holds its primary elections. FVAP has an initiative encouraging a 40- to 45-day transit time for mailing and returning absentee ballots; however, 14 states have yet to adopt this initiative. During our focus group discussions, some servicemembers commented that they either did not receive their absentee ballot or they received it so late that they did not believe they had sufficient time to complete and return it in time to be counted. After the voter completes the ballot, the voted ballot must be returned to the local election official within time frames established by each state. As we reported in 2004, deployed military servicemembers face numerous problems with mail delivery, such as military postal personnel who were inadequately trained and initially scarce because of late deployments, as well as inadequate postal facilities, material-handling equipment, and transportation assets to handle mail surge. In December 2004, DOD reported that it had taken actions to arrange for transmission of absentee ballot materials by Express Mail through the Military Postal Service Agency and the U.S. Postal Service. 
However, during our focus group discussions, servicemembers cited problems with the mail, such as mail receiving low priority when a unit is moving from one location to another, the susceptibility of mail shipments to attack while in theater, and the absence of daily mail service on some military ships. For example, some servicemembers said that mail sat on the ships for as long as a week, waiting for pickup. Others stated that in the desert, mail trucks are sometimes destroyed during enemy attacks. Voters must also cope with registration requirements that vary when local jurisdictions interpret state requirements differently. We found variation in the counties we visited in several states as to how they implemented state laws and regulations, with some holding strictly to the letter of the law and others applying more flexibility in accepting registration applications and ballots. For example, in Florida, officials in three counties told us they allowed registration of applicants who had never lived in the county, while officials in the fourth county said they required a specific address where the applicant had actually lived. In New Jersey, officials in three counties said they accepted any ballot that showed a signature anywhere on the envelope, while the fourth county disqualified any ballot that did not strictly meet all technical requirements. Some local election officials in the states we visited took actions to help absentee voters comply with state and local voting requirements by tracking down missing information on the registration form or ballot envelope and ensuring that applications and ballots went to the right jurisdiction. However, local officials told us they must balance voting convenience with ensuring the integrity of the voting process. This balance often requires the exercise of judgment on the part of local election officials. 
Developing and implementing a secure electronic registration and voting system, which would likely improve the timely delivery of ballots and increase voter participation, has proven to be a challenging task for FVAP. Eighty-seven percent of servicemembers who responded to our focus group survey said they were likely to vote over the Internet if security was guaranteed. However, FVAP has not developed a system that would protect the security and privacy of absentee ballots cast over the Internet. For example, during the 2000 presidential election, FVAP conducted a small proof of concept Internet voting project that enabled 84 voters to vote over the Internet. While the project demonstrated that it was possible for a limited number of voters to cast ballots online, FVAP’s project assessment concluded that security concerns needed to be addressed before expanding remote (i.e., Internet) voting to a larger population. In 2001, we also reported that remote Internet-based registration and voting are unlikely to be implemented on a large scale in the near future because of security risks with such a system. For the 2004 election, FVAP developed a secure registration and voting experiment. However, it was not used by any voters. The National Defense Authorization Act for Fiscal Year 2002 directed DOD to conduct an electronic voting experiment and gather data to make recommendations regarding the continued use of Internet registration and voting. In response to this requirement, FVAP developed the Secure Electronic Registration and Voting Experiment (SERVE), an Internet-based registration and voting system for UOCAVA citizens. The experiment was to be used for the 2004 election by UOCAVA citizens from seven participating states, with the eventual goal of supporting the entire military population, their dependents, and overseas citizens. 
A minority of the peer review group that examined SERVE concluded: "The real barrier to success is not a lack of vision, skill, resources, or dedication; it is the fact that, given the current Internet and PC security technology, and the goal of a secure, all-electronic remote voting system, the FVAP has taken on an essentially impossible task." According to FVAP, after the minority group issued its report, the full peer review group did not issue a final report. Also, because DOD did not want to call into question the integrity of votes that would have been cast via SERVE, it decided to shut the system down prior to its use by any absentee voters. FVAP could not provide details on what it received for the approximately $26 million that it invested in SERVE. FVAP officials stated that they received some services from the contractor, but no hardware or other equipment. Communications technologies, such as faxing, e-mail, and the Internet, can improve communication between local jurisdictions and voters during some portions of the election process. For example, FVAP's Electronic Transmission Service (ETS) has been in existence since the 1990s, and is used by UOCAVA citizens and state and local officials to fax election materials when conditions do not allow for timely delivery of materials through the mail. For the November 2004 general election, FVAP's Voting Assistance Guide showed that the states allowed some form of electronic transmission of the FPCA, the blank absentee ballot, or the voted ballot. However, it is important to note that of the 10,500 local government jurisdictions responsible for conducting elections nationwide, particular local jurisdictions might not offer all of the options allowed by state absentee ballot provisions. As shown in Table 3, for the November 2004 presidential election, 44 states allowed the FPCA to be faxed to the local election jurisdiction for registration and ballot request. In each of these states, the completed FPCA also had to be mailed to the local election jurisdiction. 
In one state, the completed FPCA had to be mailed or postmarked the same day that the FPCA was faxed. A smaller number of states allowed the blank absentee ballot to be faxed to the voter and an even smaller number of states allowed the voted ballot to be sent back to the local election jurisdiction. According to FVAP’s records, in calendar year 2004 ETS processed 46,614 faxes, including 38,194 FPCAs, 1,844 blank ballots to citizens, and 879 voted ballots to local election officials. Total costs to operate ETS in 2004 were about $452,000. According to FVAP’s revised Voting Assistance Guide for 2006-2007, only one additional state allowed the faxing of the FPCA for registration and ballot request. Table 3 also shows options allowed by each state and territory for electronic transmission of election materials for the November 2006 election. Two additional states also allowed the faxing of the blank ballot. In September 2004, DOD implemented the Interim Voting Assistance System (IVAS), an electronic ballot delivery system, as an alternative to the traditional mail process. Although IVAS was meant to streamline the voting process, its strict eligibility requirements prevented it from being utilized by many military voters. IVAS was open to active duty servicemembers, their dependents, and DOD overseas personnel who were registered to vote. These citizens also had to be enrolled in the Defense Enrollment Eligibility Reporting System, and had to come from a state and county participating in the project. FVAP officials said the system was limited to DOD members because their identities could be verified more easily than those of nonmilitary overseas citizens. Voters would obtain their ballots through IVAS by logging onto www.MyBallot.mil and requesting a ballot from their participating local election jurisdiction. 
One hundred and eight counties in eight states and one territory agreed to participate in IVAS; however, only 17 citizens downloaded their ballots from the site during the 2004 election. According to FVAP, many states did not participate in IVAS for a variety of reasons including state legislative restrictions, workload surrounding regular election responsibilities and additional Help America Vote Act requirements, lack of technical capability, election procedural requirements and barriers, and unavailability of Internet access. Despite low usage of the electronic initiatives and existing security concerns, we found that servicemembers and VAOs at the installations we visited strongly supported some form of electronic transmission of voting materials. During our focus group discussions, servicemembers stated that election materials for the 2004 presidential election were most often sent and received through the U.S. postal system. Servicemembers also commented that the implementation of a secure electronic registration and voting system could increase voter participation and possibly improve confidence among voters that their votes were received and counted. Additionally, servicemembers said that an electronic registration and voting system would improve the absentee voting process by providing an alternative to the mail process, particularly for those servicemembers deployed on a ship or in remote locations. However, at one location, some servicemembers were more comfortable with the paper ballot system and said that an electronic voting system would not work because its security could never be guaranteed. The federal government, states, and local election jurisdictions have a shared responsibility to help increase military voters’ awareness of absentee voting procedures and make the process easier while protecting its integrity. The election process within the United States is primarily the responsibility of the individual states and their election jurisdictions. 
Despite some progress by FVAP in streamlining the absentee voting process, absentee voting requirements and deadlines continue to vary from state to state. While it is ultimately the responsibility of the voter to understand and comply with these deadlines, varying state requirements can cause confusion among voters and VAOs about deadlines and procedures for registering and voting by absentee ballot. The ability to transmit and receive voting materials electronically provides military servicemembers another option to submit a ballot in time to participate in an election. Although state law may allow electronic transmission of voting materials, including voted ballots, the 10,500 local election jurisdictions must be willing and equipped to accommodate this technology. The integration of people, processes, and technology is essential to the United States' election system. Mr. Chairman, this concludes my statement. I would be pleased to answer any questions that you or other members of the Committee may have at this time. Elections: The Nation's Evolving Election System as Reflected in the November 2004 General Election. GAO-06-450. Washington, D.C.: June 6, 2006. Elections: Absentee Voting Assistance to Military and Overseas Citizens Increased for 2004 General Election, but Challenges Remain. GAO-06-521. Washington, D.C.: April 7, 2006. Election Reform: Nine States' Experiences Implementing Federal Requirements for Computerized Statewide Voter Registration Lists. GAO-06-247. Washington, D.C.: February 7, 2006. Elections: Views of Selected Local Election Officials on Managing Voter Registration and Ensuring Eligible Citizens Can Vote. GAO-05-997. Washington, D.C.: September 27, 2005. Elections: Federal Efforts to Improve Security and Reliability of Electronic Voting Systems Are Underway, but Key Activities Need to be Completed. GAO-05-956. Washington, D.C.: September 21, 2005. 
Elections: Additional Data Could Help State and Local Elections Officials Maintain Accurate Voter Registration Lists. GAO-05-478. Washington, D.C.: June 10, 2005. Department of Justice’s Activities to Address Past Election-Related Voting Irregularities. GAO-04-1041R. Washington, D.C.: September 14, 2004. Elections: Electronic Voting Offers Opportunities and Presents Challenges. GAO-04-975T. Washington, D.C.: July 20, 2004. Elections: Voting Assistance to Military and Overseas Citizens Should Be Improved. GAO-01-1026. Washington, D.C.: September 28, 2001. Elections: The Scope of Congressional Authority in Election Administration. GAO-01-470. Washington, D.C.: March 13, 2001. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | The narrow margin of victory in the 2000 presidential election raised concerns about the extent to which members of the military and their dependents living abroad were able to vote via absentee ballot. In September 2001, GAO made recommendations to address variances in the Department of Defense's (DOD) Federal Voting Assistance Program (FVAP). Along with the military services, FVAP is responsible for educating and assisting military personnel in the absentee voting process. Leading up to the 2004 presidential election, Members of Congress raised concerns about efforts under FVAP to facilitate absentee voting. This testimony, which draws on prior GAO work, addresses three questions: (1) How did FVAP's assistance efforts differ between the 2000 and 2004 presidential elections? (2) What actions did DOD take in response to prior GAO recommendations on absentee voting? 
and (3) What challenges remain in providing voting assistance to military personnel? For the 2004 presidential election, FVAP expanded its efforts beyond those taken for the 2000 election to facilitate absentee voting by military personnel. FVAP distributed more absentee voting materials and improved the accessibility of its Web site, which includes voting information. Also, FVAP conducted 102 more voting training workshops than it did for the 2000 election, and it provided an online training course for Voting Assistance Officers (VAO). FVAP also designed an electronic version of the Federal Write-in Absentee Ballot--an emergency ballot accepted by all states and territories--although its availability was not announced until a few weeks before the election. In assessing its efforts for the 2004 election, using data from its postelection surveys, FVAP attributed increased voter participation rates to an effective voter information and education program. However, in light of low survey response rates, FVAP's estimates and conclusions should be interpreted with caution. DOD has taken actions in response to GAO's prior recommendations regarding voting assistance to servicemembers. In 2001, GAO recommended that DOD revise its voting guidance, improve program oversight, and increase command emphasis to reduce the variance in voting assistance to military servicemembers. Prior to the 2004 presidential election, DOD implemented corrective actions that addressed GAO's recommendations. Specifically, the services revised their voting guidance and enhanced oversight of the military's voting assistance program, and emphasis on voting education and awareness increased throughout the top levels of command within DOD. However, the level of assistance continued to vary at the installations GAO visited. Because the VAO role is a collateral duty and VAOs' understanding and interest in the voting process differ, some variance in voting assistance may always exist. 
DOD plans to continue its efforts to improve absentee voting assistance. Despite efforts of DOD and the states, GAO's April 2006 report identified two major challenges that remain in providing voting assistance to military personnel: (1) simplifying and standardizing the time-consuming and multi-step absentee voting process, which includes different requirements and time frames for each state; and (2) developing and implementing a secure electronic registration and voting system. FVAP attempted to make the absentee voting process easier by using its Legislative Initiatives program to encourage states to simplify the multi-step process and standardize their absentee voting requirements. However, the majority of states have not agreed to any new initiatives since FVAP's 2001 report on the 2000 election. FVAP is limited in its ability to affect state voting procedures because it lacks the authority to require states to take action on absentee voting initiatives. For the 2004 election, FVAP developed an electronic registration and voting experiment. However, it was not used by any voters due to concerns about the security of the system. Because DOD did not want to call into question the integrity of votes that would have been cast via the system, it decided to shut the experiment down prior to its use by any absentee voters. Some technologies--such as faxing, e-mail and the Internet--have been used to improve communication between local jurisdictions and voters. |
The serious underfunding of many airline company pension plans has been widely reported. Underfunded pension plans are a symptom of the financial turmoil the airline industry currently faces. Several industry trends, such as the emergence of well-capitalized low-cost airlines and reliance on the Internet to distribute tickets, are fundamentally reshaping the structure of the airline industry. Technology trends such as videoconferencing and networked meetings have also provided lower-cost alternatives to business travel. In addition, a series of unforeseen events, such as the terrorist attacks of September 11, 2001, and the war in the Middle East, have sharply reduced the demand for air travel in recent years. These and other factors have combined to create a highly competitive environment, which has been particularly challenging for the legacy airlines. As we reported in August, the financial performance and viability of legacy airlines have deteriorated significantly compared with low-cost airlines since 2000. Legacy airlines have collectively lost $24.3 billion over the last 3 years, while low-cost airlines made $1.3 billion in profits. During this time Congress provided the industry approximately $8.6 billion in assistance. Airlines responded to these financial challenges by reducing costs and cutting capacity. From October 1, 2001, through December 31, 2003, the collective operating costs of legacy airlines decreased by about $12.7 billion, while capacity fell 12.6 percent. Of this total, legacy airlines worked with unions to achieve $5.5 billion in labor cost cuts. Despite these cost-cutting efforts, low-cost airlines still maintain a significant unit cost advantage over legacy airlines. Legacy airlines also face considerable debt and pension funding obligations in the next few years. 
Meanwhile, neither legacy nor low-cost airlines have been able to significantly improve their revenues owing to continued pressure on airline fares. In their efforts to cut costs further, despite significant rises in fuel costs, the legacy airlines have focused on labor costs, since these represent the single largest operating cost the airlines face. As part of reducing their labor costs, a number of legacy airlines have begun to consider terminating their DB pension plans under current bankruptcy and pension laws. United Airlines recently announced that it would not make roughly $500 million in contributions to its pension plans this year. In addition, US Airways does not plan to make roughly $100 million in contributions to its remaining pension plans, and stated in its current bankruptcy court filing that it would be “irrational” to make pension contributions. The potential termination of these underfunded pension plans confronts Congress with three key policy issues. The most visible is the financial exposure of PBGC. The agency reports that airline pensions are currently underfunded by $31 billion. This figure includes $8.3 billion of underfunding in United’s plans and $2.3 billion of underfunding for US Airways. Second, thousands of plan participants and beneficiaries will lose pension benefits due to limits on PBGC guarantees and certain provisions affecting PBGC’s insurance program. Finally, airlines that terminate their plans may gain a competitive advantage because such terminations effectively lower overall labor costs. Those lower costs may also permit some airlines to continue operating that might otherwise be forced to exit the marketplace. I would like to emphasize three important facts that should put the airlines’ current problems in perspective. First, this is not the first time we have witnessed the simultaneous struggles of the airline industry and airline pension underfunding. 
As a former Acting Executive Director of PBGC and Assistant Secretary of Labor for Pension and Welfare Benefit Programs in the 1980s, I monitored similar issues plaguing major air carriers at the time. Since then, we’ve seen PBGC take over a number of badly underfunded plans including Pan American, Eastern, Braniff, and TWA. More recently, in early 2003, US Airways’ Pilots Plans terminated, presenting a claim of $754 million to the single-employer program. Second, the airlines’ experience illustrates the speed with which a pension funding crisis can develop. In 2001, PBGC reported that as a whole the air transportation industry had more than enough assets to cover the liabilities in its pension plans. Yet just 3 years later the industry threatens to saddle PBGC with its biggest losses ever from plan terminations. Finally, serious pension underfunding is not confined to the airline industry. Of the 10 most underfunded pension plan terminations in PBGC’s history, 5 have been in the steel industry, an industry that has faced extreme economic difficulty for decades. Looking ahead, in addition to airlines, automotive related firms may present the greatest ongoing risk to PBGC, with over $60 billion in underfunding as of 2003. Thus, while there are unique circumstances that have contributed to the airlines’ competitive and pension troubles, they unfortunately are not alone. We have highlighted several potential sources of problems in the pension system that have contributed to the broad underfunding of DB pension plans generally, including airline plans. Single-employer pension plans have suffered from a so-called “perfect storm” of key economic conditions, in which declines in stock prices lowered the value of pension assets used to pay benefits, while at the same time a decline in interest rates inflated the value of pension liabilities. The combined “bottom line” result is that many plans have insufficient resources to pay all of their future promised benefits. 
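The interest-rate side of this “perfect storm” follows from how pension liabilities are valued: promised future benefits are discounted back to the present, so a lower discount rate produces a larger measured liability even when the promised benefits are unchanged. A minimal sketch with hypothetical figures (not actual plan data):

```python
def liability_present_value(annual_benefit, years, discount_rate):
    """Present value of a level stream of promised annual benefit payments."""
    return sum(
        annual_benefit / (1 + discount_rate) ** t
        for t in range(1, years + 1)
    )

# Hypothetical plan owing $1,000 a year for 30 years:
pv_at_7pct = liability_present_value(1000.0, 30, 0.07)  # roughly $12,400
pv_at_4pct = liability_present_value(1000.0, 30, 0.04)  # roughly $17,300

# A 3-point drop in rates inflates the measured liability by about 40 percent.
# If asset values fall at the same time, the funding gap widens from both sides.
```

The same mechanics explain how a plan that looked fully funded in 2001 could appear severely underfunded just a few years later.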
While these cyclical factors may improve and reverse some of the pension underfunding, other trends suggest more serious structural problems to the single-employer insurance program’s long-term viability. These include a declining number of DB plans, a decline in the percentage of participants that are active (as opposed to retired) workers, and a rise in alternative retirement savings vehicles, such as defined contribution (DC) plans, which provide retirement benefits with more portability but which transfer the investment risk from the employer to the employee. In addition, as the PBGC takeover of severely underfunded plans suggests, the existing pension funding rules have not ensured that sponsors contribute enough to their plans to pay all the retirement benefits promised to date. Also, while the current structure of insurance premiums paid by plan sponsors to PBGC requires higher premiums from some underfunded plans, in many cases these were not enough of an incentive for firms to fund their plans sufficiently. Furthermore, certain provisions of PBGC’s current guarantee and recovery provisions also need to be reviewed and possibly revised. The current pension crisis facing the airline industry and PBGC illustrates the need for comprehensive pension reform that tackles the full range of challenges across all industries, not just airlines. Such a comprehensive reform would focus on incentives, transparency, and accountability. Reforms must include meaningful incentives for sponsors to adequately fund their plans. They must provide additional transparency for participants, and ensure accountability for those firms that fail to match the benefit promises they make with the resources necessary to fulfill them. The airline industry’s funding problems also highlight the difficulties in addressing these problems during difficult economic times for an industry. 
These difficulties limit the feasible policy options for pension reform because many firms have fewer resources to support required plan contributions. Therefore, pension reform should attempt to improve incentives for firms to contribute more to their pension plans during good economic times, when they are more likely to be able to afford such contributions. Also, reform needs to consider the voluntary nature of pensions. After all, employers do not have to offer pensions, and reforms that may be deemed to be onerous might drive healthy plans out of the system. Nevertheless, firms should be held accountable for paying promised pension benefits to their workers. Along these lines, reforms should reconsider PBGC’s current premium rate structure to take into account the plan sponsor’s financial condition, the nature of the pension plan’s investment portfolio, and the structure of the plan’s benefit provisions (e.g., shutdown benefits or pension offset provisions). Charging more truly “risk-related” premiums could increase PBGC’s revenue while providing an incentive for plan sponsors to better fund their plans. However, significant increases in premiums that are not based on the degree of risk posed by different plans may force financially healthy companies out of the defined-benefit system and discourage other plan sponsors from entering the system. The rules of the current pension system, and any attempts to reform these rules, carry wide-ranging implications for airlines and other industries, as well as pension participants and beneficiaries, the PBGC, and potentially the American taxpayer. When PBGC takes over a pension plan from a bankrupt sponsor, participants can lose some of their promised pension benefits because PBGC guarantees may be capped. For 2004, PBGC pays a maximum monthly benefit of about $3,700 to a 65-year-old pension participant; for younger participants, the guarantee declines, such that a 55-year-old is guaranteed only $1,664 monthly. 
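The guarantee cap described above can be sketched as follows. The two dollar figures come from the text; the straight-line interpolation between ages 55 and 65 is a simplifying assumption, since PBGC's actual age-reduction factors are set by regulation:

```python
CAP_AT_65 = 3700.0  # approximate 2004 maximum monthly guarantee at age 65 (from the text)
CAP_AT_55 = 1664.0  # cited monthly guarantee at age 55 (from the text)

def guaranteed_monthly_benefit(promised_monthly, age):
    """Monthly benefit PBGC would pay: the promised amount, limited by an age-based cap.

    The linear interpolation between 55 and 65, and the flat cap below 55,
    are illustrative only; the real schedule uses regulatory reduction factors.
    """
    if age >= 65:
        cap = CAP_AT_65
    elif age <= 55:
        cap = CAP_AT_55  # simplification: the actual guarantee keeps declining below 55
    else:
        cap = CAP_AT_55 + (CAP_AT_65 - CAP_AT_55) * (age - 55) / 10.0
    return min(promised_monthly, cap)

# A 65-year-old promised $5,000 a month receives only the $3,700 cap;
# the same promise taken over at age 55 is cut to $1,664.
```

The sketch makes the policy point concrete: the younger the participant at takeover, the larger the share of the promised benefit that is lost.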
In addition, recent benefit increases and early retirement subsidies can also be reduced based on PBGC’s guarantee structure. For the agency itself, continued takeovers of severely underfunded plans make the eventual bankruptcy of PBGC an increasingly likely scenario. In the event that PBGC has insufficient funds to pay the benefits of plans it has taken over, it has the ability to borrow $100 million from the U.S. Treasury. This amount represents only a small fraction of the single-employer program’s $9.7 billion deficit as of March 2004. Congress would likely face enormous pressure to “bail out” the PBGC at taxpayer expense. If Congress decided not to fund a bailout of PBGC, pension participants and retirees would likely face drastic cuts in their pension benefits. Congress should consider the incentives that pension rules and reform may have on other financial decisions within affected industries. For example, under current conditions, the presence of PBGC insurance may create certain “moral hazard” incentives—struggling plan sponsors may place other financial priorities above “funding up” their pension plans because they know PBGC will pay guaranteed benefits. Firms may even have an incentive to seek Chapter 11 bankruptcy in order to escape their pension obligations. As a result, once a sponsor with an underfunded pension plan gets into financial trouble, existing incentives may exacerbate the funding shortfall for PBGC. This moral hazard effect has the potential to escalate, with the initial bankruptcy of firms with underfunded plans creating a vicious cycle of bankruptcies and plan terminations. Firms with onerous pension obligations and strained finances could see PBGC as a means of shedding these liabilities, thereby providing them with a competitive advantage over other firms that deliver on their pension commitments. 
This would also potentially subject PBGC to a series of terminations of underfunded plans in the same industry, as we have already seen with the steel and airline industries in the past 20 years. Overall, despite a series of reforms over the years, current pension funding and insurance laws create incentives for financially troubled firms to use PBGC in ways that Congress did not intend when it formed the agency in 1974. PBGC was established to pay the pension benefits of participants in the event that an employer could not. As pension policy has developed, however, firms with underfunded pension plans may come to view PBGC coverage as a fallback or “put option” for financial assistance. Further, because PBGC generally takes over underfunded plans of bankrupt companies, PBGC insurance may create an additional incentive for troubled firms to seek bankruptcy protection, which in turn may affect the competitive balance within an industry. This should not be the role for the pension insurance system. Certain rules that affect funding for underfunded plans of troubled sponsors can also create perverse incentives for employees that aggravate a plan’s underfunding. To the extent that participants believe that the PBGC guarantee may not cover their full benefits, many eligible participants may elect to retire and take all or part of their benefits in a lump sum rather than as lifetime annuity payments in order to maximize the value of their accrued benefits. In some cases, this may create a “run on the bank,” exacerbating the possibility of the plan’s insolvency as assets are liquidated more quickly than expected, and potentially leaving fewer assets to pay benefits for other participants. As previously noted, it can also create incentives for workers to retire prematurely, creating potential labor shortages in key occupations for the firm. 
We have seen aspects of these effects in some airline pilots’ reaction to the deteriorating financial condition of their employers and pension plans. Further, current rules may create an incentive for financially troubled sponsors to increase benefits, even if they have insufficient funding to pay current benefit levels. Currently, sponsors can increase plan benefits for underfunded plans, even in some cases where the plans are less than 60 percent funded. Thus, sponsors and employees that agree to benefit increases from an underfunded plan as a sponsor is approaching bankruptcy can essentially transfer this additional liability to PBGC, potentially exacerbating the agency’s financial condition. These represent just a few of the many issues that deserve the attention of the Congress. We have performed and will continue to perform work in this area in an effort to assist the Congress. The current problems plaguing many pensions in the airline industry should be seen as symptomatic of the pension system overall and should demonstrate that the way we currently fund and insure pension benefits has to change. Ignoring this warning would adversely affect employers who continue to sponsor DB plans, workers and retirees who depend on those pension plans, and American taxpayers who may be asked to pay for these benefits in the future. Finally, the tragic events of September 11, 2001, combined with other factors are not only having an adverse effect on the financial condition of the airline industry, they are also affecting the financial condition of the Federal Aviation Administration’s Airport and Airway Trust Fund. This is a matter beyond the scope of this hearing that the Committee may want to address in the future. I would be happy to take any questions the Committee might have. 
| At the same time that "legacy" airlines face tremendous competitive pressures that are contributing to a fundamental restructuring of the airline industry, they face the daunting task of shoring up their underfunded pension plans, which currently are underfunded by an estimated $31 billion. Terminating these pension plans confronts Congress with three policy issues. The most visible is the financial exposure of the Pension Benefit Guaranty Corporation (PBGC), the federal agency that insures private pensions. The agency's single-employer pension program already faces a deficit of an estimated $9.7 billion, and the airline plans present a potential threat to the agency's viability. Second, plan participants and beneficiaries may lose pension benefits due to limits on PBGC guarantees. Finally, airlines that terminate their plans may gain a competitive advantage because such terminations effectively lower overall labor costs. This testimony addresses (1) the situation the airlines are facing today, (2) overall pension developments, and (3) the policy implications of addressing these issues. The problems posed by the airlines' underfunded plans, while extremely serious in the short term, are only the latest symptom of the decline in the health of our nation's defined benefit (DB) pension system. These problems illustrate weaknesses in the pension system overall and demonstrate that the way plans currently fund and insure pension benefits has to change. Underfunded pension plans are a symptom of the financial turmoil currently facing the airline industry. Industry trends, including the emergence of well-capitalized low cost airlines and other factors, have created a highly competitive environment that has been particularly challenging for the legacy airlines. 
Since 2000, the financial performance of legacy airlines has deteriorated significantly. Legacy airlines have collectively lost $24.3 billion over the last 3 years. Despite cost-cutting efforts, legacy airlines continue to face considerable debt and pension funding obligations. In this context, a number of legacy airlines have begun to consider terminating their DB pension plans. For example, United Airlines recently announced that it would not make roughly $500 million in contributions to its pension plans this year and US Airways announced that it does not plan to make roughly $100 million in contributions. The problems of underfunded DB pension plans extend far beyond the airline industry. We have highlighted several problems that have contributed to the broad underfunding of DB plans generally, including airline plans. These problems include cyclical factors like the so-called "perfect storm" of key economic conditions, in which declines in stock prices lowered the value of pension assets used to pay benefits, while at the same time a decline in interest rates inflated the value of pension liabilities. The combined "bottom line" result is that many plans today have insufficient resources to pay all of their future promised benefits. Other long-term trends suggest more serious structural problems to the system, including a declining number of DB plans, a decline in the percentage of participants that are active (as opposed to retired) workers, and other factors. Existing pension funding rules and the current structure for paying PBGC insurance premiums have not ensured that sponsors contribute enough to their plans to pay promised benefits. The current pension crisis facing the airline industry and PBGC, and how the Congress chooses to address that crisis, has wide-ranging implications for airlines and other industries, as well as for pension participants, PBGC, and potentially the American taxpayer. 
This crisis also illustrates the need for comprehensive pension reform that tackles the full range of challenges crossing all industries and not just airlines. Such a comprehensive reform would include meaningful incentives for sponsors to adequately fund their plans, provide additional transparency for participants, and ensure accountability for those firms that fail to match the benefit promises they make with the resources necessary to fulfill those promises. |
The Federal Aviation Administration’s (FAA) mission is to promote the safe, efficient, and expeditious flow of air traffic in the U.S. airspace system, commonly referred to as the National Airspace System (NAS). To accomplish its mission, FAA provides services 24 hours a day, 365 days a year, through its air traffic control (ATC) system—the principal component of the NAS. Predicted growth in air traffic and aging equipment led FAA to initiate a multibillion-dollar modernization effort in 1981 to increase the safety, capacity, and efficiency of the system. However, over the past 17 years, FAA’s modernization program has experienced substantial cost overruns, lengthy schedule delays, and significant performance shortfalls. Consequently, many of the benefits anticipated from the modernization program—new facilities, equipment, and procedures—have not been realized, and the efficiency of air traffic control operations has been limited. In addition, the expected growth in air traffic will place added strains on the system’s capacity. To get the modernization effort back on track and thereby address the limitations of the present system and meet the growing demand for increasing its capacity, FAA—in consultation with the aviation community—is developing plans to implement a phased approach to modernization, including a new concept of air traffic management known as “free flight.” To enable free flight, FAA intends to introduce a host of new technologies and procedures that will allow the agency to gradually move from its present system of air traffic control, which relies heavily on rules, procedures, and tight control over aircraft operations, to a more collaborative system of air traffic management. Under such a system, users would have more flexibility to select optimal flight paths, which would lower costs, improve safety, and help accommodate future growth in air traffic through more efficient use of airspace and airport resources. 
Implementing this new air traffic management system will require FAA to introduce new technologies and procedures. FAA plans to test other new technologies and procedures through an initiative called Flight 2000 (now the Free Flight Operational Enhancement Program). FAA’s air traffic controllers direct aircraft through the NAS. Automated information-processing and display, communication, navigation, surveillance, and weather equipment allow air traffic controllers to see the location of aircraft, aircraft flight plans, and prevailing weather conditions, as well as to communicate with pilots. FAA controllers are primarily located in three types of facilities: air traffic control towers, terminal area facilities, and en route centers. The functions of each type of facility are described below. • Airport towers control the flow of aircraft—before landing, on the ground, and after takeoff—within 5 nautical miles of the airport and up to 3,000 feet above the airport. Air traffic controllers use a combination of technological and visual surveillance to direct departures and approaches, as well as to communicate instructions and weather-related information to pilots. • Terminal area facilities—known as Terminal Radar Approach Control (TRACON) facilities—sequence and separate aircraft as they approach and leave busy airports, beginning about 5 nautical miles and extending to about 50 nautical miles from the airport and up to 10,000 feet above the ground. • Air Route Traffic Control Centers (ARTCC)—or en route centers—control planes in transit over the continental United States and during approaches to some airports. Planes are controlled through regions of airspace by en route centers responsible for those regions. Control is passed from one en route center to another as a plane moves across a region until it reaches TRACON airspace. Most of the en route centers’ controlled airspace extends above 18,000 feet for commercial aircraft. 
En route centers also handle lower altitudes when dealing directly with a tower or after agreeing with a terminal facility. Aircraft over the ocean are handled by en route centers in Oakland and New York. Beyond the radars’ sight, controllers must rely on periodic radio communications through a third party—Aeronautical Radio Incorporated (ARINC), a private organization funded by the airlines and FAA to operate radio stations—to determine aircraft locations. • Flight Service Stations provide weather and flight plan services, primarily for general aviation pilots. See figure 1.1 for a visual summary of air traffic control over the continental United States and oceans. FAA will continue to operate en route, terminal, and tower facilities under the new air traffic management system; controllers in these facilities will be able to manage flight operations more collaboratively through the use of new decision support tools. For example, two new traffic management tools will allow en route and terminal controllers to better sequence aircraft as they move into the terminal environment—potentially increasing the system’s safety and efficiency. Free flight is a new way of managing air traffic that is designed to enhance the safety, capacity, and efficiency of the NAS. Under this new management system, air traffic control is expected to move gradually from a highly structured system based on elaborate rules and procedures to a more flexible system that allows pilots, within limits, to change their route, speed, and altitude, notifying the air traffic controller of the new route. In contrast, under the present system, while flight plans are developed in conjunction with air traffic control personnel, aircraft are required to fly along specific routes with minimal deviation. When deviations from designated routes are allowed—to, for example, avoid severe weather—they must be pre-approved by an air traffic controller. 
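The airspace boundaries described above for towers, TRACON facilities, and en route centers can be summarized as a simplified classifier. The distance and altitude thresholds come from the text; actual facility delegations vary, and this sketch ignores special cases such as oceanic airspace and towers delegating to en route centers directly:

```python
def controlling_facility(distance_nm, altitude_ft):
    """Rough mapping from an aircraft's position (distance from the airport in
    nautical miles, altitude in feet) to the facility type that typically
    controls it, simplified from the boundaries described in the text."""
    if distance_nm <= 5 and altitude_ft <= 3000:
        return "airport tower"
    if distance_nm <= 50 and altitude_ft <= 10000:
        return "TRACON"
    return "en route center (ARTCC)"

# A departure climbing through 2,000 feet a mile from the runway is with the
# tower; at 20 nautical miles and 8,000 feet it has been handed off to the
# TRACON; in cruise at 35,000 feet it is worked by an en route center.
```

The handoff sequence this implies—tower, then TRACON, then successive en route centers—is the chain of control that free flight's decision support tools are intended to make more collaborative.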
Under free flight, despite the flexibility available to pilots, the ultimate decision-making authority for air traffic operations will continue to reside with controllers. While FAA and the aviation community have recently increased their efforts to implement free flight, the concept of free flight—allowing pilots to fly more optimal routes—is not new. In fact, the idea has been around for decades. With the development of navigation technology in the 1970s that allowed aircraft to fly directly from origin to destination without following fixed air routes (highways in the sky), the possibility of providing pilots with flexibility in choosing routes became viable. However, until recently, movement to develop the procedures and decision support systems needed to fully use this type of point-to-point navigation has been slow. In the last several years, because of the need to meet demands for increasing the system’s capacity and efficiency, FAA and aviation system users and their major trade organizations, representatives of air traffic control personnel, equipment manufacturers, the Department of Defense (DOD), and others (collectively referred to as stakeholders) have been working on plans to accelerate the implementation of free flight. To enable this new system of air traffic management, FAA plans to introduce a range of new technologies and procedures that will give pilots and controllers more precise information about the location of aircraft. This information will eventually allow for the distances between aircraft to be safely reduced—in turn, allowing more aircraft to operate in the system. For example, a new tool planned for use primarily in the en route environment will give controllers better information about the location of aircraft so that they can detect and resolve potential conflicts sooner than they can using current technology. 
Similarly, pilots will have more precise information about the location of their aircraft in relation to other aircraft. The use of these technologies will help to improve the system’s safety and capacity. While free flight will provide pilots with more flexibility, different situations will dictate its use. For instance, in clear, uncrowded skies, pilots may be able to use free flight fully, but some restrictions may be necessary during bad weather or in highly congested areas. RTCA has defined free flight as “a safe and efficient flight operating capability under instrument flight rules in which the operators have the freedom to select their path and speed in real time. Air traffic restrictions are only imposed to ensure separation, to preclude exceeding airport capacity, to prevent unauthorized flight through Special Use Airspace (SUA), and to ensure safety of flight. Restrictions are limited in extent and duration to correct the identified problem. Any activity which removes restrictions represents a move toward Free Flight.” Stakeholders generally agree with the above broad concept—especially the idea that any activity that removes restrictions represents a move toward free flight. However, because users have different priorities based on their use of the system, they have different ideas about how best to implement this concept. RTCA has found that the implementation of free flight will affect a wide range of users—from part-time pilots to major airlines—depending on the operating environment. For example, in the en route environment, users will be allowed to fly more optimal routes between airports, thus saving time and money. In addition, under certain conditions, these users may be allowed to safely reduce the distance between themselves and other aircraft. Similarly, in airspace between 5 and 50 miles from the airport, the improved sequencing of traffic for approaches and landings will provide the potential for users to operate more efficiently than under the present system. 
Improved sequencing is expected to increase the number of aircraft that can safely operate in this environment at a given time. In addition, improved information sharing between pilots and controllers on the location of aircraft on an airport’s surface, for example, is expected to allow for better use of the airport’s surface capacity (such as runways and gates). Efficient use of this limited capacity is key to allowing users to maximize the benefits of operations under free flight. In light of FAA’s current efforts to replace its aging infrastructure and keep pace with increasing demands for air traffic services through the new system of air traffic management known as free flight, the chairmen and ranking minority members of the Senate Committee on Commerce, Science, and Transportation and its Subcommittee on Aviation asked us to monitor the implementation of FAA’s efforts and provide them with a series of reports. This initial report provides (1) an overview of FAA’s progress to date in implementing free flight, including Flight 2000 (now the Free Flight Operational Enhancement Program), and (2) the views of the aviation community and FAA on the challenges that must be met to implement free flight cost-effectively. To address the first objective, we met with key FAA officials responsible for the programs involved in the agency’s free flight implementation efforts to gain a better understanding of how FAA is coordinating the agencywide and program-specific elements of free flight. 
The issues discussed with these officials included (1) the definition/philosophy of free flight; (2) details on key agencywide and program-specific initiatives, such as Flight 2000 (now the Free Flight Operational Enhancement Program); and (3) the status of the agency’s efforts to develop, deploy, and integrate new technologies; mitigate risk; develop metrics; collaborate with other FAA program offices and stakeholders; improve certification procedures; develop cost/benefit analyses; and gain buy-in to free flight implementation efforts among FAA staff and the aviation community. We discussed these same issues with a broad range of stakeholders to get their views on the agency’s progress to date in implementing free flight. These stakeholders, who have collaborated with FAA to implement free flight, included representatives of RTCA, trade organizations (such as the Air Transport Association, Airports Council International, Regional Airline Association, National Business Aircraft Association, and Aircraft Owners and Pilots Association), employee unions (including the National Air Traffic Controllers Association, Air Line Pilots Association, and Professional Airways Systems Specialists), DOD, academic institutions (Massachusetts Institute of Technology (MIT) and University of Illinois, Champaign) and research and contracting organizations (MIT Lincoln Laboratory, Department of Transportation/Volpe Center, National Aeronautics and Space Administration, and MITRE), major airlines, cargo carriers, and aircraft and avionics manufacturers. In addressing the second objective, we asked the same FAA officials and stakeholders to identify the key challenges that must be met for free flight to be implemented cost-effectively. As part of our review for both objectives, we researched the current literature and reviewed relevant FAA documents (such as the NAS architecture and operational concept, capital investment plan, and cost and schedule information for key projects). 
In addition, we obtained and reviewed documentation from stakeholders in support of their positions on outstanding issues related to implementing free flight. We provided copies of a draft of this report to FAA for its review and comment. We met with FAA officials, including the Director, Program Office, Free Flight Phase 1, and the Acting Program Directors for Flight 2000 and Architecture and Systems Engineering, who generally agreed with the contents of the report and provided clarifying comments, which we incorporated as appropriate. We conducted our audit work from November 1997 through August 1998 in accordance with generally accepted government auditing standards. Under its air traffic control modernization program, FAA is upgrading its facilities and equipment—including replacing aging infrastructure, such as controllers’ workstations and the Host computer—and ensuring that its systems comply with Year 2000 requirements. While these efforts are not part of free flight, they will provide the infrastructure that is critical for its implementation. To define free flight and develop recommendations, associated initiatives, and time frames for its implementation, FAA has worked with stakeholders under the leadership of RTCA—a nonprofit organization that serves as an advisor to FAA. As of July 1998, 1 of 44 recommendations had been completed, and substantial progress has been made in implementing many of the initiatives that fall under the remaining recommendations. While working to implement the 44 recommendations, FAA and stakeholders agreed on the need to focus their efforts on deploying technologies designed to provide early benefits to users. These efforts led to consensus on a phased approach to implementing free flight—beginning with Free Flight Phase 1—including the core technologies to be used and the locations where the technologies will be deployed under this first phase, scheduled to be implemented by 2002. 
FAA has been working with stakeholders to resolve differences among them and to better define its planned limited demonstration, known as Flight 2000 (now the Free Flight Operational Enhancement Program), which is designed to identify and mitigate the risks associated with using free-flight-related communication, navigation, and surveillance technologies and associated procedures. As a result of these collaborative efforts, FAA and stakeholders—through RTCA—have agreed to a general roadmap for a restructured demonstration to be conducted in fiscal years 1999-2004. However, unresolved issues remain, including the need to secure funding and develop additional plans. In its October 1995 report, RTCA discussed the benefits of free flight and included recommendations and time frames for users and FAA to consider for implementing free flight. These recommendations, many of which have several initiatives, emphasized, among other things, the (1) consideration of human factors during all phases of developing free flight, (2) use of streamlined methods/procedures for system certification, and (3) expansion of the National Route Program. The vast majority of these recommendations (35 of 44) were to be completed in the near term (1995 through 1997), 6 are focused on the midterm (1998 through 2000), and 3 are to be completed in the far term (2001 and beyond). See appendix I for a list of these recommendations. Since late 1995, FAA and stakeholders have been working on various free flight recommendations and many associated initiatives and, in August 1996, agreed on an action plan to guide their implementation. According to FAA, through July 1998, they have fully implemented only 1 of the 35 near-term recommendations—to incorporate airline schedule updates, such as delays and cancellations, into FAA’s Traffic Flow Management system to help it reduce unnecessary restrictions and delays imposed on airline operations. 
However, FAA and stakeholders have made substantial progress in implementing many of the initiatives under the near-term recommendations. For example, as outlined under a recommendation to extend the benefits of data exchange, FAA has deployed digital pre-departure clearances at 57 sites, which provide pilots with departure information via digital cockpit displays and reduce the need for voice messages. In addition, 49 of these sites have Digital Automatic Terminal Information Service, which provides information about current weather, airport, and facility conditions around the world. Digital communications provide an advantage over voice communications by helping to relieve congested voice frequencies and reduce the number of operational errors that are caused directly or indirectly by miscommunication. Under another recommendation, FAA is working with stakeholders through an RTCA task force to find ways to reduce the time and cost associated with the agency’s process for approving new technologies for flight operations. To address another recommendation, FAA has deployed a technology, on a limited basis, for controllers’ use that is expected to improve the sequencing of air traffic as aircraft enter, leave, and operate within terminal airspace. Work is under way on six midterm and three far-term recommendations and their associated initiatives. For the most part, these recommendations focus on incremental improvements to the core technologies that are being deployed under Free Flight Phase 1 and those planned for deployment under Flight 2000 (now the Free Flight Operational Enhancement Program). For example, in the midterm, FAA has begun to modify controllers’ workstations and supporting computer equipment to accept, process, and display data received from satellites. 
In addition, under a far-term recommendation, FAA is studying the feasibility of using satellite-based information to provide more precise information for landing during periods of limited visibility. FAA also noted that while it was in the early stages of planning for the implementation of free flight, it took steps to maximize the air traffic control system’s capacity and efficiency by extending flexibilities to users—to select and fly more efficient flight paths when operating in designated altitudes/areas—through programs such as the National Route Program (NRP). FAA has two early efforts under way to allow users (under certain conditions) to select routes and procedures that will save them time and money—NRP and the Future Air Navigation System (FANS). Established in 1990, NRP is intended to conserve fuel by allowing users to select preferred or direct routes. FAA estimates that NRP saves the aviation industry over $40 million annually. These savings are realized, in part, because pilots are allowed to take advantage of favorable winds or minimize the effects of unfavorable winds, thereby reducing fuel consumption. Initially allowed only at higher altitudes, the program has been expanded to include operations down to 29,000 feet. FAA is also working to decrease, where appropriate, the present restriction that flights must be 200 miles from their point of departure before they can participate and must end their participation 200 miles prior to landing. FANS uses new technologies and procedures that enhance communication between pilots and air traffic controllers and provide more precise information on the position of aircraft—allowing for improvements in air navigation safety and in the ability of air traffic controllers to monitor flights. 
Used primarily over the oceans and in remote areas normally out of the range of ground-based navigation aids, FANS uses digital communication more than voice communication to exchange information such as an aircraft’s location, speed, and altitude. Although FANS is gradually being implemented in many regions and countries, the aviation community believes that for its full operational benefits (such as time and fuel savings) to be realized, air traffic control procedures need to be modified to shorten the distance currently required between aircraft. They also contend that FAA needs to deploy the promised hardware and software (Automatic Dependent Surveillance or ADS) infrastructure for FANS in facilities that support airline operations primarily over the Pacific Ocean in order for these benefits to be realized. In November 1997, the FAA Administrator began an outreach effort with the aviation community to build consensus on and seek commitment to the future direction of the agency’s modernization program. As part of this effort, she formed a task force of senior transportation officials, union leaders, and executives and experts from the aviation community to assess the agency’s modernization program—including the NAS architecture—and develop a plan for moving forward. Much as we found in reviewing the system’s logical architecture in February 1997, the task force found that the architecture under development appropriately built on the concept of operations for the NAS and identified the programs necessary to meet users’ needs. However, the task force found that the architecture was insufficient because of issues associated with cost, risk, and lack of commitment from users. In response, the task force recommended a phased approach that would cost less, focus more on providing near-term benefits to users, and modernize the NAS incrementally. 
Many of the initiatives identified under the near- and midterm recommendations will be included under this phased approach because these initiatives are expected to provide early benefits for users. A central tenet of this approach is the “build a little, test a little” concept of technology development and deployment—intended to limit efforts to a manageable scope, identify and mitigate risks, and deploy technologies before the system is fully mature when they can immediately improve the system’s safety, efficiency, and/or capacity. Such a phased approach to implementing free flight was designed to help the agency avoid repeating past modernization problems associated with overly ambitious cost, schedule, and performance goals and to restore users’ faith in its ability to deliver on its promises. As a first step toward the phased implementation of free flight, FAA—in coordination with stakeholders—outlined a plan for Free Flight Phase 1 in early 1998. This plan is expected to be implemented by 2002. As currently envisioned, Free Flight Phase 1 calls for the expedited deployment of certain NAS technologies. The technologies, which are at various stages of development and will be further refined and tested, are the (1) Controller Pilot Data Link Communications (CPDLC) Build 1, (2) User Request Evaluation Tool (URET), (3) Single Center Traffic Management Advisor (TMA), (4) Collaborative Decision Making (CDM), (5) Surface Movement Advisor (SMA), and (6) Passive Final Approach Spacing Tool (pFAST). In general, these technologies are expected to provide tools for controllers that will help to increase the safety and capacity of the air traffic control system and benefit users through savings on fuel and crew costs. For example, FAA and many stakeholders believe that TMA and pFAST should improve controllers’ ability to more efficiently sequence traffic to improve its flow in crowded terminal airspace. 
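The sequencing concept behind tools like TMA and pFAST can be illustrated with a minimal sketch. This is not the actual logic of either tool — the first-come-first-served ordering and the 2-minute spacing requirement below are illustrative assumptions — but it shows how scheduling arrivals against a minimum inter-arrival spacing smooths the flow of bunched traffic into terminal airspace:

```python
# Hypothetical sketch of arrival sequencing with a minimum inter-arrival
# spacing. Not TMA's or pFAST's actual algorithm; the ordering rule and
# spacing value are illustrative assumptions.

def sequence_arrivals(etas_min, min_spacing_min=2.0):
    """Given estimated times of arrival (in minutes), assign runway times
    in ETA order, delaying each aircraft just enough to preserve the
    required spacing behind its predecessor."""
    scheduled = []
    last = None
    for eta in sorted(etas_min):
        slot = eta if last is None else max(eta, last + min_spacing_min)
        scheduled.append(slot)
        last = slot
    return scheduled

# Four aircraft bunched at the terminal boundary are metered into
# evenly spaced arrival slots:
print(sequence_arrivals([10.0, 10.5, 11.0, 15.0]))  # [10.0, 12.0, 14.0, 16.0]
```

By computing such a schedule continuously and advising controllers of the required delays while aircraft are still en route, a sequencing tool lets spacing be absorbed efficiently with small speed adjustments rather than holding or vectoring near the airport.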
Similarly, they believe the URET conflict probe will improve controllers’ ability to detect and resolve potential conflicts sooner than present technology allows. However, in June 1998, the air traffic controllers’ union at one of the two en route centers where URET is being tested asked that its use be terminated until several concerns about its use in the current environment can be resolved. Termination did not occur at this facility, and the issue has been elevated to the regional level within FAA for resolution. See appendix II for a summary of the status of the recommendations related to Free Flight Phase 1. The aviation community generally agrees on the core technologies for Free Flight Phase 1 and on the locations proposed for deploying and testing these technologies. See figure 2.1 for these sites. In addition, FAA is currently developing a Free Flight Phase 1 plan that will provide more details on implementing the program and recently appointed a program manager to lead this effort. As a companion effort, FAA has charged RTCA with responsibility for building consensus within the aviation community on how best to revise the vision for modernization (operational concept) and to develop the blueprint (architecture/framework) for carrying out the modernization. It is critical that the vision for modernization and the blueprint for implementing this vision be tightly integrated to help ensure that free flight activities are coordinated and working toward common goals. In January 1997, Vice President Gore announced an initiative—Flight 2000—to demonstrate and validate the use of navigation capabilities to support free flight. FAA then expanded Flight 2000 to include communication and surveillance technologies. FAA viewed Flight 2000 as an exercise for testing free flight technologies and procedures in an environment where safety hazards could be minimized. 
FAA expected the Flight 2000 program to validate the benefits of free flight, evaluate transition issues, and streamline the agency’s procedures for ensuring that new equipment is safe for its intended use. Proposed primarily for Alaska and Hawaii, Flight 2000 would have tested communication, navigation, and surveillance technologies, such as the Global Positioning System (GPS) and its augmentations, the Wide Area Augmentation System (WAAS) and the Local Area Augmentation System (LAAS); Controller Pilot Data Link Communications (CPDLC); and ADS-B technology. FAA initially selected these locations because they offer a controlled environment with a limited fleet, which includes all classes of users, all categories of airspace, and wide ranges of weather conditions and terrain. However, many in the aviation community questioned whether the lessons learned in Alaska and Hawaii would apply to operations in the continental United States. At their urging, FAA agreed to add at least one site within the continental United States to the Flight 2000 demonstration. Collaborative efforts between FAA and stakeholders on Flight 2000—through RTCA—have led to broad consensus on a general roadmap for restructuring this demonstration program, including four criteria for selecting the candidate operational capabilities to be demonstrated. In general, under these criteria (1) industry and FAA must address all aspects of modernization to be successful in moving toward free flight; (2) expected benefits are the major reason for implementing a given capability; (3) the capability does not interfere with or slow down any near-term activities; and (4) the risks associated with operational capabilities that require integrating multiple communication, navigation, and surveillance technologies should be addressed. Using these criteria, FAA and stakeholders reviewed over 70 potential operational capabilities and selected 9 of them. 
They also recommended demonstration locations in the Ohio Valley and Alaska. (See app. III for a description of these capabilities and the expected operational benefits.) For example, under this proposal, FAA would provide more accurate weather information to pilots and controllers to improve safety and potentially reduce flight times. In addition, FAA would improve airport surface navigation capabilities by providing pilots (and operators of other surface vehicles) with moving maps that display traffic in low-visibility conditions. FAA and stakeholders also recommended that the program be renamed the “Free Flight Operational Enhancement Program.” Stakeholders and FAA recognize that more detailed planning is needed—to identify risk-mitigation activities, select the final site, and estimate costs, schedules, and the number of required aircraft—and that this planning will require close coordination between FAA and industry to ensure that plans are consistent with stated operational capabilities and are achievable by FAA and users. FAA is currently considering the proposed RTCA roadmap for the restructured Flight 2000 demonstration and expects to reach a decision in the fall of 1998. If approved as scheduled, a detailed plan is expected by the end of 1998. FAA’s plan to implement free flight through an evolutionary (phased) approach is generally consistent with past recommendations that we and others have made on the need for FAA to achieve a more gradual, integrated, and cost-effective approach to managing its modernization programs. However, FAA and stakeholders recognize that significant challenges must be addressed if the move to free flight, including Free Flight Phase 1 and Flight 2000 (now the Free Flight Operational Enhancement Program), is to succeed. While FAA must address many of the challenges, stakeholders recognize that, as partners, they must assist the agency. 
The challenges for FAA are to (1) provide effective leadership and management of modernization efforts—including cross-program communication and coordination; (2) develop plans—in collaboration with the aviation community—that are sufficiently detailed to move forward with the implementation of free flight—including the identification of clear goals and measures for tracking the progress of the modernization efforts; (3) address outstanding issues related to the development and deployment of technology—such as the need to improve the agency’s process for ensuring that new equipment is safe for its intended use and methods for considering human factors; and (4) address other issues, such as the need for FAA to coordinate its modernization and free flight efforts with those of the international community and integrate free flight technologies. FAA and stakeholders identified a number of managerial issues that will need to be addressed if free flight is to be implemented successfully. For example, the agency will need to (1) provide strong senior leadership to guide the implementation of free flight both within and outside the agency and (2) implement an evolutionary rather than a revolutionary approach to modernization. Successfully addressing these issues will help the agency effectively implement free flight. Some FAA officials and stakeholders said that the agency will need to provide strong leadership both inside the agency and within the aviation community for free flight to be implemented successfully. For example, some FAA officials and stakeholders said that the agency will need to improve the effectiveness of its internal operations by encouraging communication and cooperation between the various program offices responsible for its free flight efforts. Additionally, some FAA officials and stakeholders said that the agency will need to continue efforts to build consensus among the aviation community and gain its commitment to the direction of the agency’s plans for modernization. 
Some FAA officials and stakeholders told us that improvements in communication and coordination across FAA program offices are needed to implement free flight successfully. For example, one FAA official told us that the primary challenge facing the agency in its efforts to implement free flight is developing effective communication and coordination across program lines. Some stakeholders shared this concern, observing that the various program offices within FAA do not communicate well or effectively coordinate their activities. Thus, according to some within FAA and stakeholders, despite the agency’s move to using cross-functional, integrated product teams to improve accountability and coordination across FAA, these teams have become insular and some team members tend to be motivated primarily by the priorities and management of the offices that they represent rather than the goals of a given team. One stakeholder stressed that the effectiveness of these teams has also been limited by (1) inadequate training of members on how to operate a team and (2) the fact that these teams are given responsibility for projects without the commensurate authority they need to carry out their responsibilities. Some stakeholders also noted that the agency has not made a number of decisions about modernization because of ongoing disagreements among various program offices over how best to proceed with its various components, such as the selection of new free flight technologies for communicating information digitally rather than by voice. The concerns cited above are consistent with our prior work on FAA’s culture as it affects acquisition management. In particular, we found that the agency has previously had difficulty communicating and coordinating effectively across traditional program lines. In addition, we learned from some FAA staff and functional managers that FAA has encountered resistance to the integrated product team concept and these teams’ operations. 
As we reported, one major factor impeding coordination has been FAA’s organization into different divisions whose “stovepipes,” or upward lines of authority and communication, are separate and distinct. Because FAA’s operational divisions are based on a functional specialty, such as engineering, air traffic control, or equipment maintenance, getting the employees in these units to work together has been difficult. Internal and external studies have found that the operations and development sides of FAA have not forged effective partnerships. To its credit, FAA is currently attempting to improve cross-agency communication and coordination by developing incentives for staff to work toward the agency’s goals and priorities. Plans are also under way to develop contracts with each integrated product team to hold its members accountable for developing and deploying a given operational capability. According to FAA officials, these contracts are intended to improve accountability for delivering technologies; in the past, such accountability has not been clearly assigned. In addition, efforts are under way to work with the aviation community to resolve disagreements that have persisted among FAA program offices, such as how to proceed with the use of digital communication. While stakeholders generally applaud FAA’s efforts to build consensus among stakeholders, some believe that the agency must be prepared to exercise strong leadership by (1) making difficult decisions after weighing stakeholders’ competing priorities, (2) holding to these decisions even amidst new and conflicting opinions about the value of one course of action over another, and (3) delivering on its commitments. Some stakeholders said they were particularly frustrated when, after announcing a planned course of action, FAA later delayed its implementation or retracted it and moved in a different direction. 
Some stakeholders told us that such indecision makes it very difficult for them to make plans for the future—such as determining investments for avionics upgrades—and further erodes their confidence in the agency’s ability to manage modernization programs and provide leadership to the aviation community. For example, several stakeholders cited FAA’s failure to deliver the ground-based infrastructure, needed for users to accrue benefits from equipping with new technologies under the Future Air Navigation System program, as a warning signal to them to proceed cautiously, since the agency may not deliver on its promises. In particular, users are concerned that if they invest in new technologies, they will not realize benefits in a timely manner to offset these investments. Some stakeholders believe that for FAA to successfully implement free flight, it must demonstrate that it can effectively manage its air traffic control modernization programs and deliver promised capabilities. To do so, FAA will need to implement an evolutionary approach to technology development and deployment. According to FAA, under such an approach, it will limit the scope of project segments so that it can deploy, test, evaluate, and refine a given technology until it obtains the desired capabilities. One stakeholder familiar with this approach emphasized the importance for FAA, in implementing it, of (1) assessing risks, (2) developing metrics, (3) limiting the scope of each phase of development, (4) evaluating progress before moving forward with the next phase of development, and (5) retraining staff. These steps would be applied to each cycle of the development process to help ensure that each completed iteration results in enhanced capabilities and moves a given technology closer to its desired level of maturity. FAA agreed that each of these steps will be important for successfully implementing this approach. 
FAA has not yet developed detailed plans for implementing this approach; however, in concept, it is consistent with our past recommendations that the agency avoid taking on unrealistic cost, schedule, and performance goals. For example, the recently developed plans for revising the Flight 2000 demonstration recommend an incremental approach, under which operational capabilities will be introduced over time into planned field demonstration sites. FAA and users expect such an approach will allow them to achieve success by taking smaller, less risky, more manageable steps. Some stakeholders told us that although they are encouraged by FAA’s efforts to date, they are taking a wait-and-see attitude as to whether the agency can effectively implement this approach to technology development and deployment. FAA and stakeholders have identified a wide range of concerns that need to be addressed to help ensure that efforts to implement free flight are sufficiently well developed as the agency moves forward with related modernization activities. These concerns include, among others, the need for (1) FAA—in collaboration with stakeholders—to develop clear goals and objectives for what it intends to achieve, as well as a measurement system for tracking progress, and (2) FAA and stakeholders to develop detailed plans that will allow for the cost-effective implementation of free flight. Stakeholders believe that more detailed plans are needed to provide the aviation community with assurances that moving forward with free flight is warranted. They believe that these plans should include the results of cost/benefit analyses, new procedures, and schedules for equipment installation. Because they expect that equipping with free flight technologies will be expensive, many users believe that FAA needs to demonstrate the near-term benefits of the new equipment—especially given FAA’s poor record of delivering promised benefits. 
As part of its efforts to develop plans for implementing free flight, FAA has conducted cost/benefit analyses to provide justification for free flight investments. However, stakeholders have raised concerns that these analyses have focused almost exclusively on the benefits to FAA. As a result, they believe that these analyses are of little value to users that must make business decisions about investing in new technologies. As one airframe manufacturer noted, FAA should develop a convincing case for changing the functions of the present system before selecting new technologies. While users expressed a desire for studies that consider their business needs, one airline official told us that meaningful cost/benefit analyses are very difficult to establish for the airline industry because the costs and benefits of equipping will vary considerably both among and within airlines. For example, the cost of investments and associated benefits will vary with factors such as (1) the cost of installing new avionics—including the cost of retrofitting older aircraft, (2) the timing of requirements for completing the installation of equipment, and (3) the routes flown. Even though these factors vary from one airline to another, some airlines expect FAA to conduct analyses that demonstrate that technology investments will be cost-effective for them. Similarly, DOD and general aviation users are concerned about potential penalties for not equipping their aircraft with technologies that will be needed to conduct operations under free flight. For example, DOD officials told us that they need more detailed information about whether—or under what circumstances—they may be excluded from certain airspace if they fail to equip with free flight technologies. DOD is also concerned that the lack of specificity in FAA’s plans may negatively affect its ability to meet its mission readiness requirements—including the ability to fly cost-efficient and effective routes. 
Some stakeholders have expressed concern that the cost of equipping with avionics for participation in the free flight environment may be prohibitive for the recreational end of the general aviation community. FAA is aware of this concern and plans to use the Flight 2000 demonstration (now the Free Flight Operational Enhancement Program) as a means for streamlining its process for ensuring the safety of new equipment for flight operations and developing affordable avionics for general aviation. A number of stakeholders told us that in order for FAA and users to fully exploit new capabilities to maximize the air traffic control system’s safety, capacity, and efficiency, the agency will need to develop procedures that will be used in the free flight environment. Such procedures will affect a wide range of operations. For example, new procedures will be required to approve, integrate, and deploy new technologies. New procedures will also be needed to enable pilots and controllers to use the new technologies. Hence, some stakeholders noted that it will be important for FAA to make explicit any changes in pilots’ and controllers’ roles and responsibilities. For example, if pilots and controllers are to share responsibility for making decisions about altitudes, speeds, and routes, the procedures need to be well defined. Under Free Flight Phase 1, the agency plans to implement new procedures as needed to demonstrate the use of new air traffic management tools that controllers will use to improve conflict detection and air traffic sequencing, among other things. Similarly, under Flight 2000 (now the Free Flight Operational Enhancement Program), FAA plans to develop new procedures for the new communication, navigation, and surveillance technologies that will be used by pilots and controllers. FAA is aware of the need to develop procedural changes for operations under free flight and is currently working with the aviation community to develop these new procedures. 
However, some stakeholders are concerned that the development and implementation of new procedures will not occur in a timely fashion. One of these stakeholders further stressed that having new equipment and technology working together is not enough, without new procedures, to deliver the benefits promised under free flight. Commercial airlines and DOD require adequate lead time to plan for the cost-effective installation of new equipment. To facilitate an efficient equipment installation process, FAA will need to work with users to consider their unique needs as they develop plans for moving to free flight operations. For example, to minimize costs, airlines would prefer to install new avionics within an aircraft’s regularly scheduled maintenance cycle. In addition, airlines do not want to install new equipment too early because they want to be able to take advantage of opportunities to purchase the best technologies at the lowest cost; however, they do not want to equip too late and miss out on the benefits. Similarly, because DOD must request funding well before installing new equipment, it needs ample lead time to develop budget requests and installation schedules for many of its aircraft, which number more than 16,000. Therefore, it is important for FAA to make timely decisions about future technology requirements and stick with those decisions to give all aviation user groups the lead time needed to ensure that their purchases are cost-effective and their installation schedules are efficient. To provide for a smooth transition, FAA has been working with DOD and other users to move forward with the selection of new technologies for operations under free flight. FAA’s most recent draft NAS architecture (blueprint) represents the agency’s attempt to provide the level of detail requested by the aviation community. However, some stakeholders have expressed concern that the draft architecture is too general to use in planning for future technology upgrades. 
For example, an airline representative noted that when airlines place orders for new aircraft, they request systems that provide maximum flexibility for later modifications or upgrades. However, future free flight equipment upgrades will still be costly, and the sooner FAA decides which technologies will be required for operations under free flight, the more effectively airlines can plan for those upgrades. Collaboration between FAA and stakeholders is critical to developing plans that will have the level of buy-in needed to start implementing free flight. FAA’s recent experiences in developing modernization plans have pointed to the need to work collaboratively with the aviation community from the outset of a given program to help ensure the effective resolution of issues as plans are developed. In March 1998, FAA and the aviation community reached consensus to begin implementing Free Flight Phase 1—including consensus on which technologies will be deployed and where. In addition, under this first phase, steps will be taken to identify and mitigate the risks associated with inserting new technologies and procedures into an operating air traffic control system. In contrast, until recently FAA and stakeholders have been sharply divided over the agency’s plans for conducting Flight 2000—a limited demonstration of free-flight-related communication, navigation, and surveillance technologies—primarily in Alaska and Hawaii. Problems began when the proposal was announced without consulting users and have persisted, despite FAA’s efforts to work collaboratively with stakeholders to resolve them. While many stakeholders we interviewed agreed with the need for FAA to conduct an operational demonstration of free flight technologies and related procedures, they had strong reservations about the utility of conducting such a demonstration in Alaska and Hawaii. 
In their view, few of the lessons learned would be transferable to operations in the continental United States, where free flight implementation will ultimately focus. In addition, stakeholders expressed concern that FAA has not focused enough attention on developing the detailed plans that it needs for conducting the demonstration, as required by the agency’s acquisition management system. In fact, the Department of Transportation’s fiscal year 1998 appropriation act prohibited FAA from spending any fiscal year 1998 funds on the Flight 2000 program. In the accompanying Conference Report for the act, the conferees noted that additional financial and technical planning was needed before the Flight 2000 demonstration program could be implemented. The Congress has not yet decided whether to fund this demonstration program in fiscal year 1999. To address these concerns, FAA has been working collaboratively with stakeholders—through RTCA—to develop a roadmap (general plans) for restructuring Flight 2000. These efforts have resulted in the (1) development of selection criteria for the operational capabilities to be used, (2) selection of demonstration sites in Alaska and the Ohio Valley, (3) selection of nine operational capabilities (see app. III), (4) proposed change of the program’s name from Flight 2000 to the “Free Flight Operational Enhancement Program,” and (5) revision of the time frame (1999-2004) for conducting the demonstration program. FAA is currently considering this RTCA proposal. FAA and stakeholders realize that they will need to continue to work collaboratively to refine these plans. The latest collaborative efforts appear to be a positive step toward developing the type of detailed plans FAA needs to carry out the demonstration and secure the necessary funding. Stakeholders and FAA officials identified several concerns about technology development and deployment that need to be resolved. 
Key among these were (1) the pace and cost of the agency’s process for ensuring that new equipment is safe for its intended use, (2) issues related to human factors, (3) uncertainties surrounding the use of GPS as a sole means of navigation, and (4) issues associated with the use of digital communication technologies. Many stakeholders and FAA officials stated that FAA’s certification process—methods for ensuring that new equipment is safe for its intended use—is a key challenge to the implementation of free flight because it takes too long and costs too much; they urged that the process be streamlined. The certification process could be problematic for free flight because many new types of equipment, such as those that are required for the use of new digital communication technology, will need to be certified before they can be implemented. As one aviation community stakeholder noted, “If something is going to change the aviation system, it has to go through the certification knot hole.” Recognizing that the certification process poses a barrier to implementing free flight, FAA has taken a number of steps to address this problem. For example, FAA asked, and RTCA agreed, to convene a task force to examine ways to improve the agency’s existing certification practices. The first meeting took place in June 1998, and the task force expects to report to FAA within 6 months. 
Among other things, this task force will (1) develop baseline information on the current system—including a review of avionics, infrastructure, and satellite needs; (2) consider human factors in the certification process—including how best to integrate human factors into the system’s design and operations; (3) identify ways to improve the current certification process—including an attempt to determine an acceptable range of failure for technologies and metrics for technology design and performance; and (4) review FAA’s certification services—including what customers should expect from the agency and alternative methods of satisfying certification requirements, such as granting approval authority for specific types of technologies to Centers of Excellence or individuals. In addition, RTCA has a special committee that is reviewing the use of digital communication technologies for free flight, including the development of standards that FAA could use to develop certification requirements. Furthermore, the agency plans to use the Flight 2000 (now the Free Flight Operational Enhancement Program) demonstration of free flight communication, navigation, and surveillance technologies as an opportunity for streamlining the agency’s equipment certification process. Several stakeholders told us that while the certification process could be streamlined, both FAA and stakeholders need to take a careful approach. They noted that the present system may be cumbersome, but it is providing the desired level of safety. If standards are going to be relaxed, then redundancies need to be built into the system to ensure that modifications to the certification process either maintain or improve upon existing levels of safety. Many stakeholders told us they believe that the successful implementation of free flight hinges on issues related to human factors, such as the ability and willingness of pilots, controllers, and maintenance staff to shift to a new system of air traffic management. 
Among the concerns raised are the need to (1) define the type of training that will best prepare human operators for the transition; (2) provide a reasonably paced training schedule to help ensure that pilots, controllers, and maintenance staff, in particular, are not overburdened with too many changes at one time; and (3) identify the risks associated with changes in technologies and procedures and the potential effects of these changes on human operations in a free flight environment. For example, a recent report by the National Research Council on human factors and automation raised concerns that, among other things, the increased use of automation may lead to confusion among pilots, controllers, and airline operations personnel over where control lies, especially in a free flight environment. As a result, the report recommended that until these and other human factors issues are better understood, the introduction of automated tools should proceed gradually and decision-making authority should continue to reside on the ground with controllers. A related issue is the need to incorporate the consideration of human factors into the product development cycle to avoid costly and cumbersome changes at the end of the development process. An FAA human factors official told us that FAA has learned a lot from its experience with the Standard Terminal Automation Replacement System (STARS) about the need to involve users in considering human factors throughout the product development cycle—from the mission needs statement forward. This official stressed that the agency can pay to consider human factors throughout the acquisition cycle or pay more later, as it is doing with STARS, to fix the problems that arise when these factors are not considered. Furthermore, when human factors are not considered along the way, problems cannot always be fixed. Fewer options are available at the end of a development cycle for modifying a given technology. 
Stakeholders agreed with this assessment. While FAA has developed guidelines for considering human factors during the technology development process, it has not established a formal requirement for using these guidelines. In June 1996, we reported that FAA’s work on human factors was not centralized, and we recommended that the Secretary of Transportation direct FAA to ensure that all units coordinate their research through the agency’s Human Factors Division. According to some FAA officials and one stakeholder, such coordination is still lacking and the agency’s programs would benefit from assigning responsibility for human factors to a higher level within FAA so that these issues can receive sufficient attention from the agency’s senior management. In addition, several stakeholders stressed the importance of retaining the same members on teams that address concerns about human factors through the entire development process. One of these stakeholders believes that such continuity will help ensure that the team’s efforts are not derailed late in the process by the inclusion of new members and the introduction of a range of new issues and methods of resolving them. Human factors must also be considered in the operating environments where technologies will be deployed. According to one stakeholder involved in human factors work, because the air traffic control system has evolved—rather than being designed—it does not operate in a homogeneous fashion, and when the system is changed, the effects on humans can vary widely. For example, both en route and terminal facilities tailor their operations in many ways to factor in local conditions. As a result, this stakeholder stressed that as many as 1,000 letters of agreement between various components of FAA and users making adjustments to operating rules and procedures may exist—making it difficult for the agency to generalize across the system when considering the introduction of changes or improvements. 
In addition, under free flight, users and controllers (as well as maintenance staff) will rely more heavily on automated technologies to carry out their responsibilities—making the integrity of the system even more critical than it is now and increasing the need for more redundant systems and training to ensure that controllers can successfully switch, if necessary, to manual control techniques. Satellite navigation provides precise information on the position of aircraft and offers the potential for the required distances between aircraft to be safely reduced and, in turn, for the air traffic control system’s capacity to be increased. FAA initially planned its augmented satellite navigation system to be a sole means of navigation under free flight. However, FAA and stakeholders have expressed concerns about the vulnerability of an augmented satellite system to both intentional and unintentional (e.g., radio frequency interference) jamming, and about problems associated with the system’s weak signal. In view of these concerns, the Air Transport Association and the Aircraft Owners and Pilots Association have, in coordination with FAA, developed plans for a risk assessment of augmentations to satellite navigation. A research organization was selected in July 1998 to conduct the assessment, and a final report is expected in January 1999. An Air Transport Association official told us that this risk assessment will address concerns about the vulnerability of satellite navigation and stressed that such a study is critical because the use of satellite navigation as a sole means of navigation is the centerpiece of FAA’s architecture (blueprint) and is the basis for the agency’s cost/benefit analyses. According to this official, a risk assessment is needed to identify the risks and develop mitigation plans and cost estimates for mitigating each risk. 
The results of this study could affect both the costs and benefits for FAA and users because if FAA does not use the augmented system as a sole means of navigation, it could incur additional costs to retain some portion of its ground-based navigational aids. Similarly, users may find it necessary to maintain existing equipment and to purchase new equipment under free flight. FAA and stakeholders consider digital communication technologies—commonly referred to as data link—as critical to implementing free flight. FAA expects that the use of data link—in combination with other free flight technologies—will improve safety, increase capacity, reduce costs, and enhance the productivity of humans and equipment. Data link will replace or supplement many of the routine voice interactions between pilots and controllers with nonvocal digital data messages. For example, during peak periods, one controller often may be required to communicate on a single radio channel with 25 or more aircraft—leading to possible operational errors and system delays. FAA believes that using data link will (1) reduce nearly one-quarter of all domestic operational errors—caused directly or indirectly by miscommunication between pilots and controllers, (2) relieve highly congested voice communication channels, and (3) save the airlines millions of dollars annually on communication-related delays that occur during both taxi and in-flight operations. Data link comprises three components: (1) software applications—including Controller Pilot Data Link Communications (CPDLC), weather information, and Automatic Dependent Surveillance (ADS); (2) hardware systems installed on the ground and avionics in the cockpit; and (3) the communication medium that allows for the transfer of data between the ground and airborne equipment. FAA is responsible for implementing ground systems, and the aviation community is responsible for implementing airborne systems. 
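The data link concept described above—routine pilot-controller exchanges carried as structured digital messages instead of voice transmissions—can be sketched in a few lines. This is purely illustrative: the message fields, class names, and formatting below are hypothetical and do not represent CPDLC's actual message set or encoding.

```python
# Illustrative sketch (not CPDLC's real message set): a routine clearance
# sent as a structured digital message rather than a voice exchange.
from dataclasses import dataclass


@dataclass
class DataLinkMessage:
    flight_id: str      # aircraft the message addresses
    message_type: str   # e.g., altitude clearance, route amendment
    payload: str        # the instruction itself
    requires_ack: bool  # whether the pilot must acknowledge before acting


def render(msg: DataLinkMessage) -> str:
    """Format a message for display in the cockpit or at the controller station."""
    ack = " (ACK REQUIRED)" if msg.requires_ack else ""
    return f"{msg.flight_id}: {msg.message_type} -> {msg.payload}{ack}"


clearance = DataLinkMessage("UAL123", "ALTITUDE", "CLIMB TO FL350", True)
print(render(clearance))  # UAL123: ALTITUDE -> CLIMB TO FL350 (ACK REQUIRED)
```

Because each message is discrete and addressed to a specific flight, it does not compete for a shared voice channel—which is the congestion-relief property the report attributes to data link.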
As partners, both FAA and the aviation community are responsible for ensuring the interoperability of these systems. Stakeholders told us that despite the importance of data link, many issues remain unresolved. Chief among these issues is the lack of agreement within FAA on how, when, and at what pace to proceed with the use of data link. This lack of agreement may be attributed, at least in part, to the fact that data link efforts are being managed and implemented by different organizational elements of FAA and by the aviation community. Recognizing this, FAA has been working with stakeholders to reach agreement on data link issues. In May 1998, a group of FAA officials and stakeholders under the Administrator’s NAS Modernization Task Force began developing a consensus plan for implementing controller pilot data link in the en route environment. In July 1998, this group presented its plan to RTCA for consideration. In August 1998, RTCA modified the plan and endorsed the implementation of CPDLC Build 1 as part of Free Flight Phase 1—recommending that the location and communication medium for CPDLC Build 1 be changed. FAA—in consultation with stakeholders—intends to further develop the plans for deploying CPDLC Build 1 by the end of 1998. FAA’s approval is expected by early 1999. A number of other issues were identified by FAA officials and stakeholders as needing resolution for free flight to be implemented successfully. Among these issues were the need to (1) coordinate modernization activities with the international aviation community, (2) integrate free flight technologies, and (3) address airport capacity issues. Airlines that operate internationally and DOD believe that FAA needs to work diligently to ensure that, to the extent possible, carriers do not have to purchase multiple types of avionics to operate in different parts of the world. 
Currently, both FAA and various elements of the aviation community are working collaboratively with their international counterparts on a number of modernization issues. For example, FAA is a member of an airline-led group with international participation—including Eurocontrol and several foreign airlines—known as the Communication, Navigation, and Surveillance/Air Traffic Management Focused Team. The purpose of this team is to facilitate the implementation of new communication, navigation, and surveillance and air traffic management technologies by developing consensus among global airlines on economic issues. In addition, the agency is working with the European community on human factors issues and data link applications. However, some stakeholders question the sufficiency of the agency’s efforts to coordinate technology selection decisions that will allow users to operate worldwide. Because the airline industry is becoming increasingly global, it requires the development of compatible operational concepts, technologies, and systems architectures throughout the world. One airframe manufacturer noted that the airlines are increasingly demanding global solutions to minimize the cost of changes to avionics and flight systems. The costs of purchasing new avionics, retrofitting them into the aircraft (and the down time required), and training pilots in their use for a large fleet of airplanes will quickly exceed any benefits if these benefits are not realized as soon as additional or improved capabilities are introduced. According to some stakeholders, FAA has historically been the international leader in air traffic control modernization efforts—a position that has given the agency the flexibility to develop and deploy technologies that best serve the needs of users in the United States. However, many stakeholders expressed concern that FAA’s position as the international leader in this arena has eroded in recent years. 
According to some of these stakeholders, the United States may have to follow the lead of the European community in selecting the types of new technologies that will be used under free flight. For example, some stakeholders noted that Europe is at least 3 years ahead of the United States in developing and deploying the data link technology that will serve as a centerpiece for implementing free flight. While several stakeholders noted that valuable lessons may be learned from the Europeans’ work on data link, one stakeholder stressed that it is important for the United States to position itself so that it can make decisions about technology requirements that best reflect the needs of U.S. operations. Some FAA officials and stakeholders told us that the agency needs to integrate free flight technologies with one another and into the operating air traffic control system. This integration is expected to allow FAA and users to fully exploit the capabilities of these technologies to help ensure that promised improvements in safety, capacity, and efficiency are realized. For example, as noted in chapter 2, FAA has new technologies that are expected to improve the efficiency of operations at high altitudes, close to the terminal, and on the ground. Because some of these technologies have not been designed to work together, some stakeholders and FAA officials contend that their potential benefits—e.g., allowing the distance between aircraft to be safely reduced, when practical, throughout a flight’s operation—will not be maximized unless they are integrated. One airframe manufacturer noted that the key impediment to changing the NAS is not new technology, but how to integrate that technology into an operating NAS. As a result, care must be taken to help ensure that planned changes in operations, procedures, and airspace usage will not adversely affect safety and will meet users’ future needs. 
Another stakeholder noted that integrating new technologies (and associated procedures) into the present operating system is difficult because there are complex interdependencies between the technologies currently being used—making incremental changes to the system difficult and the consequences of introducing abrupt changes unpredictable. Stakeholders have raised concerns that FAA does not have sufficient internal expertise to complete integration tasks. FAA officials acknowledge that they do not have the internal expertise or experience to do the avionics systems integration work for Flight 2000 (now the Free Flight Operational Enhancement Program); the agency plans to hire an integration contractor to do this work. FAA believes that it has sufficient expertise to do the remainder of the integration work required for free flight. However, to enhance expertise within the agency, FAA has identified competencies essential to efficiently manage complex acquisition programs and is providing a variety of opportunities for staff to further develop their expertise. Stakeholders questioned whether FAA is paying enough attention to increasing airport capacity. Many stakeholders stressed that using free flight in the en route environment may get aircraft to their destinations sooner, but the planes may then be delayed by limits on airport surface capacity, such as too few runways and gates. Several stakeholders also stressed that poor weather conditions limit airports’ capacity and said that more sophisticated technology is needed to predict hazardous weather conditions so that airports’ capacity can be optimized. In June 1998, we reported that FAA has not assigned weather information a high priority in its plans for the NAS architecture. Because weather information is not considered critical, research on weather systems is often among the first to be cut—potentially jeopardizing multiyear studies of weather problems affecting aviation. 
Given the significant impact of hazardous weather on aviation safety and efficiency, improving the weather information available to all users should be one of FAA’s top priorities. The agency is taking steps to address its shortcomings in this area, and in fiscal year 1999, FAA is elevating weather research as a funding priority.

Pursuant to a congressional request, GAO reviewed the: (1) status of the Federal Aviation Administration's (FAA) efforts to implement free flight, including a planned operational demonstration formerly known as Flight 2000 and now called the Free Flight Operational Enhancement Program; and (2) views of the aviation community and FAA on the challenges that must be met to implement free flight in a cost-effective manner. GAO noted that: (1) since 1994, FAA officials and stakeholders, under the leadership of the Radio Technical Commission for Aeronautics (RTCA), have been collaborating to implement free flight; (2) these early efforts led to a definition of free flight, a set of recommendations, and an action plan to gradually move toward a more flexible operating system; (3) while working to implement the recommendations, FAA and stakeholders agreed on the need to focus their efforts on deploying technologies that will provide early benefits to users; (4) in early 1998, FAA and stakeholders developed a strategy that calls for the phased implementation of free flight, beginning with Free Flight Phase 1; (5) under this first phase, FAA and stakeholders have agreed upon the core technologies that are expected to provide these early benefits, as well as the locations where they will be deployed; (6) however, until recently, FAA and many stakeholders have not agreed on how best to conduct a limited operational demonstration of free-flight-related technologies and procedures--known as the Flight 2000 Program; (7) FAA is currently prohibited from spending any fiscal year 1998 funds on the Flight 2000 demonstration itself; (8) stakeholders concurred that FAA had yet to develop a detailed plan for conducting this demonstration; (9) to address the concerns of stakeholders, FAA has been working with them--under the leadership of RTCA--to restructure the Flight 2000 demonstration; and (10) FAA and stakeholders have identified numerous challenges that will need to be met if free flight--including Free Flight Phase 1 and Flight 2000--is to be implemented cost-effectively: (a) stakeholders told GAO that FAA will need to provide effective leadership and management of the modernization efforts both within and outside the agency; (b) stakeholders cited the need for FAA to further develop its plans for implementing free flight, including establishing clear goals for what it intends to achieve and developing measures for tracking the progress of modernization and free flight; (c) FAA and stakeholders agreed on the need to address outstanding issues related to technology development and deployment; and (d) FAA and stakeholders also identified a range of other challenges that will need the agency's attention, including coordinating FAA's modernization and free flight efforts with those of the international community and integrating the various technologies that will be used under free flight operations with one another as well as into the air traffic control system.
While the term “data center” can be used to describe any room used for the purpose of processing or storing data, OMB defines a data center as a room that is greater than 500 square feet, that is used for processing or storing data, and that meets stringent availability requirements. Other facilities are classified as “server rooms,” which are typically less than 500 square feet, and “server closets,” which are typically less than 200 square feet. According to OMB, the number of federal data centers grew from 432 in 1998 to 2,094 in July 2010. Operating such a large number of centers places costly demands on the government. While the total annual federal spending associated with data centers has not yet been determined, OMB has found that operating data centers is a significant cost to the federal government, including hardware, software, real estate, and cooling costs. For example, according to the Environmental Protection Agency (EPA), the electricity cost to operate federal servers and data centers across the government is about $450 million annually. According to the Department of Energy (Energy), data center spaces can consume 100 to 200 times as much electricity as standard office spaces. Reported server utilization rates as low as 5 percent and limited reuse of these data centers within or across agencies lend further credence to the need to restructure federal data center operations to improve efficiency and reduce costs. In 2010, the Federal Chief Information Officer (CIO) reported that operating and maintaining such redundant infrastructure investments was costly, inefficient, and unsustainable. Concerned about the size of the federal data center inventory and the potential to improve the efficiency, performance, and environmental footprint of federal data center activities, in February 2010 OMB, under the direction of the Federal CIO, announced the Federal Data Center Consolidation Initiative (FDCCI). 
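OMB's size-based definitions above amount to a simple classification rule. The sketch below is illustrative only—the function and parameter names are hypothetical, and the boundary between a server room and a server closet is stated in the report only as "typical" sizes, so the thresholds here are an assumption.

```python
# Illustrative sketch of OMB's facility definitions as described in the
# report: a data center is over 500 sq ft AND meets stringent availability
# requirements; smaller facilities are server rooms (typically under 500
# sq ft) or server closets (typically under 200 sq ft). The exact
# room/closet boundary is assumed, not specified by OMB.

def classify_facility(square_feet: float, meets_availability_reqs: bool = False) -> str:
    """Classify a federal computing facility per OMB's FDCCI definitions."""
    if square_feet > 500 and meets_availability_reqs:
        return "data center"
    if square_feet >= 200:
        return "server room"
    return "server closet"

print(classify_facility(750, meets_availability_reqs=True))  # data center
print(classify_facility(350))                                # server room
print(classify_facility(120))                                # server closet
```

Note that under this reading, square footage alone is not sufficient: a large room that does not meet the availability requirements would not count toward the 2,094 data centers OMB reported.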
This initiative’s four high-level goals are to promote the use of “green IT” by reducing the overall energy and real estate footprint of government data centers; reduce the cost of data center hardware, software, and operations; increase the overall IT security posture of the government; and shift IT investments to more efficient computing platforms and technologies. As part of FDCCI, OMB required 24 departments and agencies that participate on the Chief Information Officers Council (see table 1) to submit a series of documents that ultimately resulted in a data center consolidation plan. Specifically, the departments and agencies were to provide the following:

An initial asset inventory (due April 30, 2010), which was to provide a high-level understanding of the scale and size of existing data centers, IT infrastructure assets, and applications supported by the data centers.

An initial data center consolidation plan (due June 30, 2010), which was to identify potential areas for consolidation, areas where optimization through server virtualization or cloud computing alternatives could be used, and a high-level roadmap for transitioning to the consolidated end-state architecture.

A final asset inventory baseline (due July 30, 2010), which was to contain more detailed information and serve as the foundation for developing the final data center consolidation plans. The final inventory was also to identify the consolidation approach to be taken for each data center.

A final data center consolidation plan (due August 30, 2010), which was to be incorporated into the agency’s fiscal year 2012 budget and was to include a technical roadmap and approach for achieving the targets for infrastructure utilization, energy efficiency, and cost efficiency.

In October 2010, OMB reported that all of the agencies had submitted their plans and that there were 2,094 federal data centers as of July 2010.
OMB announced plans to monitor agencies’ consolidation activities on an ongoing basis as part of the annual budget process. Further, starting in fiscal year 2011, agencies will be required to provide an annual updated data center asset inventory at the end of every third quarter and a consolidation progress report at the end of every fourth quarter. To manage the initiative, OMB designated two agency CIOs as executive sponsors to lead the effort within the Chief Information Officers Council. Additionally, the General Services Administration (GSA) has established the FDCCI Program Management Office, whose role is to support OMB in the planning, execution, management, and communication for FDCCI. In this role, GSA collected the responses to the four document deliveries and reviewed the submissions for completeness and reasonableness. GSA also sponsored three workshops on the initiative for agencies and facilitated a peer review of the initial and final data center consolidation plans. In December 2010, OMB published its 25-Point Implementation Plan to Reform Federal Information Technology Management as a means of implementing IT reform in the areas of operational efficiency and large-scale IT program management. Among the 25 initiatives, OMB has included two goals that relate to data center consolidation:

1. By June 2011, complete detailed implementation plans to consolidate at least 800 data centers by 2015.

2. By June 2012, create a governmentwide marketplace for data center availability.

To accomplish its first goal, OMB required each FDCCI agency to identify a senior, dedicated data center consolidation program manager. It also launched a Data Center Consolidation Task Force composed of the data center consolidation program managers from each agency. OMB officials stated that this task force is critical to driving forward on individual agency consolidation goals and to meeting the overall federal target of closing a minimum of 800 data centers by 2015.
To that end, in April 2011, OMB announced plans to close 137 data centers by the end of December 2011. OMB also plans to launch a publicly available dashboard for observing agencies’ consolidation progress, but the dashboard has not yet been completed. To accomplish its second goal, OMB and GSA plan to create a governmentwide marketplace by June 2012 that will better utilize spare capacity within operational data centers. This online marketplace is intended to match agencies that have extra capacity with agencies with increasing demand, thereby improving the utilization of existing facilities. The marketplace will help agencies promote their available data center space. Once agencies have a clear sense of the existing capacity landscape, they can make more informed consolidation decisions. We have previously reported on OMB’s efforts to consolidate federal data centers. In March 2011, we reported on the status of the FDCCI and noted that data center consolidation makes sense economically and as a way to achieve more efficient IT operations, but that challenges exist. For example, agencies face challenges in ensuring the accuracy of their inventories and plans, providing upfront funding for the consolidation effort before any cost savings accrue, integrating consolidation plans into agency budget submissions (as required by OMB), establishing and implementing shared standards (for storage, systems, security, etc.), overcoming cultural resistance to such major organizational changes, and maintaining current operations during the transition to consolidated operations. We further reported that mitigating these and other challenges will require commitment from the agencies and continued oversight by OMB and the Federal CIO. To help agencies plan their consolidations, OMB issued guidance on the required content of data center inventories and consolidation plans.
Specifically, the inventories were to include descriptions of the assets present within individual data centers, as well as information about the physical data center itself. The consolidation plans were to address key elements, including goals, approaches, schedules, cost-benefit calculations, and risk management plans. As required, 23 of the 24 agencies submitted their inventories and consolidation plans by the end of September 2010; the remaining agency explained that consolidation was not applicable to it. However, of the 23 reporting agencies, all but one of the inventories and all of the plans are missing key elements. For example, 14 agencies do not provide a complete listing of data centers and 15 do not provide a complete listing of software assets in their inventories. Further, OMB did not require that agencies verify these inventory data. Additionally, in their consolidation plans, 20 agencies do not provide a master schedule, 12 agencies do not address cost-benefit calculations, and 9 do not address risk management. Several agency officials noted that they had difficulty completing their inventories and plans within OMB’s timelines. Other agencies reported trouble with identifying either required information for the plans or data on the assets within their data centers. Until these inventories and plans are complete, agencies may not be able to implement their consolidation activities or to realize expected cost savings. Moreover, without an understanding of the validity of agencies’ consolidation data, OMB cannot be assured that agencies are providing a sound baseline for estimating consolidation savings and measuring progress against those goals. In their consolidation plans, agencies identified goals for reducing the number of data center facilities and these facilities’ related costs.
Specifically, the 23 reporting agencies identified 1,590 data centers as of April 2011, and established goals for reducing that number by 652 centers by the end of fiscal year 2015. Most federal departments and agencies also estimated cost savings over time. Specifically:

Fourteen agencies reported savings totaling about $700 million between fiscal years 2011 and 2015; however, actual savings may be even higher because 12 of these agencies’ estimates were incomplete. For example, 11 agencies included expected energy savings and reductions in building operational costs, but not savings from other sources, such as equipment reductions.

Two agencies expect to accrue net savings after fiscal year 2015.

Two agencies do not expect to attain net savings from their consolidation efforts.

Five agencies did not provide estimated cost savings; however, two of these agencies suggested that they plan to develop cost-benefit analyses in the future.

As part of the data center consolidation initiative, OMB required agencies to provide an inventory of data center assets. This inventory is to address four key elements: (1) IT software assets; (2) IT hardware assets and their utilization; (3) IT facilities, energy, and storage; and (4) geographic location and real estate. According to OMB’s guidance, the information is to be organized by data center. For example, in the IT software area, agencies are to report by data center on each major and nonmajor system present in the center. For each identified system, the agency is to report the associated support platforms, servers and computers, and proposed consolidation approach (i.e., decommissioning, consolidation, cloud computing, or virtualization). Table 2 provides a detailed description of each of the four key elements. When collecting data, it is important to have assurance that the data are accurate.
We have previously reported on the need for agencies, when providing information to OMB, to explain the procedures used to verify their data. Specifically, agencies should ensure that reported data are sufficiently complete, accurate, and consistent, and also identify any significant data limitations. Explaining the limitations of information can provide a context for understanding and assessing the challenges agencies face in gathering, processing, and analyzing needed data. Such a presentation of data limitations can also help identify the actions needed to improve the agency’s ability to measure its performance. More recently, we have reiterated the importance of providing OMB with complete and accurate data and the possible negative impact of that data being missing or incomplete. Only 1 of the 23 agency data center inventories contains complete data in all four of the required elements. Specifically, while many agencies provide partial inventory data, one agency provides complete information in all four areas, five agencies provide complete information in three of the four areas, one agency provides complete information for two of the areas, eight agencies have complete information for only one area, and eight agencies do not have any complete areas in their inventories. Figure 1 provides an assessment of the completeness of agencies’ inventories, by key element, and a discussion of the analysis of each area follows the figure. IT software assets. Eight agencies provide complete information on their software assets; 14 agencies provide partial information; and 1 agency did not provide information. 
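A completeness assessment of this kind amounts to scoring each agency on the four required inventory elements and summarizing the distribution. The following sketch uses hypothetical agency names and flags (not GAO's actual assessment data) to show one way such a tally could be computed:

```python
from collections import Counter

# Hypothetical completeness flags for the four required inventory
# elements, in order: (IT software; IT hardware and utilization;
# facilities, energy, and storage; geographic location and real estate).
# True means the agency's inventory is complete for that element.
inventories = {
    "Agency A": (True, True, True, True),
    "Agency B": (True, True, True, False),
    "Agency C": (True, False, False, False),
    "Agency D": (False, False, False, False),
}

# Count complete elements per agency, then summarize across agencies.
distribution = Counter(sum(flags) for flags in inventories.values())
for n in range(4, -1, -1):
    print(f"{distribution.get(n, 0)} agencies complete in {n} of 4 elements")
```

The same structure extends naturally to the consolidation-plan elements (quantitative goals, master schedule, cost-benefit analysis, and so on) by widening the tuple of flags.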
For example, the Small Business Administration (SBA) provides information on its data center systems, their technical dependencies on platforms and servers, and consolidation approaches for its systems; while GSA provides information on its data center systems and a consolidation approach for each system, but provides only partial information on each system’s technical dependencies on platforms and servers. Additionally, Energy provides only partial information on the systems in its data center, the systems’ technical dependencies, and consolidation approaches for those systems. IT hardware assets and utilization. Nine agencies provide complete information on their IT hardware assets and the utilization of those assets and 14 provide partial information. For example, EPA provides complete information on maximum and average server utilization, as well as counts of its physical servers, virtual hosts, and virtual operating systems; while SBA provides complete information on counts of its physical servers, virtual hosts, and virtual operating systems, but only partial information on maximum and average server utilization. Another 7 agencies, including the Departments of Defense (Defense), Homeland Security (DHS), and Transportation (Transportation), provide partial information on their maximum and average server utilization, and on their counts of physical servers, virtual hosts, and virtual operating systems. IT facilities, energy, and storage. Three agencies provide complete information on their IT facilities, energy, and storage, while 20 provide partial information. For example, the Department of State (State) includes all the required information, while the Department of Education (Education) provides complete information on its annual data center operational cost, total rack count, and storage information, but only provides partial information on its annual data center electricity cost and total electricity usage. 
Also, the National Aeronautics and Space Administration (NASA) has partial information on its total rack count and does not provide any information for the other required parts of this element. Geographic location and real estate. Nine agencies provide complete information on their data center locations, while 14 provide partial information. For example, the Office of Personnel Management (OPM) provides complete information on its number of data centers and the gross floor area of those centers, but does not provide information on the number of server rooms and closets and the gross floor area of those facilities. Other agencies, such as Energy and the Department of Labor (Labor), provide only partial information on their number of centers, server rooms, and closets, and the gross floor area of those facilities. Because agency goals are intended to be built on the information provided by the inventories, agencies cannot ensure the reliability of their savings and utilization forecasts until the underlying inventories have been completed. While it is important that reported data are sufficiently complete, accurate, and consistent, OMB’s guidance on agency inventories does not require agencies to document what they did to verify their data, or to disclose any limitations on that data. Nonetheless, several agencies took informal steps to validate their data. For example, Department of Agriculture (Agriculture) officials stated that they interviewed staff who submitted inventory information and conducted on-site visits of data centers. Additionally, Department of Commerce (Commerce) officials reported that they reviewed the inventory data and clarified missing or suspect entries with those who submitted the information. Also, a Department of the Treasury (Treasury) official stated that there were two rounds of data verification and that Treasury bureaus were sometimes asked to verify submitted information.
However, officials from other agencies, such as Defense, Energy, and NASA, confirmed that their inventories had not been verified. Further, in some cases, such as with the Department of Health and Human Services (HHS) and NASA, agency officials reported that their inventory information was estimated. Notwithstanding agencies’ informal verification efforts, complete, accurate, and consistent performance information will be important to OMB to guide its decisions on how best to oversee federal data center consolidations. Without an understanding of the validity and limitations on agencies’ data, OMB cannot be assured that agencies are providing a sound baseline for estimating savings and measuring progress against their goals. In addition to the agencies’ inventories, OMB required agencies to establish consolidation plans that address key elements, including quantitative goals, qualitative impacts, approach, scope, timeline, and master schedule, as well as summaries of a cost-benefit analysis, performance metrics, risk management, and communications planning. OMB noted the importance of agencies’ consolidation plans in providing a technical road map and approach for achieving specified targets for infrastructure utilization, energy efficiency, and cost efficiency. Table 3 provides a detailed description of each of these elements. While 23 agencies submitted consolidation plans to OMB, selected elements are missing from each plan. For example, 22 agencies provide complete information on their qualitative impacts, but only 6 provide complete information on their quantitative goals. Further, while all 23 agencies specify their consolidation approach, only 5 indicate that a cost-benefit analysis was performed for the consolidation initiative. In many cases, agencies submitted some, but not all, of the required information. Figure 2 provides an assessment by element, and a discussion of each element follows the figure.
A detailed summary of the agencies’ status of completion of each key element is provided in appendix II. In addition, this information is provided for each agency in appendix III. Quantitative goals. Six agencies provide complete savings and utilization forecasts and 17 agencies provide partial forecasts. For example, Defense’s savings and utilization forecasts are incomplete, while Treasury and the Social Security Administration (SSA) provide complete savings forecasts, but incomplete utilization forecasts. Some agencies identified reasons for not having completed these forecasts. For example, Treasury’s plan states that the department’s savings and utilization targets do not include demand from new organizations required under recent legislation and that the plan will be updated as further information becomes available. The plan also notes that the forecasts could change when the department completes associated cost-benefit analyses. Additionally, NASA’s plan states that the agency is performing an assessment of its assets to form an accurate baseline, on which actual targets for reduction can be predicted. This plan also notes that as the agency’s asset and inventory information is improved, NASA will evaluate opportunities to further consolidate applications and virtualize operating systems. Qualitative impacts. Twenty-two agencies describe the qualitative impacts of their consolidation initiatives and 1 agency does not. For example, Agriculture’s plan describes goals such as reducing overall energy use and reducing the real estate footprint for data centers. Additionally, HHS reports that the consolidation effort will result in more efficient monitoring of data center power. Finally, the U.S. Agency for International Development (USAID) describes goals such as optimizing use of IT funding by increasing efficiencies and increasing the availability of resources and systems to the user community. However, Education does not provide any qualitative impacts. 
Although the department links its consolidation plan to a federal strategic sustainability plan, the sustainability plan does not contain qualitative impacts such as those required by OMB. Summary of consolidation approach. All 23 agencies include a summary of the agencies’ proposed consolidation approaches. For example, Commerce describes five approaches that will support the department’s FDCCI goals: consolidating and decommissioning data centers, increasing server virtualization and IT equipment utilization, moving to cloud computing, acquiring green products and services, and promoting “green IT.” Similarly, the Department of the Interior (Interior) provides four approaches to help realize its consolidation goals: decommissioning, consolidation, cloud computing, and server and storage virtualization. For a more detailed discussion of alternative data center consolidation approaches, see appendix IV. Scope of consolidation. Nineteen agencies’ plans include a well-defined scope for data center consolidation, 2 provide partial information on the scope of their consolidation efforts, and 2 do not provide this information. Specifically, the agencies that provide this information list the data centers included in the consolidation effort and what consolidation approach will be taken for the systems within each center (i.e., decommissioning, consolidation, cloud computing, or virtualization). For example, the Department of Veterans Affairs (VA) lists 87 data centers and Labor lists 20 data centers, in which all of the systems will be consolidated, decommissioned, or virtualized. Alternatively, Energy has not yet determined what action will be taken for each of its facilities and Interior identifies the total numbers of centers to be retained or expanded, but does not describe the consolidation approach for individual data centers.
Two agencies, the Department of Justice (Justice) and NASA, are still working to determine which of their data centers are to be consolidated. High-level timeline. Twenty agencies include a high-level timeline for consolidation efforts, 1 agency includes partial information on its timelines, and 2 do not provide timelines. For example, Labor and VA both provide the fiscal years in which every data center listed will be consolidated and the National Science Foundation (NSF) states what year the agency’s primary data center will be decommissioned and replaced with private and public cloud services. In contrast, Defense only describes broad goals to be accomplished by fiscal year 2013 and does not include specific milestones for each data center. Further, NASA does not include this information in its plan and notes that the agency is still working to determine its data center inventory. Performance metrics. Six agencies identify specific performance metrics for their consolidation programs, 4 agencies provide partial information on their metrics, and 13 agencies did not identify specific metrics. For example, both Transportation and GSA specify metrics such as savings in energy consumption, cost variance, and schedule variance. Alternatively, State reports that the department’s data center consolidation program maintains metrics at both the system and process performance levels, but does not provide any specifics as to the nature of those metrics. Further, although Defense does not provide metrics at the department level, the Air Force has developed a method to provide such measures. Master program schedule. Three agencies reference a completed master program schedule, and 20 do not. For example, while Agriculture, DHS, and Interior discuss their master schedules, other agencies, such as Commerce and HHS, do not. Some agencies, such as Defense, Labor, and the Nuclear Regulatory Commission (NRC), plan to develop them in the future. Cost-benefit analysis.
Five agencies provide a cost-benefit analysis that encompasses their entire consolidation initiative, 6 agencies provide only selected elements of a cost-benefit analysis, and 12 agencies do not provide a cost-benefit analysis. For example, DHS details full annualized investment and savings estimates through fiscal year 2015, while other agencies, such as State and OPM, provide only partial information. Specifically, State acknowledges that not all costs are accounted for in its analysis and OPM reports that its analysis is preliminary. Additionally, Commerce provides costs and savings for several data center consolidations but acknowledges that estimates cannot be provided for all of the department’s planned consolidation initiatives. Eight of the agencies that do not provide a cost-benefit analysis, such as HHS, Justice, and USAID, plan to conduct one in the future. Risk management. Fewer than half of the agencies both reference a consolidation risk management plan and require that risks be tracked. For example, HHS discusses its approach to risk management and identifies a series of technical, security, funding, and management risks and provides a mitigation strategy for each. Additionally, VA describes a five-phase approach to risk management that includes identifying and monitoring risks. However, Education requires that risks be tracked, but does not reference the existence of an actual risk management plan. Nine agencies do not reference a risk management plan or requirements for tracking risks. Communications plan. Eighteen agencies consider a communications strategy for the agencies’ consolidation initiatives, and 5 agencies do not. For example, Energy describes a series of coordinated activities that are intended to support the consolidation effort. Additionally, NASA details its approach to consolidation coordination and communication and SBA details individual communication responsibilities among consolidation stakeholders. 
However, Treasury and NRC do not describe such a communications strategy. When asked about the elements missing from their plans, many agency officials stated that they completed what they could within the timelines provided by OMB. Several agency officials noted that it was difficult to obtain all of the required data from component agencies, while others reported that their data collection efforts were made more difficult by OMB’s tight time frames and changes in templates and guidance. Moreover, officials from two agencies stated that some of the information contained in their plans had been estimated. However, OMB has not required agencies to complete the missing elements or to resubmit their final plans. According to an OMB official, agencies have been instructed to move forward with their consolidation initiatives, and as noted earlier, OMB intends to monitor the agencies’ progress annually. We have previously reported that without a clear description of the strategies and resources an agency plans to use in meeting its goals, it will be difficult to assess the likelihood of the agency’s success in achieving its intended results. In the absence of completed consolidation plans, agencies run the risk of moving forward on their respective initiatives with, among other things, poorly defined approaches and outcomes. Without this information, agencies may not realize anticipated cost savings, improved infrastructure utilization, and energy efficiency. In preparing agencies for the data center consolidation initiative, OMB held workshops that, among other things, discussed challenges that agencies might face so that they could anticipate and mitigate them. In addition, agencies identified multiple challenges they are facing during data center consolidations. These include challenges related to the data center consolidation initiative as well as those that are cultural, funding related, operational, and technical. Some challenges are more common than others. 
Specifically, the number of agencies reporting a particular challenge ranges from 1 to 19 agencies. Table 4 details the reported challenges as well as the numbers of agencies experiencing that challenge. The table is followed by a discussion of the most prevalent challenges. Agencies reported nine challenges that are specific to OMB’s data center consolidation initiative, including meeting tight FDCCI deadlines and obtaining power usage data as required by OMB. Specifically, 19 agencies reported that obtaining power usage data was a challenge. For example, a Commerce official stated that while more than half of the agency’s data centers have power consumption figures or costs associated with power usage, some of the agency’s facilities do not have metering capabilities for power consumption and that the agency has some server rooms that lack metering at the equipment level. Similarly, Labor’s Deputy CIO stated that the department does not have metering for data centers. Consequently, the agency used best practices to estimate how much power is being used at data centers. In addition, 11 agencies found that the tight FDCCI deadlines were a challenge. For example, Labor’s Deputy CIO stated that the time frames were overly aggressive and did not allow the agency to provide the information OMB requested or to complete the planning that is necessary for such an important undertaking. An Energy official stated that there is no quick path to consolidation and that the agency was faced with the decision of either moving forward with inadequate inventory information or taking more time to make decisions. The official stressed that the agency would like to consolidate in the correct manner. Agencies reported four cultural challenges to data center consolidation, including accepting cultural change and implementing consolidation in an organizational structure not geared towards consolidation (i.e., a decentralized enterprise).
The most prevalent challenge was acceptance of cultural change, with 15 agencies reporting it as a challenge. For example, an Agriculture official stated that there is a challenge in addressing cultural change surrounding data center consolidation. With data center consolidation, systems personnel may not be in the same location as the data. To address this, Agriculture refined its communications plan so that lessons learned can be passed on to other staff. An EPA official noted the agency experienced this challenge when employees were reluctant to cede control of resources under their immediate control. EPA mitigated that challenge by building relationships with stakeholders early on. In addition, 8 agencies reported that implementing the consolidation was challenged by their organizational structures. For example, a Justice official stated that the agency uses a federated IT approach within which the departmentwide CIO’s office has primary responsibility for architecture, common infrastructure, and standards decisions, while each component IT department has primary responsibility for application resource decisions. According to this official, such a federated approach offers Justice’s components more autonomy when making decisions, but creates obstacles for departmentwide efforts such as FDCCI. Further, an Energy official cited the department’s decentralized environment as a challenge in being able to collect data center inventory information. The data center consolidation initiative is supposed to result in cost savings, but multiple agencies reported challenges in funding their initiatives. For example, a Transportation official reported that the need for upfront funding for consolidation efforts was a challenge. A NASA official stated that the agency spent approximately $1.5 million on an asset management tool to assist the agency in creating its inventory. 
Further, a State official noted the challenge of having to fund the consolidation efforts long before cost savings will be realized. In addition, 9 agencies reported that identifying cost savings for consolidation efforts was a challenge. For example, an Energy official stated that it was too early in the consolidation process for Energy to be able to quantify cost savings, since the agency does not have data on cost or exactly which data centers will be closed. A State official stated that it is difficult to estimate cost savings since the department does not have information on power usage for all facilities. Further, a Treasury official noted challenges in identifying cost savings, particularly because a reduction in utilized square footage at a facility does not mean that the leasing agency will issue a refund check on the lease. Similarly, building managers at private facilities will typically not issue a refund if an agency begins using less energy. Agencies reported eight operational challenges to data center consolidation, including maintaining services during the consolidation transition and implementing cloud computing. Nine agencies reported that maintaining services during the consolidation transition is a challenge. For example, a Labor official stated that keeping the business running during the transition is a big concern of the agency. Three agencies reported that moving to cloud computing was a challenge. For example, a Commerce official stated that the agency’s biggest challenge was how to implement cloud computing. This official cited the need to investigate how private and public cloud computing will fit into the agency’s mission, and to determine how to manage security issues surrounding cloud computing. Agencies reported 10 technical challenges to data center consolidation, including maintaining the appropriate level of system security and planning the migration strategy. 
While agencies reported more technical challenges than any other type of challenge, these challenges are more diverse, with fewer agencies experiencing each individual challenge. Three agencies reported that maintaining the appropriate level of system security was a challenge. For example, an OPM official stated that one of the challenges faced was the need to protect personally identifiable information while exploring options such as cloud computing. In addition, a State official identified the challenge of including classified servers in the consolidation initiative. Two agencies reported that planning the migration strategy was a challenge. For example, an SSA official pointed out the difficulty in scheduling migration across approximately 1,500 field offices. One way agencies can manage challenges such as those listed above is through formal risk management processes. However, as noted in the prior section, fewer than half of the agencies included a discussion of risk management in their data center consolidation plans. We have previously reported on the importance of using lessons learned—a principal component of an organizational culture committed to continuous improvement. Sharing such information serves to communicate acquired knowledge more effectively and to ensure that beneficial information is factored into planning, work processes, and activities. Lessons learned also provide a powerful method of sharing good ideas for improving work processes, facility or equipment design and operation, quality, safety, and cost-effectiveness. OMB posted lessons learned by three states on its data center consolidation Web page for federal agencies to review. However, there is more that agencies can learn. Many state governments have undertaken data center consolidation initiatives in recent years. Although they have encountered unique challenges, they have also encountered challenges similar to those reported by federal agencies.
Specifically, the National Association of State CIOs and a literature search identified 20 states that reported on challenges they faced, or lessons they learned, from their data center consolidation initiatives. Of these, 19 reported lessons learned that could be leveraged at the federal level. For example, officials from North Carolina reported that organizations are typically concerned that by consolidating data centers, they will lose control of their data, service levels will decline, or costs will rise. The state learned that, to help mitigate these concerns during consolidation, they should be documented, validated, and addressed. In another example, a West Virginia official reported that since the state had no funding for consolidation, it had to be creative in executing the effort. The state used the natural aging cycle of hardware to force consolidation; that is, when a piece of hardware was ready to be replaced, the applications and software were put onto a consolidated server. As a final example, two states reported lessons learned that could be applied to the challenge of providing a quality asset inventory. Officials from Utah and Texas emphasized the importance of having an accurate inventory of all equipment that could be affected by the project. The official from Texas added that it is important to have a third party collect technical data across agencies, in order to ensure objectivity and consistency. Table 5 identifies lessons learned by states that could be applied by federal agencies. With agencies reporting almost 1,600 federal data centers, OMB’s goal of consolidating 800 centers by 2015 is ambitious. To its credit, OMB has established an accountability infrastructure through its data center consolidation task force, composed of representatives from each of the participating agencies.
OMB and federal agencies have also taken important steps to reduce the number and increase the efficiency of the federal data centers. However, only one agency has completed its required data center asset inventory, no agencies have completed their consolidation plans, and OMB has not required that agency inventory information be verified. Despite these limitations, OMB has instructed agencies to move forward with their plans. Moving forward to consolidate obviously redundant or underutilized centers is warranted—and should result in immediate cost savings and increased efficiency. However, without a complete asset inventory and a comprehensive plan, agencies are at increased risk that they will be ill-prepared to manage such a significant transformation. This could slow the consolidations and reduce expected savings and efficiencies. In moving ahead in their consolidation efforts, agencies are encountering challenges, including those that are technical, operational, and cultural in nature. Some state governments have also engaged in data center consolidation initiatives and dealt with similar obstacles in doing so. By virtue of these experiences, these states can offer insights and suggestions that federal agencies can use to mitigate their challenges and risks. In doing so, agencies will be better positioned to address their consolidation goals and to meet OMB’s goals for reducing the number and cost of federal data centers. To better ensure that the federal data center consolidation initiative improves governmental efficiency and achieves cost savings, we are making four recommendations to OMB. 
Specifically, we are recommending that the Director of the Office of Management and Budget direct the Federal Chief Information Officer to
● require that agencies, when updating their data center inventories in the third quarter of each fiscal year, state what actions have been taken to verify the inventories and to identify any limitations of this information;
● require that agencies complete the missing elements in their respective plans and submit complete data center consolidation plans, or provide a schedule for when they will do so, by September 30, 2011;
● require agencies to consider consolidation challenges and lessons learned when updating their plans; and
● utilize the existing accountability infrastructure by requiring the Data Center Consolidation Task Force to assess agency consolidation plans to ensure they are complete and to monitor the agencies’ implementation of their plans.
In addition, we are making two recommendations to each of the department secretaries and agency heads of the 23 departments and agencies participating in the federal data center consolidation initiative. Specifically, we are recommending that the secretaries and agency heads
● direct their component agencies and their data center consolidation program managers to complete the missing elements in their respective data center consolidation inventories and plans; and
● require their data center consolidation program managers to consider consolidation challenges and lessons learned when updating their consolidation plans.
We received comments on a draft of our report from OMB and the 23 agencies to which we made recommendations. Most agencies generally agreed with our recommendations.
Specifically, in commenting on the draft, 15 agencies agreed with our recommendations; 4 agreed with the report’s content or findings, but offered no comments on the recommendations; 3 offered no comments on the report’s findings or recommendations; and Defense and SSA each disagreed with one of our recommendations but agreed with the other. Agencies also provided technical comments, which we incorporated as appropriate. Each agency’s comments are discussed in more detail below. In comments provided via e-mail, an OMB official from the General Counsel Office wrote that the agency generally agreed with our report. The agency offered no comments on our recommendations. In comments provided via e-mail, Agriculture’s CIO agreed with our recommendations and noted that our assessment of USDA’s inventory was accurate. In written comments, the Secretary of Commerce concurred with the general findings as they apply to the department and with specific reporting on the department’s data center consolidation plan. The Secretary offered no comments on our recommendations, but noted that Commerce plans to address GAO’s finding on the department’s consolidation master program schedule in the next version of its consolidation plan. Commerce’s written comments are provided in appendix V. In written comments, Defense’s CIO partially concurred with one of our recommendations and concurred with the second. Specifically, regarding our recommendation that the department complete the missing elements from its data center inventory and consolidation plan, the CIO cited the importance of completing consolidation metrics and noted that many of the department’s centers and buildings are not equipped to meter energy usage and that using incomplete estimates of such usage would result in inaccurate extrapolations of cost savings. However, OMB addressed such concerns in its guidance on the FDCCI, noting alternative means by which agencies could develop energy utilization estimates.
OMB further recognized that these estimates may need time to become more accurate. As such, we believe our recommendation is reasonable and appropriate. Defense’s written comments are provided in appendix VI. In written comments, Education’s CIO concurred with one recommendation and outlined plans to address the second. The CIO noted the department’s plans to complete, to the extent practicable, the missing information in Education’s data center inventory and consolidation plan. The CIO also cited the department’s intent, when updating its annual consolidation plan, to consider relevant consolidation challenges and lessons learned. Education’s written comments are provided in appendix VII. In written comments, Energy’s Director of the Corporate IT Project Management Office agreed with our assessment of Energy’s data center consolidation plan and offered no comments on our recommendations. The Director cited a series of planned actions by the department intended to gather missing information in Energy’s data center inventory, update the department’s consolidation plan, and document its data center management best practices. Energy’s written comments are provided in appendix VIII. In written comments, HHS’ Assistant Secretary for Legislation stated that the draft accurately depicts the HHS data center consolidation plan as it was delivered to OMB in August 2010 and noted that the agency has continued to improve its inventory and make progress on its data center consolidation goals since that time. Further, the Assistant Secretary outlined a series of actions planned by the department to complete HHS’ data center inventory and consolidation plan. The department did not offer comments on our recommendations. HHS’ written comments are provided in appendix IX. In written comments, DHS’s Director of the Departmental GAO/OIG Liaison Office concurred with our recommendations.
Further, the Director outlined the department’s planned actions to complete the missing information from its data center inventory and noted that the department is working to share its consolidation lessons learned with, among others, the Federal Chief Information Officers Council. DHS’s written comments are provided in appendix X. In written comments, Interior’s Assistant Secretary for Policy, Management, and Budget concurred with our findings and recommendations. The Assistant Secretary also noted that the department is continuing to refine its data center inventory and consolidation goals. Interior’s written comments are provided in appendix XI. In comments provided via e-mail, the Justice Audit Liaison concurred with our recommendations. Justice also provided technical comments, which we incorporated as appropriate. In written comments, Labor’s Assistant Secretary for Administration and Management stated that, after carefully reviewing the draft report, the department did not have any comments to contribute. Labor’s written comments are provided in appendix XII. In written comments, State’s Chief Financial Officer concurred with our recommendations and outlined a series of actions planned by the department to complete State’s data center inventory and consolidation plan. State’s written comments are provided in appendix XIII. In comments provided via e-mail, Transportation’s Deputy Director of Audit Relations stated that the department had no comments on the report and agreed to consider our recommendations. In written comments, Treasury’s Deputy Assistant Secretary for Information Systems and Chief Information Officer did not provide comments on our recommendations, but noted that Treasury has started its annual data center inventory collection, which will address missing data elements, and that Treasury intends to collect and leverage data center consolidation challenges and lessons learned when updating the department’s consolidation plans. 
Treasury’s written comments are provided in appendix XIV. In written comments, VA’s Chief of Staff generally agreed with the findings and concurred with our recommendations. Further, the Chief of Staff noted planned actions to complete missing information from the department’s consolidation plan and to supplement updates to the plan with narrative responses on consolidation challenges and lessons learned. The Chief of Staff also noted that the department is continuing to refine its data center inventory, consolidation goals, and consolidation timeline. VA’s written comments are provided in appendix XV. In written comments, EPA’s Assistant Administrator and Chief Information Officer did not agree or disagree with our recommendations, but did offer clarification on the agency’s plans to address our recommendations. Specifically, in relation to our recommendation to complete missing elements of the agency’s consolidation plan, the Assistant Administrator clarified that the majority of EPA server rooms are located within leased space managed by GSA. As such, EPA will be able to estimate operating electrical use for these rooms, but the cost-benefit analysis will not reflect a reduction in real estate costs or electricity consumption because such reductions would not result in cost savings to EPA. Further, the CIO asserted that we had mischaracterized the agency’s overall consolidation plan, both in stating that EPA does not plan to further consolidate its four data centers and in our description of the actions the agency plans to take within those four centers. Specifically, EPA stated that we did not provide adequate detail about EPA’s existing infrastructure or the services that will be provided by these four existing centers. However, EPA’s data center consolidation plan states that the agency had four data centers as of the end of fiscal year 2010 and the agency plans to have four data centers at the end of fiscal year 2014.
As such, we maintain that our description of EPA’s broad consolidation goals is factual. EPA’s written comments are provided in appendix XVI. In written comments, GSA’s Administrator agreed with both our findings and our recommendations and stated that GSA would take actions commensurate with our recommendations. GSA’s written comments are provided in appendix XVII. In written comments, NASA’s CIO concurred with our recommendations. Further, the CIO cited ongoing work by NASA to complete the missing information in the agency’s data center inventory and consolidation plan. Additionally, the CIO noted the agency’s plans to consider consolidation challenges and lessons learned when updating consolidation plans. NASA’s written comments are provided in appendix XVIII. In comments provided via e-mail, NSF’s Acting CIO did not provide comments on our recommendations, but noted NSF’s planned actions to complete the missing information from the agency’s consolidation plan. In written comments, NRC’s Deputy Executive Director for Corporate Management within the Office of the Executive Director for Operations stated that the agency had no comments. NRC’s written comments are provided in appendix XIX. In comments provided via e-mail, OPM’s Deputy CIO concurred with our recommendations. The Deputy CIO noted that since OPM does not plan to consolidate further than its one data center, the agency’s consolidation focus will be to complete its asset inventory and explore ways to operate more efficiently. In comments provided via e-mail, the SBA Program Manager for the Office of Congressional and Legislative Affairs concurred with our recommendations. In written comments, SSA’s Deputy Chief of Staff disagreed with one recommendation and agreed with the second. 
Specifically, regarding our recommendation that the agency complete the missing elements from its data center inventory and consolidation plan, the Deputy Chief of Staff disagreed with our assessment of SSA’s asset inventory and consolidation plan, stating that SSA responded to OMB’s directive in a timely and satisfactory manner. The Deputy Chief of Staff further noted that because SSA does not plan to consolidate its two existing data centers, the plan elements we noted as missing were not applicable to SSA’s circumstances. However, in its asset inventory, SSA does not provide one of the OMB-specified consolidation approaches, which includes an option of “not applicable,” for any of the major and nonmajor systems in SSA’s data centers. While we acknowledge that SSA does not plan to consolidate from its two physical locations, OMB still required agencies to provide a consolidation approach for every identified system. Further, in a written response to OMB questions about SSA’s consolidation plan, SSA acknowledged that the agency planned to virtualize systems within one of its two locations. In light of this planned work, it is reasonable to assume that an agency would complete the important governance-related key plan elements we identified as missing, such as a master program schedule, a cost-benefit analysis, and a risk management plan. In its guidance on the FDCCI, OMB echoed this importance, noting that an agency’s governance framework needs to provide specific details about the oversight and internal mechanics that will measure and manage performance and risk of the consolidation implementation. As such, we believe our recommendation is reasonable and appropriate. SSA’s written comments are provided in appendix XX. In comments provided via e-mail, the liaison from USAID’s Office of the Chief Financial Officer concurred with our recommendations.
As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to interested congressional committees; the Director of OMB; the secretaries and agency heads of the departments and agencies addressed in this report; and other interested parties. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions on the matters discussed in this report, please contact me at (202) 512-6253 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix XXI. Our objectives were to (1) assess whether agency consolidation documents include adequate detail, such as performance measures and milestones, for agencies to consolidate their centers; (2) identify the key challenges reported by agencies in consolidating centers; and (3) evaluate whether lessons learned during state government consolidation efforts could be leveraged to mitigate challenges at the federal level. For this governmentwide review, we assessed the 24 departments and agencies (agencies) that were identified by the Office of Management and Budget (OMB) and the Federal Chief Information Officer (CIO) to be included in the Federal Data Center Consolidation Initiative (FDCCI). Table 6 lists these agencies. To evaluate the agencies’ data center inventories and consolidation plans, we reviewed OMB’s guidance and identified key required elements for each. We compared agency consolidation inventories and plans to OMB’s required elements, and identified gaps and missing elements. 
We rated each element as “Yes” if the agency provides complete information; “Partial” if the agency provides some, but not all, of the information; and “No” if the agency does not provide the information. We followed up with agencies to clarify our initial findings and to determine why parts of the inventories and plans were incomplete or missing. We assessed the reliability of the data agencies provided in their data center inventories and plans. Specifically, we interviewed agency officials to determine how the data in the inventories and plans had been collected and their processes for ensuring the reliability of the data contained in these inventories. We reviewed the inventories and plans for omissions, outliers, and typographic mistakes. We compared inventory summary data contained in the consolidation plans to inventories and noted any inconsistencies. In doing so, we found multiple gaps in agency-provided data. We also found that almost half of the agencies had not taken steps to verify their inventory data. We have reported on these limitations in the body of this report. To identify the key challenges encountered by agencies in consolidating data centers, we analyzed available literature on data center consolidation challenges and interviewed agency officials to determine what challenges to consolidation had been encountered. We then categorized the agency-reported challenges to determine which ones were encountered most often. To evaluate whether lessons learned during state government consolidation efforts could be leveraged to mitigate challenges at the federal level, we conducted a literature search for information on state experiences in data center consolidation and interviewed the National Association of State Chief Information Officers regarding states’ experiences with data center consolidation. These sources identified 20 states that reported challenges or lessons learned from their data center consolidation initiatives.
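The Yes/Partial/No rating described above amounts to a simple completeness check over the required items of each element. The sketch below is illustrative only, not GAO's actual methodology tooling; the function and field names are hypothetical, and GAO's review was performed by analysts rather than by code.

```python
# Illustrative sketch (hypothetical names) of the Yes/Partial/No rating
# applied to one key element of an agency submission.

def rate_element(provided_fields, required_fields):
    """Rate one key element of an inventory or consolidation plan.

    Returns "Yes" if every required item is present, "Partial" if some
    but not all are present, and "No" if none are present. A value of
    None or an empty value counts as missing.
    """
    present = [f for f in required_fields if provided_fields.get(f)]
    if len(present) == len(required_fields):
        return "Yes"
    return "Partial" if present else "No"

# Example: a hypothetical inventory element with two of three items filled in.
submission = {"hardware_count": 120, "software_list": ["OS", "DBMS"], "location": None}
print(rate_element(submission, ["hardware_count", "software_list", "location"]))
# prints "Partial"
```

In the tables that follow, these three ratings correspond to the ● (complete), ◐ (partial), and ○ (no information) symbols used in the assessment key.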
We sought clarification on challenges and lessons learned through e-mail and interviews with state officials. We compared states’ challenges and lessons learned to the challenges facing federal agencies in order to identify which lessons learned could be applied to federal consolidation efforts. We conducted our work at multiple agencies’ headquarters in the Washington, D.C., metropolitan area. We conducted this performance audit from August 2010 to July 2011, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Table 7 identifies whether each agency provides complete, partial, or no information for each key element of its data center consolidation plan. As part of its data center consolidation initiative, OMB required 24 federal departments and agencies to submit a data center inventory and a data center consolidation plan. Key elements of the inventory were to include, for each data center, information on IT hardware, IT software, facilities/energy/storage, and geographic location. Key elements of the plan were to include information on quantitative goals, qualitative impacts, consolidation approach, consolidation scope, timeline, performance metrics, master schedule, cost-benefit analysis, risk management, and consideration of a communications plan. For each agency, the following sections provide a brief summary of its goal for reducing the number of data centers and an assessment of the completeness of its inventory and plan.
The following information describes the key that we used in tables 8 through 30 to convey the results of our assessment of the agencies’ compliance with OMB’s requirements for the FDCCI.
● The agency provides complete information for this element.
◐ The agency provides some, but not all, aspects of the element.
○ The agency does not provide information for this element.
Agriculture plans to consolidate from 46 data centers to 7 by fiscal year 2015. However, the agency’s asset inventory and consolidation plan are not complete. In its asset inventory, the department provides complete information for 1 element and partial information for the remaining 3 elements. Additionally, in its consolidation plan, Agriculture provides complete information for 8 of the 10 elements, partial information for 1 element, and does not provide information for the remaining element. Table 8 provides our assessment of Agriculture’s compliance with OMB’s requirements. Commerce plans to consolidate from 41 data centers to 23 by fiscal year 2015. However, the agency’s asset inventory and consolidation plan are not complete. In its inventory, the department provides complete information for 3 of the 4 key elements and partial information for the remaining element. Additionally, in its consolidation plan, Commerce provides complete information for 8 of the 10 elements evaluated, provides partial information for 1 element, and does not provide information for the remaining element. Table 9 provides our assessment of Commerce’s compliance with OMB’s requirements. Defense plans to consolidate from 772 data centers to 532 by fiscal year 2013. However, the agency’s asset inventory and consolidation plan are not complete. In its asset inventory, Defense provides only partial information for all 4 key elements.
Additionally, in its consolidation plan, Defense provides complete information for 3 of the 10 elements evaluated, provides partial information for 5 elements, and does not provide information for the remaining 2 elements. A Defense official explained that because the department is decentralized and is fighting multiple wars, it was difficult to meet OMB’s extremely short deadlines. The official also noted that OMB’s changing templates and definitions made it more difficult to compile the needed information. Table 10 provides our assessment of Defense’s compliance with OMB’s requirements. Education does not have plans to consolidate any of its three data centers before fiscal year 2015. Rather, the agency plans to increase server virtualization within its centers. However, Education’s asset inventory and consolidation plan are not complete. In its asset inventory, the agency provides complete information for 3 key elements and provides partial information for the remaining element. Additionally, in its consolidation plan, Education provides complete information for 4 of the 10 elements evaluated, provides partial information for 3 elements, and does not provide information for the remaining 3 elements. Education officials stated that they did not provide selected plan elements because they are not applicable given the agency’s focus on virtualization rather than consolidation. Table 11 provides our assessment of Education’s compliance with OMB’s requirements. Energy plans to consolidate from 31 data centers to 25 by fiscal year 2015. However, the agency’s asset inventory and consolidation plan are not complete. In its asset inventory, Energy reports that although it began identifying contractor-operated data centers, this initiative was not completed in time to be included in the inventory.
Additionally, in its consolidation plan, Energy provides complete information for 3 of the 10 elements evaluated, provides partial information for 2 elements, and does not provide information for the remaining 5 elements. Table 12 provides our assessment of Energy’s compliance with OMB’s requirements. HHS plans to consolidate from 185 data centers to 131 by fiscal year 2015. However, the agency’s asset inventory and consolidation plan are not complete. In its asset inventory, the department provides partial information for all 4 key elements. Additionally, in its consolidation plan, HHS provides complete information for 6 of the 10 elements evaluated, provides partial information for 1 element, and does not provide information for the remaining 3 elements. Agency officials stated that they are working to complete the elements that are missing or incomplete. Table 13 provides our assessment of HHS’ compliance with OMB’s requirements. DHS plans to consolidate from 43 data centers to 2 by fiscal year 2014. However, the agency’s asset inventory and consolidation plan are not complete. In its asset inventory, DHS provides partial information for all 4 key elements. Additionally, in its consolidation plan, the department provides complete information for 8 of the 10 elements evaluated, provides partial information for 1 element, and does not provide information for the remaining element. Table 14 provides our assessment of DHS’s compliance with OMB’s requirements. The Department of Housing and Urban Development did not submit a data center inventory or consolidation plan. Instead, it submitted a letter that asserts that the department does not own any data centers and has no arrangements to take ownership of any data centers at the end of any contracts. Interior plans to consolidate from 95 data centers to 5 by fiscal year 2015. However, the agency’s asset inventory and consolidation plan are not complete. 
In its asset inventory, Interior provides complete information for 1 key element and partial information for the remaining 3 elements. In its consolidation plan, Interior provides complete information for 8 of the 10 elements evaluated, and provides partial information for 2 elements. Interior officials stated that they are working to complete the elements that are missing or incomplete. Table 15 provides our assessment of Interior’s compliance with OMB’s requirements. Justice plans to consolidate from 65 data centers to 50 by fiscal year 2015. However, the agency’s asset inventory and consolidation plan are not complete. In its asset inventory, Justice provides complete information for 1 key element and partial information for the remaining 3 elements. Additionally, in its consolidation plan, Justice provides complete information for 5 of the 10 elements evaluated, provides partial information for 2 elements, and does not provide information for the remaining 3 elements. Table 16 provides our assessment of Justice’s compliance with OMB’s requirements. Labor plans to consolidate from 20 data centers to 18 by fiscal year 2015. However, the agency’s asset inventory and consolidation plan are not complete. In its asset inventory, Labor provides complete information for 1 key element and provides partial information for the remaining 3 elements. Additionally, in its consolidation plan, Labor provides complete information for 7 of the 10 elements evaluated, provides partial information for 1 element, and does not provide information for the remaining 2 elements. Table 17 provides our assessment of Labor’s compliance with OMB’s requirements. State plans to consolidate from 13 data centers to 6 by fiscal year 2015. However, the agency’s asset inventory and consolidation plan are not complete. In its asset inventory, State provides complete information for 3 of the key elements and provides partial information for the remaining element. 
Additionally, in its consolidation plan, State provides complete information for 6 of the 10 elements evaluated, provides partial information for 3 elements, and does not provide information for the remaining element. An agency official stated that they have a master program schedule and performance metrics, but acknowledged that they did not provide them to OMB as part of their consolidation plans. Table 18 provides our assessment of State’s compliance with OMB’s requirements. Transportation plans to consolidate from 35 data centers to 31 by fiscal year 2015. However, the agency’s asset inventory and consolidation plan are not complete. In its inventory, Transportation provides partial information for all 4 key elements, noting that in some instances, data center owners did not provide the requested information. Additionally, in its consolidation plan, Transportation provides complete information for 9 of the 10 elements evaluated and does not provide information for the remaining element. Table 19 provides our assessment of Transportation’s compliance with OMB’s requirements. Treasury plans to consolidate from 42 data centers to 29 by fiscal year 2015. However, the agency’s asset inventory and consolidation plan are not complete. In its inventory, Treasury provides complete information for 1 key element and provides partial information for the remaining 3 elements. Additionally, in its consolidation plan, Treasury provides complete information for 5 of the 10 elements evaluated, provides partial information for 1 element, and does not provide information for the remaining 4 elements. An agency official stated that the agency is working to complete the missing or incomplete items. Table 20 provides our assessment of Treasury’s compliance with OMB’s requirements. VA plans to consolidate 87 data centers into 4 by fiscal year 2015. However, the agency’s asset inventory and consolidation plan are not complete. 
In its inventory, VA provides partial information for all 4 key elements. Additionally, in its consolidation plan, VA provides complete information for 6 of the 10 elements evaluated, provides partial information for 2 elements, and does not provide information for the remaining 2 elements. Table 21 provides our assessment of VA’s compliance with OMB’s requirements. EPA does not plan to further consolidate its four primary data centers. Instead, the agency plans to focus its consolidation efforts on achieving efficiencies via virtualization within those four centers. However, the agency’s asset inventory and consolidation plan are not complete. In its inventory, the agency provides complete information for 3 of the key elements and provides partial information for the remaining element. Additionally, in its consolidation plan, EPA provides complete information for 5 of the 10 elements evaluated, provides partial information for 1 element, and does not provide information for the remaining 4 elements. Table 22 provides our assessment of EPA’s compliance with OMB’s requirements. GSA plans to consolidate from 15 data centers to 3 by fiscal year 2015. However, the agency’s asset inventory and consolidation plan are not complete. In its inventory, the agency provides complete information for 2 of the key elements and provides partial information for the remaining 2 elements. Additionally, in its consolidation plan, GSA provides complete information for 8 of the 10 elements evaluated, provides partial information for 1 element, and does not provide information for the remaining element. Table 23 provides our assessment of GSA’s compliance with OMB’s requirements. NASA plans to consolidate from 79 data centers to 57 by fiscal year 2015. However, the agency’s asset inventory and consolidation plan are not complete. In its inventory, the agency provides partial information for 3 of the key elements and does not provide information for the remaining element. 
Additionally, in its consolidation plan, NASA provides complete information for 4 of the 10 elements evaluated, provides partial information for 1 element, and does not provide information for the remaining 5 elements. Table 24 provides our assessment of NASA’s compliance with OMB’s requirements. NSF owns and operates one data center and utilizes one commercial data center. The agency aims to transition all operations to one commercial data center by fiscal year 2014. The agency’s asset inventory is complete, but its consolidation plan is not. Specifically, NSF provides complete information for 7 of the 10 elements evaluated, and does not provide information for the remaining 3 elements. Agency officials stated that they have a master schedule and risk management plan, but acknowledged that they did not provide this information to OMB as part of their consolidation plan. Table 25 provides our assessment of NSF’s compliance with OMB’s requirements. NRC plans to consolidate from three existing data centers into one new center by fiscal year 2013. However, the agency’s asset inventory and consolidation plan are not complete. In its inventory, the agency provides complete information for 3 of the key elements and provides partial information for the remaining element. Additionally, in its consolidation plan, NRC provides complete information for 4 of the 10 elements evaluated, provides partial information for 1 element, and does not provide information for the remaining 5 elements. Table 26 provides our assessment of NRC’s compliance with OMB’s requirements. OPM does not plan to further consolidate its one data center. Instead, the agency plans to continue to examine and execute ways to improve the efficiency of its IT operations, such as through virtualization. However, the agency’s asset inventory and consolidation plan are not complete. In its inventory, the agency provides complete information for 1 key element and partial information for the remaining 3 elements. 
Additionally, in its consolidation plan, OPM provides complete information for 5 of the 10 elements evaluated, provides partial information for 1 element, and does not provide information for the remaining 4 elements. Table 27 provides our assessment of OPM’s compliance with OMB’s requirements. SBA plans to reduce its number of data centers from four to two by fiscal year 2015. However, the agency’s asset inventory and consolidation plan are not complete. In its inventory, the agency provides complete information for 1 key element and partial information for the remaining 3 elements. Additionally, in its consolidation plan, SBA provides complete information for 5 of the 10 elements evaluated, partial information for 1 element, and does not provide information for the remaining 4 elements. Table 28 provides our assessment of SBA’s compliance with OMB’s requirements. SSA does not plan to further consolidate its two data centers. In line with the goals of the FDCCI, the agency plans to improve the efficiency, performance, and stability of its IT infrastructure by reducing the number of its remote operations control centers. However, the agency’s asset inventory and consolidation plan are not complete. In its inventory, the agency provides partial information for all 4 key elements. Additionally, in its consolidation plan, SSA provides complete information for 4 of the 10 elements evaluated, partial information for 1 element, and does not provide information for the remaining 5 elements. Table 29 provides our assessment of SSA’s compliance with OMB’s requirements. USAID plans to consolidate from two data centers into one by fiscal year 2015. However, the agency’s asset inventory and consolidation plan are not complete. In its inventory, the agency provides complete information for 1 key element and partial information for the remaining 3 elements. 
Additionally, in its consolidation plan, USAID provides complete information for 5 of the 10 elements evaluated, provides partial information for 1 element, and does not provide information for the remaining 4 elements. Table 30 provides our assessment of USAID’s compliance with OMB’s requirements. In its guidance on data center consolidations, OMB identified four approaches for agencies to consider while evaluating the feasibility of consolidating the individual systems found within each data center. OMB directed agencies to specify which of these approaches was to be utilized for each data center. The four approaches are as follows:

Decommissioning: the system is no longer in use or it is redundant and will be decommissioned.

Consolidation: the system will be consolidated onto a shared infrastructure with other similar systems.

Cloud computing: the system will be migrated to or replaced by Internet-based services and resources.

Virtualization: the system will be migrated to a virtual machine environment.

In response to OMB’s guidance, agencies reported that they will pursue a variety of consolidation approaches. Agency-specific examples of how these approaches will be employed are provided below. Agencies may choose to decommission their underutilized physical servers as a part of their data center consolidation plans. For example, EPA plans to decommission more than 900 physical servers by 2015. Also, Labor plans to decommission unused servers and storage hardware and replace inefficient hardware with “green IT” hardware. Further, GSA plans to identify and decommission inefficient and underutilized legacy servers and equipment. Agencies can also choose to decommission an entire data center by moving to an outsourced data center or reducing the number of physical assets. For example, as part of its data center consolidation initiative, NSF plans to decommission its single data center by fiscal year 2014 and to move to a commercial facility.
Transportation plans to decommission data centers that are spread across multiple buildings and reduce the department’s number of data centers by approximately 25 percent by the close of fiscal year 2015. Consolidation is a means of combining workload onto fewer computers or concentrating data processing into fewer physical facilities. Physically moving data processing equipment from multiple locations to a smaller number of locations can assist agencies in reaching consolidation goals, such as reducing the cost of data center hardware, software, and operations, in addition to real estate and energy costs. For example, DHS has 43 principal data centers, all of which will be moved into one of two enterprise data centers by the end of fiscal year 2014. In addition, NASA plans to consolidate from 79 data centers to 57 by fiscal year 2015. Cloud computing is an emerging form of computing that relies on Internet-based services and resources to provide computing services to customers, while freeing them from the burden and costs of maintaining the underlying infrastructure. This approach is a form of delivering IT services that takes advantage of several broad evolutionary trends, including the use of virtualization; the decreased cost and increased speed of networked communications, such as the Internet; and overall increases in computing power. Examples of cloud computing include Web-based e-mail applications and common business applications that are accessed online through a browser instead of through a local computer. Several agencies are considering both cloud computing and virtualization as a means of achieving their consolidation goals. For example, SBA has plans to migrate commodity computing services such as Web hosting and messaging to cloud solutions. We have recently reported on challenges associated with the implementation of cloud computing.
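Taken together, OMB’s four approaches amount to a per-system disposition decision recorded in each agency’s inventory. A minimal sketch of tallying those decisions is shown below; the inventory records and field names are hypothetical illustrations, not OMB’s actual reporting schema.

```python
# Hypothetical sketch: tag each system in a data center inventory with one of
# OMB's four consolidation approaches and summarize the plan. Records and
# field names are illustrative, not OMB's reporting format.
from collections import Counter

APPROACHES = {"decommission", "consolidate", "cloud", "virtualize"}

def plan_summary(inventory):
    """Count systems per consolidation approach, flagging unclassified ones."""
    tally = Counter()
    for system in inventory:
        approach = system.get("approach")
        tally[approach if approach in APPROACHES else "unassigned"] += 1
    return dict(tally)

inventory = [
    {"name": "legacy-mail", "approach": "cloud"},           # move to Web-based e-mail
    {"name": "hr-portal", "approach": "virtualize"},        # migrate to a VM host
    {"name": "old-reporting", "approach": "decommission"},  # redundant system
    {"name": "file-share", "approach": "consolidate"},      # shared infrastructure
    {"name": "unknown-app", "approach": None},              # still unevaluated
]

print(plan_summary(inventory))
```

A summary like this also surfaces systems for which no feasibility decision has been made, which is the kind of gap the inventory reviews above were looking for.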
Virtualization allows multiple software-based virtual machines, each with its own operating system, to run on a single physical server. Virtual machines can be stored as files, making it possible to save a virtual machine and move it from one physical server to another. Virtualization is often used as part of cloud computing. For example, one Defense component reports that 45 percent of all server operating environments supporting customer workload in its data centers have been virtualized. State plans to reduce its environmental impact by hosting 70 percent of the department’s servers on virtual infrastructure by 2015. Also, NRC has virtualized 41 Windows-based applications and has identified 50 additional applications to be virtualized by 2013.

In addition to the contact named above, Colleen Phillips (Assistant Director), Neil Doherty, Rebecca Eyler, Nancy Glover, Dave Hinchman, Linda Kochersberger, and Jessica Waselkow made key contributions to this report.

Over time, the federal government's demand for information technology has led to a dramatic rise in the number of federal data centers and an increase in operational costs. Recognizing this increase, the Office of Management and Budget (OMB) has launched a governmentwide initiative to consolidate data centers. GAO was asked to (1) assess whether agency consolidation documents include adequate detail for agencies to consolidate their centers, (2) identify the key consolidation challenges reported by agencies, and (3) evaluate whether lessons learned during state government consolidation efforts could be leveraged at the federal level. To address these objectives, GAO assessed the completeness of agency inventories and plans, interviewed agencies about their challenges, and evaluated the applicability of states' consolidation lessons to federal challenges.
In launching its federal data center consolidation initiative, OMB required the 24 participating agencies to submit data center inventories and consolidation plans by the end of August 2010, and provided guidance on key elements to include in the inventories and plans--such as hardware and software assets, goals, schedules, and cost-benefit calculations. The plans indicate that agencies anticipate closing about 650 data centers by fiscal year 2015 and saving about $700 million in doing so. However, only one of the agencies submitted a complete inventory and no agency submitted complete plans. Further, OMB did not require agencies to document the steps they took, if any, to verify the inventory data. For example, in their inventories, 14 agencies do not provide a complete listing of data centers and 15 do not list all of their software assets. Also, in their consolidation plans, 20 agencies do not reference a master schedule, 12 agencies do not address cost-benefit calculations, and 9 do not address risk management. The reason for these gaps, according to several agency officials, was that they had difficulty completing their inventories and plans within OMB's timelines. Until these inventories and plans are complete, agencies may not be able to implement their consolidation activities and realize expected cost savings. Moreover, without an understanding of the validity of agencies' consolidation data, OMB cannot be assured that agencies are providing a sound baseline for estimating consolidation savings and measuring progress against those goals. Agencies identified multiple challenges during data center consolidation, including those that are specific to OMB's consolidation initiative as well as those that are cultural, funding-related, operational, and technical in nature. For example, in attempting to fulfill OMB's requirements, 19 agencies reported difficulty in obtaining power usage data. 
In addition, 9 agencies reported challenges in maintaining services during the transition to consolidated services. Moving forward, it will be important for agencies to focus on mitigating such challenges as they implement their consolidation plans. Many state governments have undertaken data center consolidation initiatives in recent years and have encountered challenges similar to those reported by federal agencies. Specifically, 19 states reported lessons learned that could be leveraged at the federal level. For example, a West Virginia official reported that since the state had no funding for data center consolidation, it used the natural aging cycle of hardware to force consolidation; that is, when a piece of hardware was ready to be replaced, the new applications and software were put onto a consolidated server. Also, officials from North Carolina reported that organizations are typically concerned that by consolidating data centers, they will lose control of their data, service levels will decline, or costs will rise. The state learned that during the process of consolidation, the organizations' concerns should be documented, validated, and addressed. GAO is recommending that the Federal Chief Information Officer, department secretaries, and agency heads take steps to ensure that agency data center inventories and consolidation plans are complete. Most agencies agreed with GAO's recommendations. Defense and SSA did not agree to complete all missing elements of their inventories and plans. Based on OMB guidance on the importance of these elements, GAO maintains that these recommendations are reasonable and appropriate.
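The element-by-element assessments in this report (4 key inventory elements and 10 plan elements, each rated complete, partial, or missing) reduce to a simple tally. The sketch below illustrates that tally; the ratings are hypothetical, not any particular agency's scores.

```python
# Hypothetical sketch of the completeness tally behind the agency-by-agency
# assessments: each element of an inventory or plan is rated complete,
# partial, or missing. Ratings below are illustrative only.

def completeness(ratings):
    """Summarize element ratings as complete/partial/missing counts."""
    counts = {"complete": 0, "partial": 0, "missing": 0}
    for rating in ratings:
        counts[rating] += 1
    return counts

# e.g. a consolidation plan that is complete for 5 of 10 elements, partial
# for 1, and missing the remaining 4 -- a pattern reported for several agencies
plan = ["complete"] * 5 + ["partial"] + ["missing"] * 4
print(completeness(plan))  # {'complete': 5, 'partial': 1, 'missing': 4}
```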
Since fiscal year 2000, DOD has significantly increased the number of major defense acquisition programs and its overall investment in them. During this same time period, acquisition outcomes have not improved. For example, in last year’s assessment of selected DOD weapon programs, we found that total acquisition costs for the fiscal year 2007 portfolio of major defense acquisition programs increased by $295 billion or 26 percent and development costs increased by 40 percent from first estimates—both of which are higher than the corresponding increases in DOD’s fiscal year 2000 portfolio. In most cases, the programs we assessed failed to deliver capabilities when promised—often forcing warfighters to spend additional funds on maintaining legacy systems. Our analysis showed that current programs experienced, on average, a 21-month delay in delivering initial capabilities to the warfighter, a 5-month increase over fiscal year 2000 programs as shown in table 2. Continued cost growth results in less funding being available for other DOD priorities and programs, while continued failure to deliver weapon systems on time delays providing critical capabilities to the warfighter. We are currently updating our analysis and intend to issue our assessment of DOD’s current portfolio later this month. Several underlying systemic problems at the strategic level and at the program level continue to contribute to poor weapon system program outcomes. At the strategic level, DOD does not prioritize weapon system investments and the department’s processes for matching warfighter needs with resources are fragmented and broken. DOD largely continues to define warfighting needs and make investment decisions on a service-by-service basis and assess these requirements and their funding implications under separate decision-making processes.
Ultimately, the process produces more demand for new programs than available resources can support, promoting an unhealthy competition for funds that encourages programs to pursue overly ambitious capabilities, develop unrealistically low cost estimates and optimistic schedules, and suppress bad news. Similarly, DOD’s funding process does little to prevent programs from going forward with unreliable cost estimates and lengthy development cycles, which is not a sound basis for allocating resources and ensuring program stability. Invariably, DOD and Congress end up continually shifting funds to and from programs—undermining well-performing programs to pay for poorly performing ones. At the program level, programs are started without knowing what resources will truly be needed and are managed with lower levels of product knowledge at critical junctures than expected under best practices standards. For example, in our March 2008 assessment, we found that only 12 percent of the 41 programs we reviewed had matured all critical technologies at the start of the development effort. None of the 26 programs we reviewed that were at or had passed their production decisions had obtained adequate levels of knowledge. In the absence of such knowledge, managers rely heavily on assumptions about system requirements, technology, and design maturity, assumptions that are consistently too optimistic. These gaps are largely the result of a lack of a disciplined systems engineering analysis prior to beginning system development, as well as DOD’s tendency to allow new requirements to be added well into the acquisition cycle. This exposes programs to significant and unnecessary technology, design, and production risks, ultimately driving cost growth and schedule delays. With high levels of uncertainty about technologies, design, and requirements, program cost estimates and related funding needs are often understated, effectively setting programs up for failure.
When DOD consistently allows unsound, unexecutable programs to pass through the requirements, funding, and acquisition processes, accountability suffers. Program managers cannot be held accountable when the programs they are handed already have a low probability of success. Moreover, program managers are not empowered to make go or no-go decisions, have little control over funding, cannot veto new requirements, have little authority over staffing, and are frequently changed during a program’s development. Consequently, DOD officials are rarely held accountable for these poor outcomes, and the acquisition environment does not provide the appropriate incentives for contractors to stay within cost and schedule targets, making officials strong enablers of the status quo. With regard to improving its acquisition of weapon systems, DOD has made changes consistent with the knowledge-based approach to weapons development that GAO has recommended in its work. In December 2008, DOD revised DOD Instruction 5000.02, which provides procedures for managing major defense acquisition programs in ways that aim to provide key department leaders with the knowledge needed to make informed decisions before a program starts and to maintain discipline once it begins. For example, the revised instruction includes procedures for the completion of key systems engineering activities before the start of the systems development, a requirement for more prototyping early in programs, and the establishment of review boards to monitor weapon system configuration changes. We have previously raised concerns, however, with DOD’s implementation of guidance on weapon systems acquisition. At the same time, DOD must begin making better choices that reflect joint capability needs and match requirements with resources. 
DOD’s investment decisions cannot continue to be driven by the military services that propose programs that overpromise capabilities and underestimate costs simply to start and sustain development programs. Recent congressional actions, including efforts by your committees, reflect the need for achieving better acquisition outcomes. We commend this Committee for forming a special panel on Defense Acquisition Reform to address broad issues surrounding the defense acquisition process, including how to evaluate performance and value in the current system, the root causes of system failures, the administrative and cultural pressures that lead to negative outcomes, and the reform recommendations of previous studies. The Senate Committee on Armed Services also has proposed legislation with provisions to strengthen DOD’s acquisition processes, including provisions to improve systems engineering, developmental testing, technology maturity assessments, independent cost estimates, and the role of the combatant commanders. DOD relies increasingly on contractors to support its missions and operations. For example, DOD estimated that more than 230,000 contractor personnel were supporting operations in Iraq and Afghanistan as of October 2008. Officials have stated that without a significant increase in its civilian and military workforce, the department is likely to continue to rely on contractors both in the United States and overseas. Contractors can provide important benefits, such as flexibility to fulfill immediate needs. But using contractors also comes with inherent risks, which must be mitigated through effective management. DOD’s reliance on contractors has not been the result of a strategic or deliberate process but instead resulted from thousands of individual decisions to use contractors in specific situations.
DOD’s longstanding guidance for determining the appropriate military, civilian, and contractor mix needed to accomplish the department’s mission focuses on individual decisions of whether to use contractors to provide specific capabilities and not the overarching question of what the appropriate role of contractors should be. We have repeatedly called for DOD to be more strategic in how it uses contractors. Without a fundamental understanding of when, where, and how contractors should or should not be used, DOD’s ability to mitigate the risks associated with using contractors is limited. Our work has highlighted risks, which include differing ethical standards, diminished institutional capacity, potentially greater costs, and mission risks. For example: Contractor employees often work side-by-side with government employees, performing such tasks as studying alternative ways to acquire desired capabilities, developing contract requirements, and advising or assisting on source selection, budget planning, and award-fee determinations. Contractor employees are generally not subject, however, to the same laws and regulations that are designed to prevent conflicts of interest among federal employees. Reliance on contractors can create mission risks when contractors are supporting deployed forces. For example, because contractors cannot be ordered to serve in contingency environments, the possibility that they will not deploy can create risks that the mission they support may not be effectively carried out. Further, if commanders are unaware of their reliance on contractors, they may not realize that substantial numbers of military personnel may be redirected from their primary responsibilities to provide force protection or assume functions anticipated to be performed by contractors, and commanders therefore may not plan accordingly.
The Chairman of the Joint Chiefs of Staff has directed the Joint Staff to examine the use of DOD service contracts (contractors) in Iraq and Afghanistan in order to better understand the range and depth of contractor capabilities necessary to support the Joint Force. One underlying premise of using contractors is that doing so will be more cost-effective than using government personnel. This assumption may not always be the case. In one instance, we found that the Army Contracting Agency’s Contracting Center of Excellence was paying up to 27 percent more for contractor-provided contract specialists than it would have for similarly graded government employees. Once the decision has been made to use contractors to support DOD’s missions or operations, it is essential that DOD clearly define its requirements and employ sound business practices, such as using appropriate contracting vehicles. Our work, however, has identified weaknesses in DOD’s management and oversight, increasing the government’s risk. For example, in June 2007, we found significant use of time-and-materials contracts. These contracts are considered high risk for the government because they provide no positive profit incentive to the contractor for cost control or labor efficiency and their use is supposed to be limited to cases where no other contract type is suitable. We found that DOD underreported its use of time-and-materials contracts; frequently did not justify why time-and-materials contracts were the only contract type suitable for the procurement; made few attempts to convert follow-on work to less risky contract types; and was inconsistent in the rigor with which contract monitoring occurred. In that same month, we reported that DOD needed to improve its management and oversight of undefinitized contract actions, under which DOD can authorize contractors to begin work and incur costs before reaching a final agreement on contract terms and conditions, including price.
The contractor has little incentive to control costs during this period, creating a potential for wasted taxpayer dollars. We found that the government’s federal procurement data system did not track undefinitized contract actions awarded under task or delivery order contracts. Moreover, we found that the use of some undefinitized contract actions could have been avoided with better acquisition planning, that DOD frequently did not definitize the undefinitized contract actions within the required time frames thereby increasing the cost risk to the government, and that contracting officers were not documenting the basis for the profit or fee negotiated, as required. In response to GAO’s recommendations relative to time-and-materials contracts and undefinitized contract actions, DOD has taken actions to limit risk to the government under both circumstances. Our previous work has also demonstrated that better collection and distribution of information on contract management could limit risks. For example: Our 2008 review of several Army service contracts found that contracting offices were not documenting contract administration and oversight actions taken in accordance with DOD policy and guidance. As a result, incoming contract administration personnel did not know whether the contractors were meeting their contract requirements effectively and efficiently and therefore were limited in their ability to make informed decisions related to award fees, which can run into the millions of dollars. In addition, several GAO reports and testimonies have noted that despite years of experience using contractors to support deployed forces in the Balkans, Southwest Asia, Iraq, and Afghanistan, DOD has made few efforts to systematically collect and share lessons learned regarding the oversight and management of contractors supporting deployed forces. As a result, many of the management and oversight problems we identified in earlier operations have recurred in current operations. 
Properly managing the acquisition of contractor services requires a workforce with the right mix of skills and capabilities. Individuals and organizations involved in the acquisition process include not just the contracting officers who award contracts, but also those military and civilian officials who define requirements, receive or benefit from the services provided, and oversee contractor performance, including the Defense Contract Audit Agency (DCAA) and the Defense Contract Management Agency (DCMA). We and others have raised questions about whether DOD has a sufficient number of trained acquisition and contract oversight personnel to meet its needs. For example, the increased volume of contracting is far in excess of the growth in DOD contract personnel. Between fiscal years 2001 and 2008, DOD obligations on contracts, measured in real terms, more than doubled to over $387 billion in total and to more than $200 billion just for services. Over the same time period, however, DOD reports its contracting career field grew by only about 1 percent, as shown in figure 1. In 2008, DOD completed an assessment of its civilian contracting workforce to provide a foundation for understanding the skills and capabilities of its current workforce and to determine how to close any gaps. DOD has not yet completed its assessments of the competencies and skills in the rest of its acquisition workforce. To facilitate improvements to DOD’s acquisition workforce, the National Defense Authorization Act for Fiscal Year 2008 required DOD to establish and dedicate funding to an Acquisition Workforce Development Fund. DOD is in the process of implementing this fund and has focused its efforts in three key areas: (1) recruiting and hiring, (2) training and development, and (3) retention and recognition.
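The "real terms" comparison above means that nominal obligations are deflated to constant dollars before growth is computed, so the doubling is not an artifact of inflation. The sketch below illustrates the arithmetic; the FY2001 figure and the deflator values are illustrative assumptions, not GAO's data (only the over-$387 billion FY2008 total comes from the text).

```python
# Sketch of a real-terms growth comparison: deflate nominal obligations to
# constant (base-year) dollars before computing the growth multiple.
# Deflator values and the FY2001 figure are illustrative assumptions.

def to_real(nominal_billions, deflator, base_deflator):
    """Convert nominal dollars to constant dollars of the base year."""
    return nominal_billions * base_deflator / deflator

# Illustrative figures: nominal obligations and a price index (base = FY2008)
fy2001_nominal, fy2001_deflator = 150.0, 0.85
fy2008_nominal, fy2008_deflator = 387.0, 1.00

fy2001_real = to_real(fy2001_nominal, fy2001_deflator, fy2008_deflator)
growth = fy2008_nominal / fy2001_real
print(f"FY2001 obligations in FY2008 dollars: ${fy2001_real:.0f}B; growth {growth:.1f}x")
```

Under these assumed inputs, obligations grow by more than a factor of two in constant dollars, consistent with the "more than doubled" characterization.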
We are currently assessing DOD’s ability to determine the sufficiency of its acquisition workforce and its efforts to improve its workforce management and oversight and will be issuing a report in the spring. Having too few contract oversight personnel presents unique difficulties at deployed locations, where the operational environment is more demanding than in the United States because of increased operational tempo, security considerations, and other factors. We and others have found significant deficiencies in DOD’s oversight of contractors because of an inadequate number of trained personnel to carry out these duties and the lack of training for military commanders and oversight personnel. As we testified in 2008, limited or no pre-deployment training on the use of contractor support can cause a variety of problems for military commanders in a deployed location, such as being unable to adequately plan for the use of those contractors and confusion regarding the military commanders’ roles and responsibilities in managing and overseeing contractors. Lack of training also affects the ability of contract oversight personnel to perform their duties. While performing oversight is often the responsibility of military service contracting officers or their representatives, DCAA and DCMA play key roles in the oversight process. DCAA provides a critical internal control function on behalf of DOD and other federal agencies by performing a range of contract audit services, including reviewing contractors’ cost accounting systems, conducting audits of contractor cost proposals and payment invoices, and providing contract advisory services to help assure that the government pays fair and reasonable prices. To be an effective control, DCAA must perform reliable audits.
In a report we issued in July 2008, however, we identified serious noncompliance with generally accepted government auditing standards at three field audit offices responsible for billions of dollars of contracting. For example, we found that workpapers did not support reported opinions and that sufficient audit work was not performed to support audit opinions and conclusions. As a result, DCAA cannot assure that these audits provided reliable information to support sound contract management business decisions or that contract payments are not vulnerable to significant amounts of fraud, waste, abuse, and mismanagement. The DCAA Director subsequently acknowledged agencywide problems and initiated a number of corrective actions. In addition, DOD included DCAA’s failure to meet professional standards as a material internal control weakness in its fiscal year 2008 agency financial report. We are currently assessing DCAA’s corrective actions and anticipate issuing a report later this spring. Similarly, DCMA provides oversight at more than 900 contractor facilities in the United States and across the world, providing contract administration services such as monitoring contractors’ performance and management systems to ensure that cost, performance, and delivery schedules comply with the terms and conditions of the contracts. DCMA has also assumed additional responsibility for overseeing service contracts in Iraq, Afghanistan, and other deployed locations, including contracts that provide logistical support and private security services. In a July 2008 report, we noted that DCMA had increased staffing in these locations only by shifting resources from other locations and had asked the services to provide additional staff since DCMA did not have the resources to meet the requirement. As a result, it is uncertain whether DCMA has the resources to meet its commitments at home and abroad.
GAO’s body of work on contract management and the use of contractors to support deployed forces has resulted in numerous recommendations over the last several years. In response, DOD has issued guidance to address contracting weaknesses and promote the use of sound business arrangements. For example, in response to congressional direction and GAO recommendations, DOD has established a framework for reviewing major services acquisitions; promulgated regulations to better manage its use of contracting arrangements that can pose additional risks for the government, including time-and-materials contracts and undefinitized contracting actions; and has efforts under way to identify and improve the skills and capabilities of its workforce. In November 2008, we reported that DOD has been developing, revising, and finalizing new joint policies and guidance on the department’s use of contractors to support deployed forces (which DOD now refers to as operational contract support) and has begun to develop training programs for non-acquisition personnel to provide information necessary to operate effectively on contingency contracting matters and work with contractors on the battlefield. As the department moves forward, it needs to ensure that guidance is fully complied with and implemented. Doing so will require continued, sustained commitment by senior leadership to translate policy into practice and to hold decision makers accountable. In addition, at the departmentwide level, DOD has yet to conduct the type of fundamental reexamination of its reliance on contractors that we called for in 2008. Without understanding the depth and breadth of contractor support, the department will be unable to determine if it has the appropriate mix of military personnel, DOD civilians, and contractors. As a result, DOD may not be fully aware of the risks it faces and will therefore be unable to mitigate those risks in the most cost-effective and efficient manner. 
Contract and project management challenges are not unique to DOD. DOE manages over 100 construction projects with estimated costs over $90 billion and 97 nuclear waste cleanup projects with estimated costs over $230 billion. DOE is the largest civilian contracting agency in the federal government, spending about 90 percent of its budget on contracts. It has about 14,000 employees to oversee the work of more than 93,000 contractor employees. While other DOE program offices have recently made progress, the National Nuclear Security Administration (NNSA), which is responsible for maintaining the safety and reliability of the nuclear weapons stockpile, remains on our High-Risk List for continued weaknesses in contract and project management. As the largest component organization within DOE, NNSA has an annual budget of approximately $9 billion for the management and security of the nation’s nuclear weapons, nuclear nonproliferation, and naval reactors programs. For the past 2 years, we have been reporting on the lack of sufficient action by NNSA as well as on specific projects that continue to face contract and project management challenges. For example, on March 4, 2009, we testified on, among other things, significant cost overruns and schedule delays on five of NNSA’s largest construction projects. These construction projects experienced cumulative cost increases of nearly $6 billion above the initial cost estimates and cumulative schedule delays in excess of 32 years beyond initial estimates. Though some of the cost growth and schedule delays can be attributed to the increased cost of materials and labor, most were the result of poor performance on the part of NNSA and its contractors. 
Specifically, we have found that NNSA in some instances failed to follow its own project guidance, produced internal cost and schedule estimates for projects that were not reliable, conducted insufficient and ineffective project reviews, relied on technologies without assessing their readiness, and lacked sufficient federal staffing and expertise for project management oversight. We have made a series of recommendations to strengthen DOE’s and NNSA’s contract management, which collectively call for the agencies to take the following actions: ensure that project management requirements are consistently followed, improve oversight of contractors, and strengthen accountability for performance. DOE and NNSA have generally agreed with our recommendations and, over the last 2 years, have been working to better understand the underlying weaknesses in contract and project management and develop appropriate corrective actions to address the weaknesses. As part of the Office of Management and Budget initiative for federal agencies to develop detailed corrective action plans for high-risk areas, DOE obtained input from headquarters and field officials, including NNSA officials, with contract and project management expertise to develop a root-cause analysis of NNSA’s weaknesses. DOE then used this analysis to develop a corrective action plan and performance measures to assess progress. However, we continue to believe that further improvements are needed. For example, as of the end of fiscal year 2008, NNSA had still not implemented any of the 21 recommendations we had made in January 2007 that were aimed, in part, at improving NNSA contractor oversight and project management. More recently, in a March 2, 2009, report issued to this Committee’s Strategic Forces Subcommittee, we found that NNSA and DOD have not effectively managed the project cost, schedule, and technical risks for programs to extend the lifetimes of two warheads in the nuclear weapons stockpile. 
We are concerned that weaknesses such as these, if left unaddressed, will affect NNSA’s plans to modernize its infrastructure and create a smaller, more responsive nuclear weapons complex, as NNSA and DOD have recently proposed. This effort, known as Complex Transformation, is expected to require tens of billions of dollars over several decades to complete. The administration is placing greater emphasis on the need to address contracting-related challenges governmentwide. President Obama has just issued an executive memorandum directing, in part, the Director of the Office of Management and Budget—in collaboration with the Secretary of Defense, the Administrator of the National Aeronautics and Space Administration, the Administrator of General Services, the Director of the Office of Personnel Management, and the heads of any other agencies that the Director of the Office of Management and Budget determines appropriate—to develop and issue government-wide guidance to assist agencies in reviewing, and creating processes for ongoing review of, existing contracts in order to identify contracts that are wasteful, inefficient, or otherwise unlikely to meet the agencies’ needs, and to formulate corrective action in a timely manner. Congress is also emphasizing the need to address government-wide contracting-related challenges. For example, in the National Defense Authorization Act for Fiscal Year 2008, Congress created the Commission on Wartime Contracting to study federal agency contracting for reconstruction, the logistical support of coalition forces, and the performance of security functions in Iraq and Afghanistan. The Senate Committee on Homeland Security and Governmental Affairs also recently announced the creation of a new Ad Hoc Subcommittee on Contracting Oversight. Supply chain management continues to be on our high-risk list as a result of weaknesses in DOD’s management of supply inventories and responsiveness to warfighter requirements. 
The availability of spare parts and other critical supply items that are procured and delivered through DOD’s supply chain network affects the readiness and capabilities of U.S. military forces and can affect the success of a mission. DOD reported spending approximately $178 billion on its supply chain in fiscal year 2007. While DOD has taken a number of positive steps toward improving its supply chain management, such as consolidating certain inventories in regional hubs and improving transportation management of military freight, it has continued to experience weaknesses in its ability to provide efficient and effective supply support to the warfighter. Consequently, the department has been unable to consistently meet its goal of delivering the “right items to the right place at the right time” to support the deployment and sustainment of military forces. For example, the military services continued to have billions of dollars’ worth of spare parts that were in excess of current requirements, representing a significant portion of their inventories. In our most recent reviews of inventory management, we found that the Army and Navy, over a 4-year period from fiscal years 2004 to 2007, averaged an annual total of $11 billion in inventory value (in constant fiscal year 2007 dollars) that exceeded current requirements. The Navy’s portion of the total—$7.5 billion—represented about 40 percent of the average annual value of its total inventory ($18.7 billion). The Army’s portion—$3.6 billion—represented 22 percent of the average annual value of its total inventory ($16.3 billion). A major cause of the services’ excess inventories was weakness in demand forecasting. Moreover, we noted a lack of metrics and targets focusing on the cost efficiency of inventory management. 
In addition, DOD had not instituted a coordinated management approach to improving distribution and supply support for joint military operations, and faced challenges in achieving widespread implementation of key technologies aimed at improving asset visibility. We have also reported that DOD, as it looked ahead to drawing down its forces from Iraq, lacked a unified or coordinated command structure to plan for the management and execution of the return of materiel and equipment from Iraq, worth approximately $16.5 billion. While the U.S. Central Command has recently taken steps to refine and solidify a theater logistics command to address these weaknesses, corrective actions have not yet been fully implemented. DOD has recognized the need for a comprehensive, integrated strategy for transforming logistics and in July 2008 released its Logistics Roadmap with the intent to provide a more coherent and authoritative framework for logistics improvement efforts, including supply chain management. However, we found that the road map was missing key elements that would make the information more useful for DOD’s senior leaders. First, it did not identify the scope of DOD’s logistics problems or gaps in logistics capabilities. Second, it lacked outcome-based performance measures that would enable DOD to assess and track progress toward meeting stated goals and objectives. Third, DOD had not clearly stated how it intended to integrate the road map into DOD’s logistics decision-making processes or who within the department was responsible for this integration. DOD has generally concurred with our recommendations, and in some cases has committed to take action or has taken action. For example, when DOD updates the Logistics Roadmap later this year, DOD plans to remedy some of the weaknesses we identified. 
To successfully resolve key supply chain management problems, DOD needs to:
- sustain top leadership commitment and long-term institutional support for the Logistics Roadmap and demonstrate progress in achieving the objectives in the road map;
- address the elements missing from its Logistics Roadmap, to ensure that the road map provides a comprehensive, integrated strategy for guiding supply chain management improvement efforts;
- conduct systematic evaluations of demand forecasting used for inventory management to identify and correct weaknesses, and establish goals and metrics for tracking and assessing the cost efficiency of inventory management;
- develop and implement a coordinated and comprehensive management approach to guide and oversee efforts across the department to improve distribution and supply support for U.S. forces in a joint theater;
- collect cost and performance data on the initial implementation of asset visibility technologies, analyze the return on investment for these technologies, and determine whether they have received sufficient funding priority; and
- take steps to fully implement DOD’s recent initiative to establish a unified or coordinated chain of command over logistics operations in support of the retrograde of equipment and materiel from Iraq, and correct incompatibility weaknesses in the various data systems used to maintain visibility of equipment and materiel while they are in transit.
Achieving and sustaining progress will require commitments and a coordinated management approach at the highest level of the department as well as within the military services and other DOD components. Efficient and effective management and accountability of DOD’s hundreds of billions of dollars’ worth of resources require timely, reliable, and useful information. 
However, DOD’s pervasive financial and related business management and system deficiencies continue to adversely affect its ability to control costs; ensure basic accountability; anticipate future costs and claims on the budget; measure performance; maintain funds control; prevent and detect fraud, waste, and abuse; and address pressing management issues. To date, while the U.S. Army Corps of Engineers, Civil Works has achieved a clean audit opinion on its financial statements, none of the military services have. For many years, DOD has annually acknowledged that long-standing weaknesses in its business systems and processes have prevented auditors from determining the reliability of DOD’s financial statement information. We also have previously reported that a weak overall control environment and poor internal controls limit DOD’s ability to prevent and detect fraud, waste, abuse, and improper payments. For example, before awarding contracts or making purchases from the General Services Administration’s Federal Supply Schedule, contracting officers and other agency officials are required to check the Excluded Party List System to ensure that a prospective vendor is not prohibited from doing business with the federal government. However, in February 2009, we reported that failure to follow contract award procedures resulted in DOD’s contracting officers making awards to debarred or suspended companies. Over the years, DOD has initiated numerous efforts intended to improve its financial management practices. In response to a congressional mandate, DOD issued its Financial Improvement and Audit Readiness (FIAR) Plan in December 2005, which it updates twice a year, to outline its strategy for addressing its financial management challenges and achieving clean audit opinions. 
In addition, DOD has taken steps toward developing and implementing a framework for addressing its long-standing financial management weaknesses and improving its capability to provide timely, reliable, and relevant financial information for decision making and reporting, a key defense transformation priority. This framework includes a Standard Financial Information Structure and Business Enterprise Information System, intended to provide standardization in financial reporting. DOD’s efforts should help to improve the consistency and comparability of its financial information and reporting; however, a great deal of work remains to be done. In particular, data cleansing; improvements in policies, processes, and controls; and successful system implementations are needed to improve DOD’s financial management and reporting. We are in the process of reviewing the department’s September 2008 FIAR Plan to determine if there are any areas where improvements are needed to enhance the plan’s effectiveness as a management tool for guiding, monitoring, and reporting on the department’s efforts to identify and resolve its financial management weaknesses and achieve financial statement auditability. We will provide the Committee a copy of the report when it is issued. Key to successful transformation of DOD’s financial management operations will continue to be: development and sustained implementation of a comprehensive and integrated financial management transformation strategy, within an overall business transformation strategy, to guide financial management improvement efforts; prioritization of initiatives and resources; and monitoring of progress through the establishment and use of cascading performance goals, objectives, and metrics. 
We designated strategic human capital management as a high-risk area because of the federal government’s long-standing lack of a consistent approach to human capital management and the continuing need for a governmentwide framework to advance human capital reform. Like other federal agencies, DOD also faces challenges in managing its human capital, particularly with its civilian workforce. With almost 30 percent of its total civilian workforce (about 670,000) becoming eligible to retire in the next few years, DOD may be faced with deciding how to fill numerous mission-critical positions—positions that involve developing policy, providing intelligence, and acquiring weapon systems. Having the right number of civilian personnel with the right skills is critical to achieving the department’s mission. In recent years, Congress has passed legislation requiring DOD to conduct human capital planning efforts for the department’s overall civilian workforce and its senior leaders. Specifically, the National Defense Authorization Act for Fiscal Year 2006 requires DOD to develop a strategic human capital plan, update it annually through 2010, and address eight requirements. The National Defense Authorization Act for Fiscal Year 2007 added nine requirements to the annual update to shape DOD’s senior leader workforce. In February 2009, we reported that while DOD’s 2008 strategic human capital plan update, when compared with its 2007 plan, showed progress in addressing the National Defense Authorization Act for Fiscal Year 2006 requirements, it only partially addressed each of the act’s requirements. For example, DOD identified 25 critical skills and competencies—referred to as enterprisewide mission-critical occupations—which included logistics management and medical occupations. The update, however, did not contain assessments for over half of the 25 occupations, and the completed assessments of future enterprisewide mission-critical occupations did not cover the required 10-year period. 
Also, DOD’s update only partially addressed the act’s requirements for a plan of action for closing the gaps in DOD’s civilian workforce. Although DOD recently established a program management office whose responsibility is to monitor DOD’s updates to the strategic human capital plan, the office, at the time of our review, did not have and did not plan to develop a performance plan that articulates how the legislative requirements will be met. Until such a plan is developed, DOD may not be well positioned to design the best strategies to meet its civilian workforce needs. Regarding plans for DOD’s senior leader workforce, DOD’s 2008 update and related documentation addressed four of the nine requirements in the National Defense Authorization Act for Fiscal Year 2007 but only partially addressed the remaining five. For example, DOD’s update notes that the department has not completely addressed the requirement to assess its need for senior leaders. Although DOD recently established an executive management office to manage the career life cycle of DOD senior leaders, as well as the National Defense Authorization Act for Fiscal Year 2007 requirements, this office, at the time of our review, did not have and did not plan to develop a performance plan to address the act’s requirements. Until DOD develops a performance plan to guide its efforts to strengthen its human capital strategic planning, it may be unable to design the best strategies to meet its senior leader workforce needs. We designated the effective protection of technologies critical to U.S. national interests as a high-risk area due to weaknesses GAO identified in the effectiveness and efficiency of government programs designed to protect such technologies. The U.S. 
government approves selling DOD weapon systems and defense-related technologies overseas for foreign policy, security, and economic reasons and has a number of long-standing programs to identify and protect critical technologies from reverse engineering and illegal export. These include the anti-tamper program, the militarily critical technologies program, and the export control systems for defense-related and dual-use items. DOD is responsible for implementing several of these programs and is a key stakeholder in others. We have identified actions specific to DOD, including that it needs to: develop and provide departmentwide guidance to program managers on how to implement anti-tamper protection, develop an approach to identify and catalogue technologies that best meet the needs of U.S. government programs that control militarily critical technologies, and resolve disagreements with the Department of State on export control exemption use and guidelines. While actions at the agency level can lead to improvements, agencies have yet to take action to address our major underlying concern, which is the need for a fundamental reexamination of current government programs and an evaluation of the potential of alternative approaches to protect critical technologies. Federal agencies, including DOD, face challenges in protecting the security of information technology systems—commonly referred to as cybersecurity—including those systems that support our nation’s critical infrastructures (e.g., the power distribution system and telecommunications networks). Long-standing, pervasive security control weaknesses continue to place national, federal, and DOD assets at risk of inadvertent or deliberate misuse, financial information at risk of unauthorized modification, sensitive information at risk of inappropriate disclosure, and critical operations at risk of disruption. 
Well-publicized computer-based attacks against information technology systems in the United States and other countries show that these threats could have a potentially devastating impact on federal systems and operations and on the critical infrastructures. To address the threats, the President in January 2008 began implementing a series of initiatives—called the Comprehensive National Cybersecurity Initiative—aimed primarily at improving the security of DOD and other information technology systems within the federal government. More recently, in February 2009, the new President initiated a review of the government’s overall cybersecurity strategy and supporting activities with the goal of reporting its findings in April 2009. We currently have work under way for this Committee’s Subcommittee on Terrorism and Unconventional Threats and Capabilities to assess the interagency Comprehensive National Cybersecurity Initiative and its results. We are also examining the progress DOD has made in developing its organizational structure, policies, plans, doctrine, and capabilities for cyber defensive and offensive operations. Without sustained leadership and comprehensive strategic planning, DOD’s ability to achieve and sustain measurable progress in addressing high-risk areas and thereby improving its business operations is at risk. We have long advocated that DOD establish a Chief Management Officer (CMO) to be responsible and accountable for the department’s business transformation and a strategic planning process to direct its efforts and measure progress. DOD’s senior leadership has shown a commitment to transforming business operations and has taken many steps to strengthen its management approach, both in response to congressional requirements and on its own accord. For example, the Secretary of Defense designated the Deputy Secretary of Defense as CMO of the department in May 2007. 
The National Defense Authorization Act for Fiscal Year 2008 subsequently codified the position, created a Deputy CMO, directed that CMO duties be assigned to the Under Secretary of each military department, and required DOD to develop a strategic management plan for business operations. In 2008, DOD issued its first Strategic Management Plan, which it characterizes as a first step toward providing Congress with the comprehensive plan required by law and as a primer for incoming officials that describes newly established and existing structures and processes within DOD to be used by the CMO for delivering effective and efficient support to the warfighter. DOD also issued directives broadly defining the roles and responsibilities of the CMO and Deputy CMO, established a Deputy CMO office, and named an Assistant Deputy CMO to lead the stand-up of the office prior to the nomination and filling of the Deputy CMO position. Prior to these actions, DOD had established various management and governance entities that, in addition to the CMO and Deputy CMO, will comprise the management framework for business transformation, such as the Defense Business Systems Management Committee and the Business Transformation Agency. While DOD has taken several positive steps, it still lacks critical elements needed to ensure successful and sustainable transformation efforts. Specifically, it has not fully or clearly defined the authority, roles, and relationships for some positions and entities. For example, the Deputy CMO position has not been assigned clear decision-making authority or accountability for results, and the position appears to be advisory in nature. Therefore, it is unclear how the creation of the Deputy CMO position changes the existing structure of DOD’s senior leadership. 
It is also unclear how the Deputy CMO will work with other senior leaders across the department who have responsibility for business operations and who are at the same level or even higher, such as the various Under Secretaries of Defense and the military department CMOs. The roles and relationships of various governance entities are similarly unclear. In addition, DOD’s first Strategic Management Plan lacks key information and elements of a strategic plan. For example, it does not clearly define business operations; does not contain goals, objectives, or performance measures; and does not assign accountability for achieving desired results in its transformation efforts. Therefore, the plan cannot be used to link resources to performance, measure progress, or guide the efforts of the military components. DOD plans to update its Strategic Management Plan in July 2009 and every 2 years thereafter, as required by the National Defense Authorization Act for Fiscal Year 2008. We recognize that DOD has only recently established the CMO position and that DOD is in the early stages of implementation for several of its improvement efforts. To help DOD proceed with its efforts, the new administration needs to move quickly to nominate and fill key leadership positions that are currently vacant. These positions include the Deputy CMO and military department CMOs. Moving forward, DOD needs to further define and clarify the roles, responsibilities, and relationships among the various positions and governance entities within DOD’s management framework for business transformation, and develop its strategic management plan and implement a strategic planning process that will allow DOD to measure progress, establish investment priorities, and link resource needs to performance. 
Because of the complexity and long-term nature of DOD’s business transformation efforts, we have repeatedly advocated the need for the CMO to be a separate, full-time position with significant authority, experience, and a set term. As DOD continues to develop its approach and carries out planned additional actions, we remain open to the possibility that further progress will be made and that these efforts will have a positive impact. However, because of the current statutory requirements and the roles and responsibilities currently assigned to key positions, it is still unclear whether DOD will provide the long-term sustained leadership needed to address these significant challenges in its business operations. DOD and DOE have recognized that they face challenges in the selected high-risk areas we have outlined today and have taken some steps to address these challenges. However, the current fiscal climate presents an imperative for both agencies to refocus management attention and commitment at the highest levels and to aggressively take additional actions to achieve greater progress in the key business areas that underpin the ability to achieve mission success. As DOD moves forward, among other things, it will need to continue to reform its approach to acquiring major weapon system programs, fundamentally reexamine its reliance on contractors as well as take action to better size and train its contractor workforce, and develop and implement viable strategies for managing its supply chain and improving its financial management. For DOE’s NNSA, it is important that actions be taken to improve contract and project management in order to reverse the historical trend of schedule delays, cost growth, and increased risks in its major projects. As DOD and DOE compete for resources in a constrained fiscal environment, they can no longer afford to miss opportunities to achieve greater efficiencies and free up resources for higher priority needs. 
Furthermore, because of the complexity and magnitude of the challenges facing DOD in transforming its business operations, it will need strong and sustained leadership, as well as sound strategic planning, to guide and integrate its efforts. The new Deputy Secretary of Defense has been given the unique opportunity to set the precedent going forward as DOD’s statutory Chief Management Officer. It will be important, within the first year of this administration, that the Deputy Secretary of Defense clearly articulate the department’s expectations for this position, clarify the roles, responsibilities, and relationships among all individuals and entities that share responsibility for transforming DOD’s business operations, and establish a strategic planning process to guide efforts and assess progress across the department. Mr. Chairman and Members of the Committee, this concludes my statement. I would be happy to answer any questions you may have at this time. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | The Department of Defense (DOD) spends billions of dollars to sustain key business operations intended to support the warfighter. In January, GAO released its 2009 high-risk series update report for the 111th Congress. This series emphasizes federal programs and operations that are at high risk because of vulnerabilities to fraud, waste, abuse, and mismanagement and has also evolved to draw attention to areas associated with broad-based transformation needed to achieve greater efficiency, effectiveness, and sustainability. 
Of the 30 high-risk areas identified by GAO across government, DOD bears sole responsibility for eight defense specific high-risk areas and shares responsibility for seven other high-risk areas--all of which are related to its major business operations. The Committee asked GAO to provide its views on (1) actions needed to achieve measurable outcomes in DOD's high-risk areas and (2) DOD's progress in strengthening its management approach for business transformation, including establishing the Chief Management Officer (CMO) position. GAO was additionally asked to highlight information regarding the high-risk area related to contract management at the Department of Energy's (DOE) National Nuclear Security Administration. Longstanding weaknesses in DOD's business operations adversely affect the department's economy, efficiency, and effectiveness, and have resulted in a lack of adequate accountability. As a result, DOD continues to experience cost growth in many of these areas and wastes billions of dollars annually that could be freed up for higher priority needs. DOD's senior leadership has shown a commitment to transforming business operations, and taken many steps to address weaknesses. However, additional actions are needed to achieve and sustain progress. DOD has taken some steps to establish the CMO and other key positions, but still lacks some critical elements to strengthen its management approach. The National Defense Authorization Act for Fiscal Year 2008 codified the CMO position, created a Deputy CMO, directed that CMO duties be assigned to the Under Secretary of each military department, and required a strategic plan for business operations. DOD has yet to clearly define the roles, responsibilities, and relationships among key positions, including the Deputy CMO and military department CMOs. Also, its first plan, issued in July 2008, lacks clear goals, objectives, and performance measures. 
As DOD's approach continues to evolve, GAO remains open to the possibility of further progress. However, because of the roles and responsibilities currently assigned to key positions, it is still unclear whether DOD will provide the long-term sustained leadership needed to address significant challenges in its business operations.
Among the 1.4 million active duty servicemembers in fiscal year 2011, over half (57 percent) were married, according to DOD’s data. More than 90 percent of the spouses of active duty servicemembers were women. Recent studies have found that among those in the labor force, being a military spouse is correlated with a higher unemployment rate, compared to civilian spouses. In addition, among those employed, being a military spouse is correlated with a lower wage on average, relative to civilian spouses. Researchers have posited several possible reasons for this. First, military spouses tend to be a younger group than civilian spouses, as well as more likely to be caring for young children. As a result, a larger proportion of military spouses are at the beginning of their careers compared to civilian spouses, and a larger proportion have childrearing responsibilities that may make obtaining or maintaining a job more challenging. Second, military spouses move more often than civilian spouses as a whole, which may make it more difficult to retain jobs and develop careers. Some have also speculated that employers may be less willing to hire military spouses than other populations, for example, if they are concerned that military spouses will relocate. Third, demanding work schedules for the servicemembers may mean that spouses bear a larger share of childrearing or other family responsibilities, particularly when servicemembers are deployed. One recent study controlled for many of these characteristics and found that they explained some, though not all, of the correlation between being a military spouse and having a higher unemployment rate and lower average wage, relative to civilian spouses. Recognizing the challenges that military spouses face in beginning or maintaining a career, DOD has a long history of efforts to help military spouses obtain employment.
The military services have operated employment assistance programs at military installations since the 1980s. While these programs serve spouses, they also serve many other populations in the military community, including dependent children, active duty servicemembers, active Reserve and National Guard members, DOD civilian personnel, servicemembers transitioning to civilian life, wounded warriors, and DOD retirees (see fig. 1). These programs assist in a variety of ways, including providing referrals to job openings, job fairs, one-on-one employment counseling, and workshops on resume writing, networking, entrepreneurship, and other topics. These programs are often located at military installations’ family centers, where a variety of “family readiness services” are provided. These services may include relocation assistance (e.g., providing information on housing, child care, and schooling options), non-medical counseling, financial education and counseling, deployment assistance (e.g., educating servicemembers and their families about challenges they may face and services to help them cope), services for family members with special needs, child abuse and domestic violence prevention and response, emergency family assistance, and transition assistance to help servicemembers separating from the military and their families to reenter the civilian workforce. Over the years, the Congress and the executive branch have sought to enhance the employment assistance provided to military spouses. In 2001, Congress directed DOD to examine its spouse employment programs and develop partnerships with private-sector firms to provide for improved job portability for spouses, among other things. 
A study we conducted in 2002 discussed a number of efforts DOD was making, including holding a “spouse employment summit” to identify needed actions, establishing partnerships with private-sector employers, and seeking the Department of Labor’s assistance to resolve issues with different state residency and licensing requirements for particular occupations. More recently, in 2008, Congress authorized DOD to establish programs to assist spouses of active duty servicemembers in obtaining the education and training required for a degree, credential, education prerequisites, or professional license that expands employment and portable career opportunities. Congress also authorized DOD to establish a pilot program to help military spouses secure internships at federal agencies by reimbursing agencies for the costs associated with the first year of employment of an eligible spouse. In 2011, the administration issued a report identifying commitments federal agencies made to help military spouses obtain employment. In that report, DOD committed to expanding an employer partnership program that the Army initiated in 2003 to the other military services, improving employment counseling, and providing financial assistance to help certain spouses obtain further education. Since 2009, DOD has established three programs targeted to military spouses to help them obtain employment: (1) the Military Spouse Career Advancement Accounts (MyCAA) tuition assistance program; (2) the Military Spouse Employment Partnership (MSEP), which connects military spouses with employers; and (3) the Military Spouse Career Center, consisting of a call center and a website through which spouses can obtain counseling and information. These three programs comprise DOD’s Spouse Education and Career Opportunities (SECO) initiative (see fig. 2). DOD has two goals for its SECO programs: (1) reduce unemployment among military spouses and (2) close their wage gap with civilian spouses. 
Military Spouse Career Advancement Accounts (MyCAA): DOD created the MyCAA program to help spouses obtain further education and training toward a portable career. To enroll in this program, spouses must identify the course of study they want to pursue, develop an educational plan, and apply to DOD for tuition assistance. The tuition funds must be used for education or training toward a portable career field, defined by DOD and the Department of Labor as a high-growth, high-demand career field that is likely to have job openings near military installations. Since its inception in 2009, there have been several changes to the program’s eligibility criteria and benefits. After a pilot period, DOD established that any spouse of an active duty servicemember could participate in the program and could receive up to $6,000 in tuition funds for any continuing education, including educational programs to obtain certificates and licenses, as well as bachelor’s and advanced degrees. Due to concerns about rising costs and enrollment requests, however, DOD (1) tightened the eligibility criteria to target the program to spouses of junior servicemembers, (2) reduced the benefit amount to $4,000, and (3) restricted the funds’ use to the attainment of certificates and licenses for portable careers, not for bachelor’s or advanced degrees. In our site visits and interviews with advocacy groups, some representatives felt that the program should be expanded to allow spouses to obtain higher-level degrees or enable more spouses to use the program. DOD officials said that MyCAA’s revised criteria reflect the original intent of the program and ensure fiscal sustainability. In fiscal year 2011, DOD spent approximately $55 million on the MyCAA program; however, MyCAA’s expenditures have fluctuated as the program has changed.
Specifically, DOD’s spending increased in the first 2 years after it was launched, and then declined 70 percent in its third year, after DOD changed the eligibility criteria, benefit amount, and types of training or educational programs for which the funds could be used (see appendix II for further information on MyCAA expenditures). According to a DOD official, approximately 125,000 spouses received MyCAA tuition assistance from October 2008 to May 2012. Military Spouse Employment Partnership (MSEP): DOD created MSEP in 2011 as an expansion of an Army program to connect spouses from all services to employment opportunities at Fortune 500 companies, nonprofits, and government agencies. Specifically, MSEP establishes partnerships with employers who pledge to offer spouses transferable, portable career opportunities. Any spouse interested in working for these employers then registers for MSEP and accesses MSEP’s web-based portal. The MSEP portal allows spouses to search for job openings posted by participating employers, build their resumes, and apply for jobs. Currently, MSEP is partnering with more than 125 companies, according to DOD. In fiscal year 2011, DOD spent $1.2 million on the MSEP program for the contractors that operate and enhance the web-based portal and work with employers (see appendix II for further information on MSEP expenditures). Military Spouse Career Center (the Career Center): The Career Center consists of a call center, through which spouses can speak with employment counselors, and a website with employment information. The counselors at the call center and the website provide assistance with spouses’ general employment needs, such as exploring career options, resume writing, interviewing, and job searching. In addition, the Career Center helps spouses learn about and navigate DOD’s other spouse employment programs.
For example, spouses interested in MyCAA may speak with a counselor at the Career Center to help them develop their education plan, which is a requirement for receiving MyCAA benefits. Until recently, the Career Center was part of DOD’s Military OneSource, which provides information and referrals to services for servicemembers and their families. Information has not been available on the amount spent on the Career Center because expenditure data for the center was not separated from Military OneSource expenditures. DOD has recently separated the Career Center from Military OneSource. In addition to the three SECO programs, spouses can also receive employment assistance from the long-standing programs operated at military service installations. The services’ programs also provide counseling to spouses and information about DOD’s spouse employment programs, but they differ from the Career Center in that they are provided in person. For example, some of the activities offered for spouses at installations we visited in the Washington, D.C. area include an annual spouse job fair, a dress-for-success workshop with stylists at a department store, and a spouse support group with guest speakers, such as MSEP representatives. DOD officials explained that they created the Career Center to supplement the services’ programs, which may not have been fully meeting the needs of all spouses. The military services’ programs are available only during business hours and may not be accessible to spouses who do not live on a military installation. In contrast, any military spouse may access the Career Center, 24 hours a day, 7 days a week. Furthermore, DOD officials noted that installations vary in the level of employment assistance they provide to spouses.
For example, some of the services’ employment assistance programs are staffed by generalists who provide other types of counseling as well, and many of the services’ programs also serve other members of the military community, such as servicemembers and retirees. In contrast, the Career Center is staffed by counselors with specialized knowledge in employment services, and the counselors are focused specifically on assisting military spouses. With the Career Center, spouses who do not feel that the employment assistance programs at their local installation are meeting their needs have an alternative resource they can turn to. The creation of the new SECO programs has had many benefits, according to advocacy group representatives, program staff, and spouses we interviewed. Officials and spouses agreed that these programs help address unique challenges faced by military spouses, such as frequent relocation to installations with varying services offered. For example, a spouse we spoke with explained how she spoke with a Career Center counselor to identify job opportunities in a rural installation and applied for MyCAA tuition assistance upon relocating to another installation. Additionally, one official with a spouse group praised MSEP for connecting spouses with private sector job opportunities throughout the nation. However, with the establishment of the new SECO programs overlaid on the services’ existing programs, program staff, spouses, and advocacy groups we spoke with expressed some confusion and noted gaps in coordination: A representative from an advocacy group noted that the information spouses are provided about the various employment programs is inconsistent across installations and websites, and the names and terminology used for the programs also varies. This may make it confusing for spouses as they move and seek assistance in different locations. 
The advocacy group representatives also said that while the various programs refer spouses to other programs, spouses may not be provided information to help them make the best use of other programs. For example, they said that staff at the services’ employment assistance programs may refer spouses to the Career Center website but do not inform them about the breadth of services that Career Center counselors can provide. As a result, some spouses may not be aware of the various types of assistance that the Career Center can offer. With regard to MSEP, the representatives said that counselors at the Career Center and the services’ employment assistance programs refer spouses to the MSEP web portal but do not provide them with further guidance on how they can effectively use the portal to obtain a job with an MSEP partner. An advocacy group representative and program managers we spoke with indicated that the various programs’ websites may not be easy to navigate or find. For example, a representative from an advocacy group noted that the Career Center website has good information, but it is difficult for spouses to find it within the Military OneSource website. A program manager at one installation said that some spouses have had difficulty finding the MyCAA website. Another program manager said that the Career Center website does not have links to local installations’ employment assistance programs. Additionally, during our site visits and interviews, we heard about some issues that have been created by having two different programs—the Career Center and the military services’ employment assistance programs—that appear to offer some similar services. 
Specifically, we heard the following accounts about how often spouses are referred to the Career Center, instances where spouses have been referred back and forth between the two programs, and potential duplication of efforts: A program manager at one installation noted that she would not refer spouses to the Career Center unless she was unable to handle the workload. A program manager at one installation said that she refers spouses to the Career Center when she believes the type or level of services they need would be better provided by Career Center counselors. However, she said, in several cases, those spouses have been referred back to her. At a different installation, a program manager said that she encourages her staff to refer spouses to the Career Center because of the quality of services offered. However, she also noted that because the Career Center provides some of the same services as her office (e.g., counseling and help with resume writing and interviewing skills), there is a potential duplication of effort. She said that it would be acceptable to her if her office no longer provided those employment services that the Career Center can provide and instead, focused on delivering other services spouses need. DOD has taken some steps to help spouses navigate among the various programs. Its guidance for its family readiness programs, which the military services’ employment assistance programs are one part of, directs military services’ staff to assess a spouse’s need for SECO services and identify opportunities to refer spouses to other services that support their well-being. In addition, DOD officials said they recently established a policy to ensure that MyCAA participants were referred to the other SECO programs. Beginning in early 2012, spouses who want to enroll in MyCAA are expected to speak with a counselor at the Career Center first and also register for MSEP. 
However, DOD does not currently have guidance describing its overall strategy and how its various programs should coordinate to help spouses obtain employment. According to DOD officials, DOD is in the process of developing such guidance to provide direction on SECO programs and address coordination and referral among the various programs. To do so, DOD has convened an advisory group that includes representatives from all of the services. As DOD develops its new guidance, our prior work on enhancing and sustaining collaboration may be helpful. We identified the following eight practices that can help sustain collaboration across organizational boundaries:

1. Define and articulate a common outcome.
2. Establish mutually reinforcing or joint strategies.
3. Identify and address needs by leveraging resources.
4. Agree on roles and responsibilities.
5. Establish compatible policies, procedures, and other means to operate across agency boundaries.
6. Develop mechanisms to monitor, evaluate, and report on results.
7. Reinforce agency accountability for collaborative efforts through agency plans and reports.
8. Reinforce individual accountability for collaborative efforts through performance management systems.

While all of these are relevant to DOD’s spouse employment programs, two are particularly pertinent because of the issues raised in our site visits and interviews: (1) agreeing on roles and responsibilities, and (2) establishing compatible policies, procedures, and other means to operate across agency boundaries. The concerns about duplication of effort and referrals back and forth between the Career Center and the military services’ programs may indicate that the roles and responsibilities of the two programs may not be sufficiently clear or defined.
Similarly, the inconsistencies and gaps in collaboration may indicate a need to establish compatible policies, procedures, or other operational means, for example, common names and terminology for the programs and new procedures or mechanisms to ensure spouses are informed about the programs that can help them. DOD is not yet able to measure the overall effectiveness of its spouse employment programs in achieving the goals of reducing unemployment among military spouses and the wage gap with civilian spouses. Additionally, DOD has only limited information on the performance of its individual programs. DOD is aware of these limitations and is taking steps to assess the programs’ effectiveness and develop a more robust performance monitoring system. To assess the effectiveness of the three SECO programs, DOD plans to contract with a research organization to conduct a long-term evaluation. DOD officials would like the research organization to examine whether the programs have affected spouses’ unemployment rates and their wage gap with civilian spouses, as well as determine whether the programs have had an effect on servicemembers’ retention in the military and the families’ financial well-being. It is too soon to tell whether this evaluation will be able to measure these possible outcomes and also demonstrate whether the outcomes can be attributed to DOD’s spouse employment programs. DOD officials anticipate establishing the contract for this evaluation in fiscal year 2013. In the meantime, DOD is conducting limited monitoring of the performance of two of its spouse employment programs. First, DOD monitors the number of spouses hired by employers participating in MSEP. Second, DOD tracks the percentage of courses funded by MyCAA tuition assistance that spouses complete with a passing grade. DOD’s performance monitoring is limited for several reasons. First, DOD has no performance measures for the Career Center.
Second, DOD’s data on the MSEP program are of questionable reliability because they derive from an informal, nonstandardized process. Specifically, the data on the number of spouses hired by employers participating in MSEP are collected primarily by Army program managers through informal contacts with spouses. These informal methods create the potential that DOD is not obtaining reliable data. For example, if program managers vary in the questions they ask spouses, information spouses provide may be inconsistent. Moreover, by using data primarily from Army program managers, DOD is missing information from spouses of servicemembers in the Air Force, Marine Corps, and Navy who are working at MSEP employers. Finally, DOD’s performance measure for MyCAA—showing that more than 80 percent of courses funded with MyCAA tuition assistance were completed with passing grades in fiscal year 2011—may be a useful interim measure for monitoring how the funds are being used. However, this does not show whether the MyCAA funds are helping spouses obtain employment or increase their earnings. DOD recognizes the need to improve its performance monitoring for its spouse employment programs and is taking steps to improve the data it collects on its individual programs: For the Career Center, DOD is planning to ask the contractor who runs the call center to follow up with spouses who use the center’s services and ask them about their employment situation. DOD officials said that these follow-ups could be used to obtain information on employment outcomes of spouses who used the center, as well as those who used MyCAA and MSEP programs, since the call center also provides counseling to spouses using those programs. For MSEP, DOD is planning to implement new procedures to collect data from participating employers on the number of military spouses they hire. Spouses hired by an MSEP employer will self-identify to the employer that they are a military spouse. 
Employers will then report to DOD the number of spouses they hired through a reporting mechanism in the MSEP web portal. For MyCAA, DOD has established methods to obtain data on when spouses complete their planned programs of study and the educational degrees they have obtained due to MyCAA funding. DOD’s web-based portal for MyCAA now asks spouses to report to DOD when they are taking a class that will complete their planned program of study. It also asks schools to report when a spouse has obtained a certificate or degree. DOD has also identified four measures that it would like to track, including three broader measures related to the SECO programs’ goals and one measure for MyCAA: (1) spouses’ unemployment rate, (2) the wage gap between military and civilian spouses, (3) spouses’ ability to maintain their jobs or similar jobs after relocation, and (4) the change in earnings among MyCAA participants. DOD has been conducting a survey of spouses biennially to obtain information on military spouses’ unemployment rate, and it will be fielding a new survey in late 2012 to obtain updated information. DOD does not yet have processes for collecting data on a regular basis on the three other measures it is considering. As DOD continues to develop its performance monitoring system, our previous work on developing effective performance measures may be helpful. Specifically, we identified nine key attributes of successful performance measures (see table 1). No set of performance measures is perfect, and a performance measure that lacks a key attribute may still provide useful information. However, these attributes can help identify areas for further refinement. For example, one of the attributes calls for covering core program activities. As we noted above, DOD does not have a performance measure for the Career Center, and thus its measures do not cover all of its core program activities intended to support military spouse employment. 
DOD’s performance measure for MSEP has also been lacking the attribute of reliability, since DOD has not had a standardized process for collecting the data. Reliability refers to whether standard procedures for collecting data or calculating results can be applied to the performance measures so that they would likely produce the same results if applied repeatedly to the same situation. Another key attribute that may be relevant is limiting overlap. There is potential for overlap if DOD has performance measures that track employment outcomes for each of the three SECO programs, but military spouses often use more than one program. For example, if many spouses who use MSEP also use the Career Center, a measure on the number of spouses who obtained employment through MSEP could overlap with a measure to track spouses’ employment through the Career Center, since the two measures would capture employment attainment for many of the same individuals. Because DOD has multiple employment programs that military spouses may use, our work on practices for enhancing and sustaining collaboration, which we discussed above, may offer some helpful insights as DOD refines its performance monitoring system. Specifically, we noted that developing mechanisms for monitoring, evaluating, and reporting on results of collaborative efforts can help agencies identify areas for improvement. We also stated that agencies can use their strategic and annual performance plans as tools to drive collaboration and establish complementary goals and strategies for achieving results. The federal government has two hiring mechanisms targeted specifically to military spouses seeking federal jobs. The first mechanism—a noncompetitive hiring authority for military spouses—is available to any federal agency. The second—DOD’s Military Spouse Preference (MSP) program, which allows DOD to give military spouses preference in hiring for civilian or nonappropriated fund positions—applies only to DOD. 
These two mechanisms can increase a military spouse’s chances of obtaining federal employment, but they do not guarantee that spouses will obtain the jobs they apply for. DOD provides general information to military spouses on these mechanisms through the Career Center’s website and the military services’ employment assistance programs. Civilian personnel offices at local installations may provide more detailed information and also inform spouses about how to apply for DOD and other federal job openings. The noncompetitive authority, which became effective in late fiscal year 2009, allows any federal agency the option of hiring qualified military spouses into the competitive service without going through the competitive examination process. In other words, this authority allows eligible military spouses to be considered separately from other candidates, meaning that military spouses do not have to compete directly against other candidates as is the case under the competitive examination process. To be considered for a position under this authority, military spouses applying for federal jobs indicate in their applications that they would like to be considered and include documentation verifying their eligibility. According to OPM, the purpose of the noncompetitive authority is to minimize disruptions in military families due to permanent relocations, disability, and deaths resulting from active duty service. Agencies can use the noncompetitive authority to hire: (1) spouses who are relocating because of their servicemember’s orders for up to 2 years after the relocation, (2) widows or widowers of servicemembers killed during active duty, and (3) spouses of active duty servicemembers who retired or separated from the military with a 100 percent disability. The extent to which use of this authority results in employment of a military spouse depends on a variety of factors. 
First, federal hiring managers have the discretion whether to consider candidates under this authority for a job vacancy. Second, if the hiring manager chooses to consider candidates under this authority, the hiring manager is not required to select a qualified military spouse, and the manager can ultimately decide to select a qualified candidate other than a military spouse. This authority allows for eligible military spouses to be considered and selected for federal jobs, but it does not provide a hiring preference over other qualified applicants. Federal agencies may also consider using noncompetitive appointment authorities or hiring mechanisms for other populations, such as those for veterans, people with disabilities, and federal employees who lost their jobs due to downsizing or restructuring. OPM officials told us that they conduct oversight of this authority as part of their general oversight of federal agencies’ human capital systems. OPM officials said that thus far, they have found no irregularities in agencies’ use of this hiring mechanism. OPM officials also said they have provided technical assistance and briefings to federal agencies and stakeholders on this authority and other ways to support military families, such as using authorities for hiring veterans. Federal agencies hired about 2,000 military spouses using this hiring authority in the first 2 years of implementation, with more hired in the second year (about 1,200 in fiscal year 2011 and about 800 in fiscal year 2010). The approximately 1,200 military spouses hired in fiscal year 2011 represented about 0.5 percent of all federal hires that year. For context, spouses of active duty servicemembers represented 0.4 percent of the working-age population in 2010. DOD has been the primary user of this authority, hiring 94 percent of all military spouses hired under the authority. 
OPM officials said this was likely due to military spouses’ greater familiarity with DOD, and that DOD is more likely than other agencies to have job openings where military spouses are located. DOD’s Military Spouse Preference (MSP) program provides military spouses priority in selection for DOD positions. The MSP includes two hiring mechanisms—one for spouses seeking DOD civilian positions, and one for spouses seeking DOD nonappropriated fund positions. With regard to the mechanism for DOD civilian positions, the MSP provides hiring preference to qualified spouses for DOD positions if the spouse is among persons determined to be best qualified for the position. The other mechanism provides military spouses with preference in hiring for nonappropriated fund positions below a certain pay level. Nonappropriated fund positions within DOD include those paid for by funds generated from services provided, such as at exchanges, recreation programs, and child care centers on military installations. To be considered for a position under the MSP program, military spouses must register for MSP, provide supporting documentation, and identify which types of jobs they would be willing or able to perform based on their backgrounds and geographic location. When a spouse’s qualifications and desired job characteristics match a job opening, the spouse must submit his or her application through MSP. As with the noncompetitive authority, the extent to which this authority is used depends on several factors. MSP only applies to civilian jobs at DOD that a hiring manager chooses to fill through a competitive process, which generally means that the hiring manager is to consider more than one candidate for the position and select the best-qualified candidate based on job-related criteria. The characteristics of the job opening (location, type, level) must also match the criteria indicated by the spouse when he or she registered for MSP. 
In addition, the spouse must be among the best qualified applicants for the job. Furthermore, according to DOD officials, the agency also uses hiring preferences for other populations who may have a higher priority than the spouse, such as DOD employees whose positions were recently eliminated. If the registered MSP spouse is determined to be among the best qualified applicants, and if there are no other best qualified candidates with a higher priority preference, the hiring manager must select the military spouse for the job. For nonappropriated fund jobs, the MSP program only applies to jobs below a certain pay level. A DOD official said that these positions generally have relatively high turnover rates, so spouses often do not need to use the MSP to obtain the job. DOD’s civilian personnel office oversees the MSP and other preference programs, and officials said that they have found no irregularities in MSP use. DOD’s civilian personnel office also tracks the number of spouses who register for MSP and are placed into jobs on a monthly basis. While these are useful measures of program activity, DOD officials said that they do not provide information on whether the agency is making sufficient use of MSP. Examining sufficiency of MSP use would require a study that takes into account the many complex factors that affect MSP, including how many vacancies DOD had at spouses’ locations, how many vacancies matched the types of jobs spouses identified in their registration as being qualified for, how the qualifications of spouses who applied compared to those of other candidates, and whether other candidates for the position were eligible for special hiring mechanisms as well, such as noncompetitive appointments. DOD officials indicated that such an analysis would be challenging to conduct, and DOD has not attempted a comprehensive study. Nonetheless, DOD officials we spoke with felt that the MSP program had helped a large number of spouses obtain jobs. 
Over the 10-year period of fiscal years 2002 to 2011, a total of about 12,500 military spouses were placed in civil service jobs through the MSP, according to DOD’s data. This number includes both new hires and conversions of DOD employees. The numbers have fluctuated from year to year in this time period, from a low of 890 to a high of 1,722. DOD officials said that the fluctuations likely correspond with overall DOD hiring levels. With regard to nonappropriated fund jobs, DOD does not consistently track the number of spouses hired through the MSP, but overall, about 26,000 military spouses were employed by DOD in nonappropriated fund jobs as of June 2012. This represented 19 percent of all employees in these jobs. While DOD is at an early stage of implementing its new spouse employment programs, it has an opportunity to ensure that a well- coordinated structure is in place to deliver employment services to spouses, and that its system for monitoring performance is well-designed. Specifically, through its advisory group, DOD has the potential to include program stakeholders in a meaningful effort to support spouses and military families, while also ensuring effective delivery of services and addressing potential areas of duplication. As its advisory group moves forward with developing guidance on spouse employment programs, DOD has an opportunity to incorporate practices that can enhance and sustain coordination, including agreeing on roles and responsibilities for both SECO and the military services to provide employment assistance to spouses. Without guidance that incorporates key collaboration practices, DOD may miss opportunities to ensure all spouses consistently receive high quality employment assistance from SECO and the military services and can navigate smoothly from program to program, while avoiding duplication of efforts. 
With regard to its performance monitoring, DOD has taken steps in the right direction by exploring options to collect outcome data and planning for a long-term evaluation. However, as DOD works to identify the performance measures it will use to conduct ongoing monitoring of its programs and report its progress to policymakers, DOD can benefit by considering attributes of successful performance measures. These include ensuring that it uses reliable data and that its performance measures enable it to monitor all of its key program activities and their planned outcomes. Without integrating successful elements of performance measurement into its evaluation efforts, DOD runs the risk that it will not collect sufficient and accurate information to determine if DOD funds are being used in the most effective way to help military spouses obtain employment. To enhance collaboration among the various entities involved in delivering employment services to military spouses and to better monitor the effectiveness of these services, we recommend that the Secretary of Defense take the following actions:

- consider incorporating key practices to sustain and enhance collaboration when developing and finalizing its spouse employment guidance, such as agreeing on roles and responsibilities and developing compatible policies and procedures; and
- consider incorporating key attributes of successful performance measures when developing and finalizing performance measures, such as ensuring reliability of the data used in the measures and covering key program activities.

We provided a draft of this report to the Secretary of Defense and the Director of OPM for review and comment. In DOD's written comments, which are reproduced in appendix III, DOD partially concurred with our recommendations. DOD said that in general, our report correctly addresses the issues concerning collaboration and performance measure development. DOD also provided technical comments, which we incorporated as appropriate.
OPM had no comments. DOD partially concurred with our recommendation to consider incorporating key collaboration practices to sustain and enhance collaboration when developing and finalizing its spouse employment guidance. While DOD said it looked forward to incorporating collaboration practices as the SECO program matures, DOD stated that it has already taken initial action in this area. For example, DOD cited the advisory group it created, as well as partnerships developed with various organizations. Our report recognizes DOD’s efforts. However, these initial actions do not directly address the particular area highlighted in our recommendation—developing and finalizing guidance for its spouse employment programs. As we state in our report, the programs under the SECO initiative are new and there are some gaps in coordination. Thus, we continue to believe that incorporating key collaboration practices into the guidance that DOD is developing, such as agreeing on roles and responsibilities, would be beneficial. This could help ensure that the various entities involved in DOD’s multiple spouse employment programs work cohesively and avoid duplicating efforts while helping military spouses seamlessly navigate across the programs. DOD also partially concurred with our recommendation to consider incorporating key attributes of successful performance measures when developing and finalizing its performance measures. DOD said that it looks forward to improving performance measurement but that it has already taken steps to incorporate key attributes of successful performance measures. For example, DOD said it is developing employment data collection for military spouses directly from MSEP partners and anticipates completion by winter of 2013. We recognize DOD’s efforts to collect additional data. 
However, because DOD is in the early stages of this process, we continue to believe that it can benefit from incorporating attributes of successful performance measures as it further develops its performance monitoring system. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, the Director of OPM, and other interested parties. The report is also available at no charge on the GAO website at www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-7215 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Staff members who made key contributions to this report are listed in appendix IV. We addressed the following research objectives in this study:

1. What efforts has DOD recently made to help military spouses prepare for and obtain employment?
2. What steps has DOD taken to assess the effectiveness of these programs?
3. What hiring mechanisms exist to help military spouses obtain federal jobs?

We identified DOD's recent efforts to help military spouses prepare for and obtain employment by interviewing DOD officials and reviewing literature and documents, including DOD websites, reports, program descriptions, strategic planning documents, and guidance. We focused on identifying programs that would be a primary resource for military spouses to enhance their job skills and increase their employability, identify job opportunities, and/or help them obtain employment. We did not include in our review programs that may have an employment focus but generally did not serve spouses of active duty servicemembers, nor did we include programs that may target military spouses but had a primary focus other than employment.
We developed a preliminary list of programs that included the three programs under DOD's Spouse Education and Career Opportunities (SECO) initiative, as well as the employment assistance programs that the military services have long operated. We shared our list with DOD officials, who agreed with our assessment that these are the primary programs that provide employment services to spouses. Our first objective is focused primarily on the three new SECO programs, but we note that spouses can also use the military services' employment assistance programs, and we discuss coordination across the SECO and military services' programs. As we identified the programs to examine, we conducted further interviews with officials involved in each of DOD's spouse employment programs, both at DOD headquarters and with each of the military services (Air Force, Army, Marine Corps, and Navy). We requested more detailed information, such as on the programs' purposes, budgets, services they provide, and coordination. In examining coordination among the programs, we consulted GAO's prior work that identified practices that can help federal agencies enhance and sustain collaboration. To obtain additional perspectives, we interviewed two advocacy groups that support military families, and we visited employment assistance programs at three military installations in the Washington, D.C., area. These installations are: Fort Meade (Army and Navy programs), Joint Base Andrews (Air Force), and Henderson Hall (Marine Corps). During our visits to these installations, we spoke with local program officials, and we spoke to military spouses in three of the four services. The information we obtained from these installations is not generalizable.
To identify the steps DOD has taken to assess the effectiveness of its spouse employment programs, we interviewed DOD officials to obtain information on how DOD is currently measuring effectiveness, as well as its plans to conduct evaluations or collect data on performance. We also reviewed documents DOD provided, including internal and external reports, strategic planning documents, and descriptions of existing and potential performance measures. In assessing DOD's performance measures, we consulted GAO's prior work that identified attributes of successful performance measures and described requirements for reporting on performance under the Government Performance and Results Act, as amended. We also assessed the reliability of the data used for DOD's performance measures by interviewing officials knowledgeable about the data and reviewing relevant documents. Based on our work, we determined that the data used for the performance measure on the Military Spouse Employment Partnership (MSEP) are of questionable reliability, and we discuss this in our report. We identified the hiring mechanisms intended to help military spouses obtain federal employment by interviewing officials at OPM and DOD and reviewing relevant federal laws, regulations, executive orders, and guidance. Our interviews and the documents also provided information on the processes for how these mechanisms can be used. To obtain data on the number of spouses hired through one of these mechanisms, the noncompetitive authority, we analyzed data from OPM's Central Personnel Data File (CPDF), a database of federal employees. We present data for fiscal years 2010 and 2011, since the authority was implemented in late fiscal year 2009. We reviewed the reliability of the data by interviewing OPM officials, conducting electronic testing, and reviewing relevant documents. We determined that the data were sufficiently reliable for our purposes.
The information we present from our analysis of the CPDF is on the number of individuals hired under the noncompetitive authority for military spouses. It does not include military spouses hired by federal agencies without using this authority, and as such does not represent the total number of military spouses hired by federal agencies. Data are not available in the CPDF to identify the total number of military spouses hired by federal agencies. On the other hiring mechanism that we examined, the Military Spouse Preference (MSP) program, DOD’s civilian personnel office provided us with data on the number of spouses placed into civil service positions, including both hires and conversions. We assessed the reliability of this data by reviewing relevant documents and interviewing DOD officials on the processes through which the data are input and validated. We determined that the data were sufficiently reliable for our purposes. With regard to the number of spouses hired into nonappropriated fund positions using the MSP, DOD officials noted that such data are not collected in a consistent manner by the military services’ nonappropriated fund offices so we do not present these data. DOD’s Defense Manpower Data Center provided us with data on the number of spouses in nonappropriated fund positions overall, and we present this information in our report for context. Table 2 provides information on DOD’s expenditures on the two military spouse employment programs for which data were available—MyCAA and MSEP. Data were not available on how much was spent on the Career Center because the center was included in DOD’s broader contract for Military OneSource. According to a DOD official, DOD intends to have an overall SECO budget that encompasses the three spouse employment programs for fiscal year 2013. 
Data were also unavailable on how much DOD spends on spouse employment activities on local installations because the resources used for military services' employment programs are embedded in broader budget categories, such as base operations and support. In addition to the contact named above, Lori Rectanus (Assistant Director), Keira Dembowski, and Yunsian Tai made significant contributions to this report. Also contributing to this report were James Bennett, David Chrisinger, Brenda Farrell, Cynthia Grant, Joel Green, Yvonne Jones, Kirsten Lauber, Kathy Leslie, Benjamin Licht, Trina Lewis, James Rebbe, Sarah Veale, and Gregory Wilmoth.

The approximately 725,000 spouses of active duty servicemembers face challenges to maintaining a career, including having to move frequently. Their employment is often important to the financial well-being of their families. For these reasons, DOD has taken steps in recent years to help military spouses obtain employment. Moreover, the federal government has hiring mechanisms to help military spouses obtain federal jobs. The National Defense Authorization Act for Fiscal Year 2012 requires GAO to report on the programs that help military spouses obtain jobs. This report examines: (1) DOD's recent efforts to help military spouses obtain employment, (2) DOD's steps to assess effectiveness of these efforts, and (3) the hiring mechanisms to help military spouses obtain federal jobs. GAO conducted interviews with DOD, the Office of Personnel Management, and two advocacy groups; conducted site visits; analyzed relevant data; and reviewed relevant documents, laws, and regulations.
The Department of Defense (DOD) has recently created three new programs to help military spouses obtain employment: (1) the Military Spouse Career Advancement Accounts (MyCAA) tuition assistance program, (2) the Military Spouse Employment Partnership (MSEP), which connects military spouses with employers, and (3) the Military Spouse Career Center, consisting of a call center and a website for military spouses to obtain counseling and information. DOD's goals for these programs are to reduce unemployment among military spouses and close their wage gap with civilian spouses. Aside from these new programs, military spouses can also use employment assistance programs that the military services have long operated on DOD installations. However, GAO's site visits and interviews indicate that there may be gaps in coordination across the various programs that result in confusion for military spouses. Currently, DOD does not have guidance describing its overall strategy and how all of its programs should coordinate to help military spouses obtain employment, but DOD is in the process of developing such guidance. DOD is not yet able to measure the overall effectiveness of its military spouse employment programs and its performance monitoring is limited, but DOD is taking steps to improve its monitoring and evaluation. To determine whether its programs have been effective in reducing unemployment among military spouses and closing their wage gap with civilian spouses, DOD is planning to contract with a research organization for a long-term evaluation. With regard to its performance monitoring for these programs, DOD has performance measures for MSEP and MyCAA, but has no measures for the Career Center. In addition, reliability of the data is questionable on the MSEP performance measure because DOD's data are derived from an informal and inconsistent process.
DOD's other measure--the percentage of courses funded by MyCAA tuition assistance that military spouses complete with a passing grade--is a useful interim measure for monitoring how the funds are being used, but it does not provide information on whether the funds help military spouses obtain employment. DOD has efforts underway to improve its performance monitoring, including identifying additional measures it would like to track and collecting additional data on participants' employment and educational outcomes. The federal government has two hiring mechanisms that can provide military spouses who meet the eligibility criteria with some advantages in the federal hiring process. The first mechanism--a non-competitive authority--allows federal agencies the option of hiring qualified military spouses without going through the competitive process. The second mechanism--DOD's Military Spouse Preference program--provides military spouses priority in selection for certain DOD jobs. These hiring mechanisms can increase a military spouse's chances of obtaining federal employment, but they do not guarantee that military spouses will obtain the job they apply for. In fiscal year 2011, agencies used the noncompetitive authority to hire about 1,200 military spouses, which represented approximately 0.5 percent of all federal hires that year. Military spouses represented 0.4 percent of the working-age population in 2010. With regard to the Military Spouse Preference program, DOD has placed about 12,500 military spouses into civil service jobs in the past 10 years, which includes both new hires and conversions of DOD employees. GAO recommends that DOD consider incorporating (1) key collaboration practices as it develops its spouse employment guidance, and (2) key attributes of successful performance measures as it develops and finalizes its performance measures.
In fiscal year 2009, the federal government spent over $4 billion specifically to improve the quality of our nation’s 3 million teachers through numerous programs across the government. Teacher quality can be enhanced through a variety of activities, including training, recruitment, and curriculum and assessment tools. In turn, these activities can influence student learning and ultimately improve the global competitiveness of the American workforce in a knowledge-based economy. Federal efforts to improve teacher quality have led to the creation and expansion of a variety of programs across the federal government. However, there is no governmentwide strategy to minimize fragmentation, overlap, or potential duplication among these programs. Specifically, GAO identified 82 distinct programs designed to help improve teacher quality, either as a primary purpose or as an allowable activity, administered across 10 federal agencies. Many of these programs share similar goals. For example, 9 of the 82 programs support improving the quality of teaching in science, technology, engineering, and mathematics (STEM subjects) and these programs alone are administered across the Departments of Education, Defense, and Energy; the National Aeronautics and Space Administration; and the National Science Foundation. Further, in fiscal year 2010, the majority (53) of the programs GAO identified supporting teacher quality improvements received $50 million or less in funding and many have their own separate administrative processes. The proliferation of programs has resulted in fragmentation that can frustrate agency efforts to administer programs in a comprehensive manner, limit the ability to determine which programs are most cost effective, and ultimately increase program costs. For example, eight different Education offices administer over 60 of the federal programs supporting teacher quality improvements, primarily in the form of competitive grants. 
Education officials believe that federal programs have failed to make significant progress in helping states close achievement gaps between schools serving students from different socioeconomic backgrounds, because, in part, federal programs that focus on teaching and learning of specific subjects are too fragmented to help state and district officials strengthen instruction and increase student achievement in a comprehensive manner. While Education officials noted, and GAO concurs, that a mixture of programs can target services to underserved populations and yield strategic innovations, the current programs are not structured in a way that enables educators and policymakers to identify the most effective practices to replicate. According to Education officials, it is typically not cost-effective to allocate the funds necessary to conduct rigorous evaluations of small programs; therefore, small programs are unlikely to be evaluated. Finally, it is more costly to administer multiple separate federal programs because each program has its own policies, applications, award competitions, reporting requirements, and, in some cases, federal evaluations. While all of the 82 federal programs GAO identified support teacher quality improvement efforts, several overlap in that they share more than one key program characteristic. For example, teacher quality programs may overlap if they share similar objectives, serve similar target groups, or fund similar activities. GAO previously reported that 23 of the programs administered by Education in fiscal year 2009 had improving teacher quality as a specific focus, which suggested that there may be overlap among these and other programs that have teacher quality improvements as an allowable activity. When looking across a broader set of criteria, GAO found that 14 of the programs administered by Education overlapped with another program with regard to allowable activities as well as shared objectives and target groups (see fig. 1). 
For example, the Transition to Teaching program and Teacher Quality Partnership Grant program can both be used to fund similar teacher preparation activities through institutions of higher education for the purpose of helping individuals from nonteaching fields become qualified to teach. Although there is overlap among these programs, several factors make it difficult to determine whether there is unnecessary duplication. First, when similar teacher quality activities are funded through different programs and delivered by different entities, some overlap can occur unintentionally, but is not necessarily wasteful. For example, a local school district could use funds from the Foreign Language Assistance program to pay for professional development for a teacher who will be implementing a new foreign language course, and this teacher could also attend a summer seminar on best practices for teaching the foreign language at a Language Resource Center. Second, by design, individual teachers may benefit from federally funded training or financial support at different points in their careers. Specifically, the teacher from this example could also receive teacher certification through a program funded by the Teachers for a Competitive Tomorrow program. Further, both broad and narrowly targeted programs exist simultaneously, meaning that the same teacher who receives professional development funded from any one or more of the above three programs might also receive professional development that is funded through Title I, Part A of ESEA. The actual content of these professional development activities may differ though, since the primary goal of each program is different. In this example, it would be difficult to know whether the absence of any one of these programs would make a difference in terms of the teacher’s ability to teach the new language effectively. 
In addition, our larger body of work on federal education programs has also found a wide array of programs with similar objectives, target populations, and services across multiple federal agencies. This includes a number of efforts to catalogue and determine how much is spent on a wide variety of federally funded education programs. For example:

- In 2010, we reported that the federal government provided an estimated $166.9 billion over the 3-year period during fiscal years 2006 to 2008 to administer 151 different federal K-12 and early childhood education programs.
- In 2005, we identified 207 federal education programs that support science, technology, engineering, and mathematics (STEM) administered by 13 federal civilian agencies.

In past work, GAO and Education's Inspector General have concluded that improved planning and coordination could help Education better leverage expertise and limited resources and anticipate and develop options for addressing potential problems among the multitude of programs it administers. Generally, GAO has reported that uncoordinated program efforts can waste scarce funds, confuse and frustrate program customers, and limit the overall effectiveness of the federal effort. GAO identified key practices that can help enhance and sustain collaboration among federal agencies, which include:

- establishing mutually reinforcing or joint strategies to achieve the outcome;
- identifying and addressing needs by leveraging resources;
- agreeing upon agency roles and responsibilities;
- establishing compatible policies, procedures, and other means to operate across agency boundaries;
- developing mechanisms to monitor, evaluate, and report on the results of collaborative efforts;
- reinforcing agency accountability for collaborative efforts through agency plans and reports; and
- reinforcing individual accountability for collaborative efforts through agency performance management systems.
In 2009, GAO recommended that the Secretary of Education work with other agencies as appropriate to develop a coordinated approach for routinely and systematically sharing information that can assist federal programs, states, and local providers in achieving efficient service delivery. Education has established working groups to help develop more effective collaboration across Education offices, and has reached out to other agencies to develop a framework for sharing information on some teacher quality activities, but it has noted that coordination efforts do not always prove useful and cannot fully eliminate barriers to program alignment, such as programs with differing definitions for similar populations of grantees, which create an impediment to coordination. However, given the large number of teacher quality programs and the extent of overlap, it is unlikely that improved coordination alone can fully mitigate the effects of the fragmented and overlapping federal effort. In our work we have identified multiple barriers to collaboration, including the conflicting missions of agencies; challenges reaching consensus on priorities; and incompatible procedures, processes, data, and computer systems. As this Subcommittee considers its annual spending priorities, it may be an opportune time to consider options for addressing fragmentation and overlap among federal teacher quality programs and what is known about how well these programs are achieving their objectives. As you consider options for how to address fragmentation, overlap, and potential duplication, I would like to highlight three approaches for you to consider:

1. enhancing program evaluations and performance information;
2. fostering coordination and strategic planning for program areas that span multiple federal agencies; and
3. consolidating existing programs.
Information about the effectiveness of programs can help guide policymakers and program managers in making tough decisions about how to prioritize the use of scarce resources and improve the efficiency of existing programs. However, there can be many challenges to obtaining this information. For example, it may not be cost-effective to allocate the funds necessary to conduct rigorous evaluations of the many small programs and, as a result, these programs are unlikely to be evaluated. As we have reported, many programs, especially smaller programs, have not been evaluated, which can limit the ability of Congress to make informed decisions about which programs to continue, expand, modify, consolidate, or eliminate. For example:

- In 2009, we reported that while evaluations have been conducted, or are under way, for about two-fifths of the 23 teacher quality programs we identified, little is known about the extent to which most programs are achieving their desired results.
- In 2010, GAO reported that there were 151 different federal K-12 and early childhood education programs but that more than half of these programs have not been evaluated, including 8 of the 20 largest programs, which together account for about 90 percent of total funding for these programs.

Recognizing the importance of program evaluations, as part of its high priority performance goals in its 2011 budget and performance plan, Education has proposed implementation of a comprehensive approach to inform its policies and major initiatives. Specifically, it has proposed to (1) increase by two-thirds the number of its discretionary programs that use evaluation, performance measures, and other program data, (2) implement rigorous evaluations of its highest priority programs and initiatives, and (3) ensure that newly authorized discretionary programs include a rigorous evaluation component.
However, Education has noted that linking performance of specific outcomes to federal education programs is complicated. For example, federal education funds often support state or local efforts, making it difficult to assess the federal contribution to performance of specific outcomes, and it can be difficult to isolate the effect of a single program given the multitude of programs that could potentially affect outcomes. There are also governmentwide strategies that may play an important role. Specifically, in January 2011, the President signed the GPRA Modernization Act of 2010 (GPRAMA), updating the almost two-decades-old Government Performance and Results Act (GPRA). Implementing provisions of the new act—such as its emphasis on establishing outcome-oriented goals covering a limited number of crosscutting policy areas—could play an important role in clarifying desired outcomes and addressing program performance spanning multiple organizations. Specifically, GPRAMA requires (1) disclosure of information about the accuracy and reliability of performance data, (2) identification of crosscutting management challenges, and (3) quarterly reporting on priority goals on a publicly available Web site. Additionally, GPRAMA significantly enhances requirements for agencies to consult with Congress when establishing or adjusting governmentwide and agency goals. The Office of Management and Budget (OMB) and agencies are to consult with relevant committees, obtaining majority and minority views, about proposed goals at least once every 2 years. This information can inform deliberations on spending priorities and help re-examine the fundamental structure, operation, funding, and performance of a number of federal education programs.
However, to be successful, it will be important for agencies to build the analytical capacity both to use performance information and to ensure its quality—both in terms of staff trained to do the analysis and the availability of research and evaluation resources. Where programs cross federal agencies, Congress can establish requirements to ensure federal agencies are working together on common goals. For example, Congress mandated—through the America COMPETES Reauthorization Act of 2010—that the Office of Science and Technology Policy develop and maintain an inventory of STEM education programs, including documentation of the effectiveness of these programs; assess the potential overlap and duplication of these programs; determine the extent of evaluations; and develop a 5-year strategic plan for STEM education, among other things. In establishing these requirements, Congress put in place a mechanism to provide information that can inform its decisions about strategic priorities. Consolidating existing programs is another option for Congress to address fragmentation, overlap, and duplication. In the education area, Congress consolidated several bilingual education programs into the English Language Acquisition State Grant Program as part of the 2001 ESEA reauthorization. As we reported prior to the consolidation, the existing bilingual programs shared the same goals, targeted the same types of children, and provided similar services. In consolidating these programs, Congress gave state and local educational agencies greater flexibility in the design and administration of language instructional programs. Congress has another opportunity to address these issues through the pending reauthorization of the ESEA. 
Specifically, to minimize any wasteful fragmentation and overlap among teacher quality programs, Congress may choose either to eliminate programs that are too small to evaluate cost-effectively or to combine programs serving similar target groups into a larger program. Education has already proposed combining 38 programs into 11 programs in its reauthorization proposal, which could allow the agency to dedicate a higher portion of its administrative resources to monitoring programs for results and providing technical assistance. Congress might also include legislative provisions to help Education reduce fragmentation, such as by giving the agency broader discretion to move resources away from certain programs. Congress could provide Education guidelines for selecting these programs. For example, Congress could allow Education discretion to consolidate programs with administrative costs exceeding a certain threshold, or programs that fail to meet performance goals, into larger or more successful programs. Finally, to the extent that overlapping programs continue to be authorized, they could be better aligned with each other in a way that allows for comparison and evaluation to ensure they are complementary rather than duplicative. In conclusion, removing and preventing unnecessary duplication, overlap, and fragmentation among federal teacher quality programs is clearly challenging. These are difficult issues to address because they may require agencies and Congress to re-examine, within and across various mission areas, the fundamental structure, operation, funding, and performance of a number of long-standing federal programs or activities. 
Implementing provisions of GPRAMA—such as its emphasis on establishing priority outcome-oriented goals, including those covering crosscutting policy areas—could play an important role in clarifying desired outcomes, addressing program performance spanning multiple agencies, and facilitating future actions to reduce unnecessary duplication, overlap, and fragmentation. Further, by ensuring that Education conducts rigorous evaluations of key programs, Congress could obtain additional information on program performance to better inform its decisions on spending priorities. Sustained attention and oversight by Congress will also be critical. Thank you, Chairman Rehberg, Ranking Member DeLauro, and Members of the Subcommittee. This concludes my prepared statement. I would be pleased to answer any questions you may have. For further information on this testimony, please contact George A. Scott, Director, Education, Workforce, and Income Security, who may be reached at (202) 512-7215, or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. This statement will be available at no charge on the GAO Web site at http://www.gao.gov. Opportunities to Reduce Fragmentation, Overlap, and Potential Duplication in Federal Teacher Quality and Employment and Training Programs. GAO-11-509T. Washington, D.C.: April 6, 2011. List of Selected Federal Programs That Have Similar or Overlapping Objectives, Provide Similar Services, or Are Fragmented Across Government Missions. GAO-11-474R. Washington, D.C.: March 18, 2011. Opportunities to Reduce Potential Duplication in Government Programs, Save Tax Dollars, and Enhance Revenue. GAO-11-441T. Washington, D.C.: March 3, 2011. Opportunities to Reduce Potential Duplication in Government Programs, Save Tax Dollars, and Enhance Revenue. GAO-11-318SP. Washington, D.C.: March 1, 2011. 
Department of Education: Improved Oversight and Controls Could Help Education Better Respond to Evolving Priorities. GAO-11-194. Washington, D.C.: February 10, 2011. Federal Education Funding: Overview of K-12 and Early Childhood Education Programs. GAO-10-51. Washington, D.C.: January 27, 2010. English Language Learning: Diverse Federal and State Efforts to Support Adult English Language Learning Could Benefit from More Coordination. GAO-09-575. Washington, D.C.: July 29, 2009. Teacher Preparation: Multiple Federal Education Offices Support Teacher Preparation for Instructing Students with Disabilities and English Language Learners, but Systematic Departmentwide Coordination Could Enhance This Assistance. GAO-09-573. Washington, D.C.: July 20, 2009. Teacher Quality: Sustained Coordination among Key Federal Education Programs Could Enhance State Efforts to Improve Teacher Quality. GAO-09-593. Washington, D.C.: July 6, 2009. Teacher Quality: Approaches, Implementation, and Evaluation of Key Federal Efforts. GAO-07-861T. Washington, D.C.: May 17, 2007. Higher Education: Science, Technology, Engineering, and Mathematics Trends and the Role of Federal Programs. GAO-06-702T. Washington, D.C.: May 3, 2006. Higher Education: Federal Science, Technology, Engineering, and Mathematics Programs and Related Trends. GAO-06-114. Washington, D.C.: October 12, 2005. Special Education: Additional Assistance and Better Coordination Needed among Education Offices to Help States Meet the NCLBA Teacher Requirements. GAO-04-659. Washington, D.C.: July 15, 2004. Special Education: Grant Programs Designed to Serve Children Ages 0-5. GAO-02-394. Washington, D.C.: April 25, 2002. Head Start and Even Start: Greater Collaboration Needed on Measures of Adult Education and Literacy. GAO-02-348. Washington, D.C.: March 29, 2002. Bilingual Education: Four Overlapping Programs Could Be Consolidated. GAO-01-657. Washington, D.C.: May 14, 2001. 
Early Education and Care: Overlap Indicates Need to Assess Crosscutting Programs. GAO/HEHS-00-78. Washington, D.C.: April 28, 2000. Education and Care: Early Childhood Programs and Services for Low-Income Families. GAO/HEHS-00-11. Washington, D.C.: November 15, 1999. Federal Education Funding: Multiple Programs and Lack of Data Raise Efficiency and Effectiveness Concerns. GAO/T-HEHS-98-46. Washington, D.C.: November 6, 1997. Multiple Teacher Training Programs: Information on Budgets, Services, and Target Groups. GAO/HEHS-95-71FS. Washington, D.C.: February 22, 1995. Early Childhood Programs: Multiple Programs and Overlapping Target Groups. GAO/HEHS-95-4FS. Washington, D.C.: October 31, 1994. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

This testimony discusses the findings from our recent work on fragmentation, overlap, and potential duplication in federally funded programs that support teacher quality. We recently issued a report addressing fragmentation, overlap, and potential duplication in federal programs that outlined opportunities to reduce potential duplication across a wide range of federal programs, including teacher quality programs. Our recent work on teacher quality programs builds on a long history of work in which we identified a number of education programs with similar goals, beneficiaries, and allowable activities that are administered by multiple federal agencies. This work may help inform congressional deliberations over how to prioritize spending given the rapidly building fiscal pressures facing our nation's government. 
In recent years, the Department of Education (Education) has faced expanded responsibilities that have challenged the department to strategically allocate resources to balance new duties with ongoing ones. For example, we reported that the number of grants Education awarded increased from about 14,000 in 2000 to about 21,000 just 2 years later and has since remained around 18,000, even as the number of full-time equivalent staff decreased by 13 percent from fiscal years 2000 to 2009. New programs often increase Education's workload, requiring staff to develop new guidance and provide technical assistance to program participants. Our work examining fragmentation, overlap, and potential duplication can help inform decisions on how to prioritize spending, which could also help Education address these challenges and better allocate scarce resources. In particular, our recent work identified 82 programs supporting teacher quality, which are characterized by fragmentation and overlap. Fragmentation of programs exists when programs serve the same broad area of national need but are administered across different federal agencies or offices. Program overlap exists when multiple agencies or programs have similar goals, engage in similar activities or strategies to achieve them, or target similar beneficiaries. Overlap and fragmentation among government programs or activities can be harbingers of unnecessary duplication. Given the challenges associated with fragmentation, overlap, and potential duplication, careful, thoughtful actions will be needed to address these issues. This testimony draws upon the results of our recently issued report and our past work and addresses (1) what is known about fragmentation, overlap, and potential duplication among teacher quality programs and (2) additional ways that Congress could minimize fragmentation, overlap, and duplication among these programs. 
We identified 82 distinct programs designed to help improve teacher quality administered across 10 federal agencies, many of which share similar goals. However, there is no governmentwide strategy to minimize fragmentation, overlap, or potential duplication among these programs. The fragmentation and overlap of teacher quality programs can frustrate agency efforts to administer programs in a comprehensive manner, limit the ability to determine which programs are most cost effective, and ultimately increase program costs. In addition, our larger body of work on federal education programs has also found a wide array of programs with similar objectives, target populations, and services across multiple federal agencies. In past work, GAO and Education's Inspector General have concluded that improved planning and coordination could help Education better leverage expertise and limited resources; however, given the large number of teacher quality programs and the extent of overlap, it is unlikely that improved coordination alone can fully mitigate the effects of the fragmented and overlapping federal effort. Sustained congressional oversight can also play a key role in addressing these issues. Congress could address these issues through legislation, particularly through the pending reauthorization of the Elementary and Secondary Education Act of 1965 (ESEA), and Education has already proposed combining 38 programs into 11 programs in its reauthorization and fiscal year 2012 budget proposals. Further, actions taken by Congress in the past demonstrate ways this Subcommittee can address these issues. However, effective oversight may be challenging as many of the programs we identified, especially smaller programs, have not been evaluated.
In our 2015 annual report, we identify 12 new areas in which we found evidence of fragmentation, overlap, or duplication, and we present 20 actions to executive branch agencies and Congress to address these issues. As described in table 1, these areas span a wide range of federal functions or missions. We consider programs or activities to be fragmented when more than one federal agency (or more than one organization within an agency) is involved in the same broad area of national need, which may result in inefficiencies in how the government delivers services. We identified fragmentation in multiple programs we reviewed. For example, in our 2015 annual report, we reported that oversight of consumer product safety involves at least 20 federal agencies, including the Consumer Product Safety Commission (CPSC), resulting in fragmented oversight across agencies. Although agencies reported that the involvement of multiple agencies with various expertise can help ensure more comprehensive oversight by addressing a range of safety concerns, they also noted that fragmentation can result in unclear roles and potential regulatory gaps. Although a number of agencies have a role, no single entity has the expertise or authority to address the full scope of product safety activities. We suggested that Congress consider establishing a formal comprehensive oversight mechanism for consumer product safety agencies to address crosscutting issues as well as inefficiencies related to fragmentation and overlap, such as communication and coordination challenges and jurisdictional questions between agencies. Mechanisms could include, for example, formalizing relationships and agreements among consumer product safety agencies or establishing an interagency work group. 
CPSC, the Department of Homeland Security (DHS), the Department of Housing and Urban Development, and the Department of Commerce's National Institute of Standards and Technology agreed with GAO's matter for congressional consideration, while the remaining agencies neither agreed nor disagreed. Fragmentation can also be a harbinger for overlap or duplication. Overlap occurs when multiple agencies or programs have similar goals, engage in similar activities or strategies to achieve them, or target similar beneficiaries. We found overlap among federal programs or initiatives in a variety of areas, including nonemergency medical transportation (NEMT). Forty-two programs across six different federal departments provide NEMT to individuals who cannot provide their own transportation due to age, disability, or income constraints. For example, NEMT programs at both Medicaid, within the Department of Health and Human Services (HHS), and the Department of Veterans Affairs (VA) have similar goals (to help their respective beneficiaries access medical services), serve potentially similar beneficiaries (those individuals who have disabilities, are low income, or are elderly), and engage in similar activities (providing NEMT directly or indirectly). We found a number of challenges to coordination for these NEMT programs. For example, Medicaid and VA largely do not participate in NEMT coordination activities in the states we visited, in part because both programs are designed to serve their own populations of eligible beneficiaries and the agencies are concerned that without proper controls payments could be made for services to ineligible individuals. However, because Medicaid and VA are important to NEMT, as they provide services to potentially over 90 million individuals, greater interagency cooperation—with appropriate controls and safeguards to prevent improper payments—could enhance services to transportation-disadvantaged individuals and save money. 
An interagency coordinating council was established to enhance federal, state, and local coordination activities, and it has taken some actions to address human service transportation program coordination. However, the council has not convened since 2008 and has provided only limited leadership. For example, the council has not issued key guidance documents that could promote coordination, including an updated strategic plan. To improve efficiency, we recommended that the Department of Transportation (DOT), which chairs the interagency coordinating council, take steps to enhance coordination among the programs that provide NEMT. In response, DOT agreed that more work is needed to increase coordination activities with all HHS agencies, especially the Centers for Medicare & Medicaid Services (CMS). DOT also said the Federal Transit Administration is asking its technical assistance centers to assist in developing responses to NEMT challenges. In other aspects of our work, we found evidence of duplication, which occurs when two or more agencies or programs are engaged in the same activities or provide the same services to the same beneficiaries. An example of duplicative federal efforts is the US Family Health Plan (USFHP)—a statutorily required component of the Department of Defense's (DOD) Military Health System—and TRICARE Prime, which offers the same benefits to military beneficiaries. The USFHP was initially incorporated into the Military Health System in 1982 when Congress enacted legislation transferring ownership of certain U.S. Public Health Service hospitals to specific health care providers, referred to as designated providers under the program. During the implementation of the TRICARE program in the 1990s, Congress required the designated providers to offer the TRICARE Prime benefit to their enrollees in accordance with the National Defense Authorization Act for Fiscal Year 1997. 
Today, the USFHP remains a health care option required by statute to be available to eligible beneficiaries in certain locations, despite TRICARE's national presence through the managed care support contractors. However, the USFHP has largely remained unchanged, and its role within the Military Health System has not been reassessed since. DOD contracts with managed care support contractors to administer TRICARE Prime—TRICARE's managed care option—in three regions in the United States (North, South, and West). Separately, TRICARE Prime is offered through the USFHP by designated providers in certain locations within the same three TRICARE regions that are served by a managed care support contractor. Thus, the USFHP offers military beneficiaries the same TRICARE Prime benefit that is offered by the managed care support contractors across much of the same geographic service areas and through many of the same providers. As a result, DOD has incurred added costs by paying the USFHP designated providers to simultaneously administer the same TRICARE Prime benefit to the same population of eligible beneficiaries in many of the same locations as the managed care support contractors. To eliminate this duplication within DOD's health system and potentially save millions of dollars, we suggested that Congress terminate the statutorily required USFHP. In addition to areas of fragmentation, overlap, and duplication, our 2015 report identified 46 actions that the executive branch and Congress can take to reduce the cost of government operations and enhance revenue collections for the U.S. Treasury in 12 areas. These opportunities for executive branch or congressional action exist in a wide range of federal government missions (see table 2). 
Examples of opportunities to reduce costs or enhance revenue collections from our 2015 annual report include updating the way Medicare pays certain cancer hospitals, rescinding unobligated funds, and re-examining the appropriate size of the Strategic Petroleum Reserve. Updating the way Medicare pays certain cancer hospitals: To better control Medicare spending and generate cost savings of almost $500 million per year, Congress should consider changing Medicare's cost-based payment methods for certain cancer hospitals. Medicare pays the majority of hospitals using an approach known as the inpatient and outpatient prospective payment systems (PPS). Under a PPS, hospitals are paid a predetermined amount based on the clinical classification of each service they provide to beneficiaries. Beginning in 1983, in response to concern that certain cancer hospitals would experience payment reductions under such a system, Congress required the establishment of criteria under which 11 cancer hospitals are exempted from the inpatient PPS and receive payment adjustments under the outpatient PPS. Since these cancer hospitals were first designated in the early 1980s, cancer care and Medicare's payment system have changed significantly. Advances in techniques and drugs have increased treatment options and allowed for more localized delivery of care. Along with these developments, the primary setting for cancer care has shifted from the inpatient setting to the outpatient setting. In addition, Medicare's current payment system better recognizes the resource intensity of hospital care than the system put in place in 1983. While most hospitals are paid a predetermined amount based on the clinical classification of each service they provide to beneficiaries, Medicare generally pays these 11 cancer hospitals based on their reported costs, providing little incentive for efficiency. 
We found that if beneficiaries who received care at the 11 cancer hospitals had received inpatient and outpatient services at nearby PPS teaching hospitals, Medicare might have realized substantial savings in 2012. Specifically, we estimated inpatient savings of about $166 million; we calculated outpatient savings of about $303 million if forgone payment adjustments were returned to the Medicare Trust Fund. Until Medicare pays these cancer hospitals in a way that encourages greater efficiency, Medicare remains at risk for overspending. Rescinding unobligated funds: Congress may wish to consider permanently rescinding the entire $1.6 billion balance of the U.S. Enrichment Corporation (USEC) Fund, a revolving fund in the U.S. Treasury. As part of a 2001 GAO legal opinion, we determined that the USEC Fund was available for two purposes, both of which have been fulfilled: (1) environmental clean-up expenses associated with the disposition of depleted uranium at two specific facilities and (2) expenses of USEC privatization. Regarding the first authorized purpose, the construction of intended facilities associated with the disposition of depleted uranium has been completed. Regarding the second authorized purpose, USEC privatization was completed in 1998 when ownership of USEC was transferred to private investors. In an April 2014 report to Congress, the Department of Energy’s (DOE) National Nuclear Security Administration stated that the USEC Fund was one of two sources of funding that it was exploring to finance research, development, and demonstration of national nuclear security-related enrichment technologies. However, this is not one of the authorized purposes of the USEC Fund. Transparency in budget materials is important for informing congressional decisions, and DOE’s efforts to utilize USEC Fund monies instead of general fund appropriations diminish that transparency. The House of Representatives included language to permanently rescind the USEC Fund in H.R. 
4923, Energy and Water Development and Related Agencies Appropriations Act, which passed the House on July 10, 2014. However, the rescission was not included in Public Law 113-235, Consolidated and Further Continuing Appropriations Act, 2015. As of March 2015, legislation containing a similar rescission had not been introduced in the 114th Congress. Re-examining the appropriate size of the Strategic Petroleum Reserve: DOE should assess the appropriate size of the Strategic Petroleum Reserve (SPR) to determine whether excess crude oil could be sold to fund other national priorities. The United States holds the SPR so that it can release oil to the market during supply disruptions to protect the U.S. economy from damage. After decades of generally falling U.S. crude oil production, technological advances have contributed to increasing U.S. production. Monthly crude oil production has increased by almost 68 percent from 2008 through April 2014, and increases in production in 2012 and 2013 were the largest annual increases since the beginning of U.S. commercial crude oil production in 1859, according to the Energy Information Administration (EIA). As of September 2014, the reserve had 106 days of imports, which DOE estimated was valued at about $45 billion as of December 2014. In addition, as of September 2014, private industry held reserves of 141 days. As a member of the International Energy Agency, the United States is required to maintain public and private reserves of at least 90 days of net imports and to release these reserves and reduce demand during oil supply disruptions. We found in September 2014 that DOE had taken steps to assess aspects of the SPR but had not recently reexamined its size. Without such a reexamination, DOE cannot be assured that the SPR is holding an appropriate amount of crude oil. 
If, for example, DOE found that 90 days of imports was an appropriate size for the SPR, it could sell crude oil worth $6.7 billion and use the proceeds to fund other national priorities. In addition, by reducing the SPR to 90 days, DOE may be able to reduce its operating costs by about $25 million per year. DOE concurred with our recommendation, stating that a broad, long-range review of the SPR is needed and that it has initiated a process for conducting a comprehensive re-examination of the appropriate size of the SPR. In addition to the 66 new actions identified for this year's annual report, we have continued to monitor the progress that executive branch agencies or Congress have made in addressing the issues we identified in our 2011-2014 annual reports. The executive branch and Congress have made progress in addressing a number of the approximately 440 actions we previously identified (fig. 1). In total, as of March 6, 2015, the date we completed our audit work, we found that overall 169 (37 percent) were addressed, 179 (39 percent) were partially addressed, and 90 (20 percent) were not addressed. An additional 46 actions have been assessed as addressed over the past year; these include 13 actions identified in 2011, 14 actions identified in 2012, 11 actions identified in 2013, and 8 identified in 2014. Executive branch and congressional efforts from fiscal years 2011 through 2014 have resulted in over $20 billion in realized cost savings to date, with another approximately $80 billion in additional benefits projected to be accrued through 2023. The examples below illustrate the progress that has been made over the last 4 years. Combat Uniforms: In our 2013 annual report, we found that DOD's fragmented approach could lead to increased risk on the battlefield for military personnel and increased development and acquisition costs. 
In response, DOD developed and issued guidance on joint criteria to help ensure that future service-specific uniforms will provide equivalent levels of performance and protection. In addition, a provision in the National Defense Authorization Act for Fiscal Year 2014 established as policy that the Secretary of Defense shall eliminate the development and fielding of service-specific combat and camouflage utility uniforms in order to adopt and field common uniforms for specific environments to be used by all members of the armed forces. Most recently, the Army chose not to introduce a new family of camouflage uniforms into its inventory, in part because of this legislation, resulting in a cost avoidance of about $4.2 billion over 5 years. Employment and Training: Congress and executive branch agencies have taken actions to help address the proliferation of certain employment programs and improve the delivery of benefits. Specifically, in June 2012, we reported on 45 programs administered by nine federal agencies that supported employment for people with disabilities and found these programs were fragmented and often provided similar services to similar populations. The Workforce Innovation and Opportunity Act, enacted in July 2014, eliminated three programs that supported employment for people with disabilities, including the Veterans’ Workforce Investment Program, administered by the Department of Labor, and the Migrant and Seasonal Farmworker Program and Projects with Industry, administered by the Department of Education. In addition, the Office of Management and Budget (OMB) worked with executive agencies to propose consolidating or eliminating two other programs, although Congress did not take action and both programs continued to receive funding. 
The Workforce Innovation and Opportunity Act also helped to promote efficiencies for some of the 47 employment and training programs that support a broader population (including people with and without disabilities), which we reported on in 2011. In particular, this law requires states to develop a unified state plan that covers all designated core programs in order to receive certain funding. As a result, states' implementation of the requirement may enable them to increase administrative efficiencies in employment and training programs—a key objective of our prior recommendations. In addition, the House Budget Resolution for fiscal year 2016 called for streamlining and consolidating federal job training programs and empowering states with the flexibility to tailor funding and programs to the specific needs of their workforce, consistent with our recommendations in this area. Farm Program Payments: We reported in our 2011 annual report that Congress could save up to $5 billion annually by reducing or eliminating direct payments to farmers. These are fixed annual payments based on a farm's history of crop production. Farmers received them regardless of whether they grew crops and even in years of record income. Direct payments were expected to be transitional when first authorized in 1996, but subsequent farm bills continued these payments. Congress passed the Agricultural Act of 2014, which eliminated direct payments to farmers and should save approximately $4.9 billion annually from fiscal year 2015 through fiscal year 2023, according to the Congressional Budget Office. Although Congress and executive branch agencies have made progress toward addressing the actions we have identified, further steps are needed to fully address the remaining actions, as shown in table 3. More specifically, 57 percent of the actions addressed to executive branch agencies and 66 percent of the actions addressed to Congress identified in our 2011-2014 reports remain partially or not addressed. 
As our work has shown, committed leadership is needed to overcome the many barriers to working across agency boundaries, such as agencies' concerns about protecting jurisdiction over missions and control over resources or incompatible procedures, processes, data, and computer systems. Without increased or renewed leadership focus, opportunities will be missed to improve the efficiency and effectiveness of programs and save taxpayers' dollars. In our 2013 annual report, we reported that federal agencies could achieve significant cost savings annually by expanding and improving their use of strategic sourcing—a contracting process that moves away from numerous individual procurement actions to a broader aggregated approach. In particular, DOD, DHS, DOE, and VA accounted for 80 percent of the $537 billion in federal procurement spending in fiscal year 2011, but reported managing about 5 percent, or $25.8 billion, through strategic sourcing efforts. In contrast, leading commercial firms leverage buying power by strategically managing 90 percent of their spending—achieving savings of 10 percent or more of total procurement costs. While strategic sourcing may not be suitable for all procurement spending, we reported that a reduction of 1 percent from procurement spending at these agencies would equate to over $4 billion in savings annually—an opportunity also noted in the House Budget Resolution for fiscal year 2016. However, a lack of clear guidance on metrics for measuring success has hindered the management of ongoing strategic sourcing efforts across the federal government. Since our 2013 report, OMB has made progress by issuing guidance on calculating savings for government-wide strategic sourcing contracts, and in December 2014 it issued a memorandum on category management that, among other things, identifies federal spending categories suitable for strategic sourcing. 
These categories cover some of the government’s largest spending categories, including information technology and professional services. According to OMB, these categories accounted for $277 billion in fiscal year 2013 federal procurements. This level of spending suggests that by using smarter buying practices the government could realize billions of dollars in savings. In addition, the administration has identified expanded use of high-quality, high-value strategic sourcing solutions as one of its cross-agency priority goals, which are a limited set of outcome-oriented, federal priority goals. However, until OMB sets government-wide goals and establishes metrics, the government may miss opportunities for billions in cost savings through strategic sourcing. Our work on defense has highlighted opportunities to improve efficiencies, reduce costs, and address overlapping and potentially duplicative services that result from multiple entities providing the same service, including the following examples. Combatant Command Headquarters Costs: Our body of work has raised questions about whether DOD’s efforts to reduce headquarters overhead will result in meaningful savings. In 2013, the Secretary of Defense directed a 20 percent cut in management headquarters spending throughout DOD, to include the combatant commands and service component commands. In June 2014, we found that mission and headquarters-support costs for the five geographic combatant commands and their service component commands we reviewed more than doubled from fiscal years 2007 through 2012, to about $1.7 billion. We recommended that DOD more systematically evaluate the sizing and resourcing of its combatant commands. If the department applied the 20 percent reduction in management headquarters spending to the entire $1.7 billion DOD used to operate and support the five geographic combatant commands in fiscal year 2012, we reported that DOD could achieve up to an estimated $340 million in annual savings. 
Electronic Warfare: We reported in 2011 that all four military services in DOD had been separately developing and acquiring new airborne electronic attack systems and that spending on new and updated systems was projected to total more than $17.6 billion during fiscal years 2007-2016. While the department has taken steps to better inform its investments in airborne electronic attack capabilities, it has yet to assess its plans for developing and acquiring two new expendable jamming decoys to determine if these initiatives should be merged. More broadly, we identified multiple weaknesses in the way DOD acquires weapon systems and the actions that are needed to address these issues, which we recently highlighted in our high-risk series update in February 2015. For example, further progress must be made in tackling the incentives that drive the acquisition process and its behaviors, applying best practices, attracting and empowering acquisition personnel, reinforcing desirable principles at the beginning of programs, and improving the budget process to allow better alignment of programs and their risks and needs. The House Budget Resolution for fiscal year 2016 encourages a continued review to improve the affordability of defense acquisitions. Addressing the issues that we have identified could help DOD improve the returns on its $1.4 trillion investment in major weapon systems and find ways to deliver capabilities for less than it has in the past. The federal government annually invests more than $80 billion on information technology (IT). The magnitude of these expenditures highlights the importance of avoiding duplicative investments to better ensure the most efficient use of resources. Opportunities remain to reduce or better manage duplication and the cost of government operations in critical IT areas, many of which require agencies to work together to improve systems, including the following examples. 
Information Technology Investment Portfolio Management: To better manage existing IT systems, in March 2012 OMB launched the PortfolioStat initiative. PortfolioStat requires agencies to conduct an annual, agency-wide review of their IT portfolios to reduce commodity IT spending and demonstrate how their IT investments align with their missions and business functions, among other things. In 2014, we found that while the 26 federal agencies required to participate in PortfolioStat had made progress in implementing OMB’s initiative, weaknesses existed in agencies’ implementation of the initiative, such as limitations in the Chief Information Officer’s authority. In the President’s Fiscal Year 2016 Budget submission, the administration proposes to use PortfolioStat to drive efficiencies in agencies’ IT programs. As noted in our recent high-risk series update, we have made more than 60 recommendations to improve OMB and agencies’ implementation of PortfolioStat and provide greater assurance that agencies will realize the nearly $6 billion in savings they estimated they would achieve through fiscal year 2015. Federal Data Centers: In September 2014, we found that consolidating federal data centers would provide an opportunity to improve government efficiency and achieve cost savings and avoidances of about $5.3 billion by fiscal year 2017. Although OMB has taken steps to identify data center consolidation opportunities across agencies, weaknesses exist in the execution and oversight of the consolidation efforts. Specifically, we reported that many agencies are not fully reporting their planned savings to OMB as required; GAO estimates that the savings have been underreported to OMB by approximately $2.2 billion. It will continue to be important for agencies to complete their inventories and implement their plans for consolidation to better ensure continued progress toward OMB’s planned consolidation, optimization, and cost-savings goals.
Information Technology Operations and Maintenance: Twenty-seven federal agencies plan to spend about $58 billion—almost three-quarters of the overall $79 billion budgeted for federal IT in fiscal year 2015—on the operations and maintenance of legacy investments. Given the magnitude of these investments, it is important that agencies effectively manage them to better ensure the investments (1) continue to meet agency needs, (2) deliver value, and (3) do not unnecessarily duplicate or overlap with other investments. Accordingly, OMB developed guidance that calls for agencies to analyze (via operational analysis) whether such investments are continuing to meet business and customer needs and are contributing to meeting the agency’s strategic goals. In our 2013 annual report, we reported that agencies did not conduct such an analysis on 52 of the 75 major existing information technology investments we reviewed. As a result, there was increased potential for these information technology investments in operations and maintenance—totaling $37 billion in fiscal year 2011—to result in waste and duplication. To avoid wasteful or duplicative investments in operations and maintenance, we recommended that agencies analyze all information technology investments annually and report the results of their analyses to OMB. Agencies have made progress in performing some operational analyses; however, until the agencies fully implement their policies and ensure complete and thorough operational analyses are being performed on their multibillion-dollar operational investments, there is increased risk that these agencies will not know whether these investments fully meet their intended objectives, therefore increasing the potential for waste and duplication.
Geospatial Investments: In a 2013 report, we found that 31 federal departments and agencies invested billions of dollars to collect, maintain, and use geospatial information—information linked to specific geographic locations that supports many government functions, such as maintaining roads and responding to natural disasters. We found that federal agencies had not effectively implemented policies and procedures that would help them identify and coordinate geospatial data acquisitions across the government, resulting in duplicative investments. In a 2015 report, we reported that federal agencies had made progress in implementing geospatial data-related policies and procedures. However, critical items remained incomplete, such as coordinating activities with state governments, which also use a variety of geospatial datasets—including address data and aerial imagery—to support their missions. We found that a new initiative to create a national address database could potentially result in significant savings for federal, state, and local governments. To foster progress in developing such a national database, we suggested that Congress consider assessing existing statutory limitations on address data. We also recommended that the interagency coordinating body for geospatial information (1) establish subcommittees and working groups to assist in furthering a national address database and (2) identify discrete steps to further a national imagery program benefitting governments at all levels. Finally, we recommended that the Director of OMB require agencies to report on their efforts to implement policies and procedures before making new investments in geospatial data. OMB generally agreed with this recommendation. In addition, in March 2015, the Geospatial Data Act of 2015 was introduced and includes provisions to improve oversight and help reduce duplication in the management of geospatial data, consistent with our recommended actions. 
Fully addressing the actions in our two reports could help reduce duplicative investments and the risk of missing opportunities to jointly acquire data, potentially saving millions of dollars. The federal IT acquisition reforms enacted in December 2014 reinforced a number of the actions that we have recommended to address IT management issues. The law establishes that the Chief Information Officer in each agency has a significant role in the decision processes for planning, programming, management, governance, and oversight related to information technology, as well as approval of IT budget requests. In addition, the law codifies federal data center consolidation, with an emphasis on annual reporting of cost savings and detailed metrics, and OMB’s PortfolioStat process, with a focus on reducing duplication and achieving consolidation and cost savings. If effectively implemented, this legislation should improve the transparency and management of IT acquisitions and operations across the government. Over the years, we have identified a number of actions that have the potential for sizable cost savings through improved fiscal oversight in the Medicare and Medicaid programs. For example, CMS could save billions of dollars by improving the accuracy of its payments to Medicare Advantage programs, such as through methodology adjustments to account for diagnostic coding differences between Medicare Advantage and traditional Medicare. In addition, we found that federal spending on Medicaid demonstrations could be reduced by billions of dollars if HHS were required to improve the process for reviewing, approving, and making transparent the basis for spending limits approved for Medicaid demonstrations. In particular, our work between 2002 and 2014 has shown that HHS approved several demonstrations without ensuring that they would be budget neutral to the federal government.
To address this issue, we suggested that Congress could require the Secretary of Health and Human Services to improve the Medicaid demonstration review process, through steps such as improving the review criteria, better ensuring that valid methods are used to demonstrate budget neutrality, and documenting and making clear the basis for the approved limits. We concluded in August 2014 that HHS’s approval of $778 million of hypothetical costs (i.e., expenditures the state could have made but did not) in the Arkansas demonstration spending limit and the department’s waiver of its cost-effectiveness requirement are further evidence of our long-standing concerns that HHS is approving demonstrations that may not be budget-neutral. HHS’s approval of the Arkansas demonstration suggests that the Secretary may continue to approve section 1115 Medicaid demonstrations that raise federal costs, inconsistent with the department’s policy of budget neutrality. We maintain that enhancing the process HHS uses to demonstrate budget neutrality of its demonstrations could save billions in federal expenditures. In our February 2015 high-risk series update, we reported that while CMS had taken positive steps to improve Medicare and Medicaid oversight in recent years, in several areas it had yet to address some issues and recommendations, and improper payment rates have remained unacceptably high. We reported that to achieve and demonstrate reductions in the estimated $60 billion in Medicare improper payments in 2014, CMS should fully exercise its authority related to strengthening its provider and supplier enrollment provisions and address our open recommendations related to prepayment and postpayment claims review activities.
Similarly, in the area of Medicaid, for which the federal share of estimated improper payments was $17.5 billion in 2014, we have made recommendations targeted at (1) improving the completeness and reliability of key data needed for ensuring effective oversight, (2) implementing effective program integrity processes for managed care, (3) ensuring clear reporting of overpayment recoveries, and (4) refocusing efforts on program integrity approaches that are cost-effective. These recommendations, if effectively implemented, could improve program management, help reduce improper payments in these programs, and achieve cost savings. Over the last 4 years, our work identified multiple opportunities for the government to increase revenue collections. For example, in 2014, we identified three actions that Congress could authorize that could increase tax revenue collections from delinquent taxpayers by hundreds of millions of dollars over a 5-year period: limiting issuance of passports to applicants, levying payments to Medicaid providers, and identifying security clearance applicants. For example, Congress could consider requiring the Secretary of State to prevent individuals who owe federal taxes from receiving passports. We found that in fiscal year 2008, passports were issued to about 16 million individuals; about 1 percent of these collectively owed more than $5.8 billion in unpaid federal taxes as of September 30, 2008. According to a 2012 Congressional Budget Office estimate, the federal government could save about $500 million over a 5-year period by revoking or denying passports to those with certain federal tax delinquencies. We have also identified opportunities to implement program benefit offsets, in which certain program benefits for individuals are reduced in recognition of other benefits received.
Examples include the following: Social Security Offsets: In our 2011 annual report, we reported that the Social Security Administration (SSA) needs data from state and local governments on retirees who receive pensions from employment not covered under Social Security to better enforce offsets and ensure benefit fairness. In particular, SSA needs this information to fairly and accurately apply the Government Pension Offset, which generally applies to spouse and survivor benefits, and the Windfall Elimination Provision, which applies to retired worker benefits. Social Security’s Government Pension Offset and Windfall Elimination Provision take noncovered employment into account when calculating Social Security benefits. While information on receipt of pensions from noncovered employment is available for federal pension benefits from the federal Office of Personnel Management, it is not available to SSA for many state and local pension benefits. The President’s Fiscal Year 2016 Budget submission re-proposed legislation that would require state and local governments to provide information on their noncovered pension payments to SSA so that the agency can apply the Government Pension Offset and Windfall Elimination Provision. The proposal includes funds for administrative expenses, with a portion available to states to develop a mechanism to provide this information. Also, we continue to suggest that Congress consider giving the Internal Revenue Service the authority to collect the information that SSA needs to administer these offsets. Providing information on the receipt of state and local noncovered pension benefits to SSA could help the agency more accurately and fairly administer the Government Pension Offset and Windfall Elimination Provision and could result in an estimated $2.4 billion to $6.5 billion in savings over 10 years if enforced both retrospectively and prospectively.
If Social Security enforced the offsets only prospectively, the overall savings still would be significant. Disability and Unemployment Benefits: In our 2014 annual report, we found that 117,000 individuals received concurrent cash benefit payments in fiscal year 2010 from the Disability Insurance and Unemployment Insurance programs totaling more than $850 million because current law does not preclude the receipt of overlapping benefits. Individuals may be eligible for benefit payments from both Disability Insurance and Unemployment Insurance due to differences in the eligibility requirements; however, in such cases, the federal government is replacing a portion of lost earnings not once, but twice. The President’s Fiscal Year 2016 Budget submission proposes to eliminate these overlapping benefits, and during the 113th Congress, bills had been introduced in both the U.S. House of Representatives and the Senate containing language to reduce Disability Insurance payments to individuals for the months they collect Unemployment Insurance benefits. According to CBO, this action could save $1.2 billion over 10 years in the Social Security Disability Insurance program. Congress should consider passing legislation to offset Disability Insurance benefit payments for any Unemployment Insurance benefit payments received in the same period. Table 4 highlights some of our suggested actions within these and other areas that could result in tens of billions of dollars in cost-savings or revenue-enhancement opportunities, according to estimates from GAO, executive branch agencies, the Congressional Budget Office, or the Joint Committee on Taxation. For GAO’s most recent work on GPRAMA, see GAO, Government Efficiency and Effectiveness: Inconsistent Definitions and Information Limit the Usefulness of Federal Program Inventories, GAO-15-83 (Washington D.C.: Oct. 
31, 2014); Managing for Results: Selected Agencies Need to Take Additional Efforts to Improve Customer Service, GAO-15-84 (Washington D.C.: Oct. 24, 2014); and Managing for Results: Agencies’ Trends in the Use of Performance Information to Make Decisions, GAO-14-747 (Washington D.C.: Sept. 26, 2014). In addition, information on GAO’s work on GPRAMA can be found at http://www.gao.gov/key_issues/managing_for_results_in_government/issue_summary. A greater focus on expenditures and outcomes is essential to improving the efficiency and effectiveness of federal efforts. To help analysts and decision makers better assess the extent of fragmentation, overlap, and duplication, GAO has developed an evaluation and management guide (GAO-15-49SP), which is being released concurrently with our 2015 annual report. The guide includes two parts. Part one provides four steps for analysts—including federal, state, and local auditors; congressional staff; and researchers—to identify and evaluate instances of fragmentation, overlap, or duplication. Each step includes examples that illustrate how to implement suggested actions or consider different types of information. Part two provides guidance to help policymakers reduce or better manage fragmentation, overlap, and duplication. In recognition that the pervasiveness of fragmentation, overlap, and duplication may require attention beyond the program level, the guide also includes information on a number of options Congress and the executive branch may consider to address these issues government-wide. Some of these options are executive branch reorganization, special temporary commissions, interagency groups, automatic sunset provisions, and portfolio or performance-based budgeting. These options can be used independently or together to assist policymakers in evaluating and addressing fragmentation, overlap, and duplication beyond the programmatic level.
Congress can also use its power of the purse and oversight powers to incentivize executive branch agencies to act on our suggested actions and monitor their progress. In particular, the Senate Budget Resolution for fiscal year 2016 directs committees to review programs and tax expenditures within their jurisdiction for waste, fraud, abuse, or duplication and to consider the findings from our past annual reports. Also, the accompanying report for the House Budget Resolution for fiscal year 2016 proposes that the Department of Justice (DOJ) streamline grants into three categories—first responder, law enforcement, and victims—which is consistent with our prior work recommending that DOJ better target its grant resources. The resolution also highlights a number of the issues presented in our annual reports (including the multiple programs that support Science, Technology, Engineering, and Mathematics education, housing assistance, homeland security preparedness grants, and green building initiatives); notes the number of programs that will need to be reauthorized in fiscal year 2016; and states that our findings should result in programmatic changes in both authorizing statutes and program funding levels. Congressional use of our findings in its decision making for the identified areas of fragmentation, overlap, and duplication will send an unmistakable message to agencies that Congress considers these issues a priority. Through its budget, appropriations, and oversight processes, Congress can also shift the burden to the agencies to demonstrate the effectiveness of their programs to justify continued funding. We will continue to conduct further analysis to look for additional or emerging instances of fragmentation, overlap, and duplication and opportunities for cost savings or revenue enhancement. Likewise, we will continue to monitor developments in the areas we have already identified in this series. 
We stand ready to assist this and other committees in further analyzing the issues we have identified and evaluating potential solutions. Chairman Johnson, Ranking Member Carper, and Members of the Committee, this concludes my prepared statement. I would be pleased to answer questions. For further information on this testimony or our April 14, 2015, reports, please contact Orice Williams Brown, Managing Director, Financial Markets and Community Investment, who may be reached at (202) 512-8678 or [email protected], and A. Nicole Clowers, Director, Financial Markets and Community Investment, who may be reached at (202) 512-8678 or [email protected]. Contact points for the individual areas listed in our 2015 annual report can be found at the end of each area in GAO-15-404SP. Contact points for our Congressional Relations and Public Affairs offices may be found on the last page of this statement. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

As the fiscal pressures facing the government continue, so too does the need for executive branch agencies and Congress to improve the efficiency and effectiveness of government programs and activities. Such opportunities exist throughout government. To bring these opportunities to light, Congress included a provision in statute for GAO to annually identify federal programs, agencies, offices, and initiatives (both within departments and government-wide) that are fragmented, overlapping, or duplicative. As part of this work, GAO also identifies additional opportunities to achieve cost savings or enhanced revenue collection.
GAO's 2015 annual report is its fifth in this series (GAO-15-404SP). This statement discusses (1) new opportunities GAO identifies in its 2015 report; (2) the status of actions taken to address the opportunities GAO identified in its 2011-2014 reports; and (3) existing and new tools available to help executive branch agencies and Congress reduce or better manage fragmentation, overlap, and duplication. To identify what actions exist to address these issues and take advantage of opportunities for cost savings and enhanced revenues, GAO reviewed and updated prior work, including recommendations for executive action and matters for congressional consideration. GAO's 2015 annual report identifies 66 new actions that executive branch agencies and Congress could take to improve the efficiency and effectiveness of government in 24 areas. GAO identifies 12 new areas in which there is evidence of fragmentation, overlap, or duplication. For example, GAO suggests that Congress repeal the statutorily required US Family Health Plan—a decades-old component of the Department of Defense's (DOD) Military Health System—because it duplicates the efforts of DOD's managed care support contractors by providing the same benefit to military beneficiaries. GAO also identifies 12 areas where opportunities exist either to reduce the cost of government operations or enhance revenue collections. For example, GAO suggests that Congress update the way Medicare has paid certain cancer hospitals since 1983, which could save about $500 million per year. The executive branch and Congress have made progress in addressing the approximately 440 actions government-wide that GAO identified in its past annual reports. Overall, as of March 6, 2015, 37 percent of these actions were addressed, 39 percent were partially addressed, and 20 percent were not addressed.
Executive branch and congressional efforts to address these actions over the past 4 years have resulted in over $20 billion in financial benefits, with about $80 billion more in financial benefits anticipated in future years from these actions. Although progress has been made, fully addressing all the remaining actions identified in GAO's annual reports could lead to tens of billions of dollars of additional savings. Addressing fragmentation, overlap, and duplication within the federal government is challenging due to, among other things, the lack of reliable budget and performance information. If fully and effectively implemented, the GPRA Modernization Act of 2010 and the Digital Accountability and Transparency Act of 2014 could help to improve performance and financial information. In addition, GAO has developed an evaluation and management guide (GAO-15-49SP), which is being released concurrently with the 2015 annual report. This guide provides a framework for analysts and decision makers to identify and evaluate instances of fragmentation, overlap, and duplication and consider options for addressing or managing such instances.
The United Nations comprises six principal bodies: the General Assembly, Security Council, Economic and Social Council, Trusteeship Council, International Court of Justice, and the Secretariat. The United Nations system also encompasses funds and programs, such as UNDP, and specialized agencies, such as UNESCO. These funds, programs, and specialized agencies have their own governing bodies and budgets, but follow the guidelines of the UN Charter. Article 101 of the UN Charter calls for staff to be recruited on the basis of “the highest standards of efficiency, competence, and integrity” as well as from “as wide a geographical basis as possible.” Each UN agency also has its own personnel policies, procedures, and staff rules. The Secretariat and several specialized agencies have quantitative formulas that establish targets for equitable geographical representation in designated professional positions. Other agencies have negotiated informal targets with the United States, while some agencies do not have formal or informal targets. Agencies with formal quantitative targets for equitable representation do not apply these targets to all professional positions. Instead, these organizations set aside positions that are subject to geographic representation from among the professional and senior positions performing core agency functions, funded from regular budget resources. Positions that are exempted from being counted geographically include linguist and peacekeeping positions, and those funded by extra-budgetary resources. In addition, UN agencies employ staff in short-term positions that also are not geographically counted. Nongeographic staff members include employees tied to specific projects (L-staff), employees in assignments of limited duration (ALD) contracted for 4 years or less, temporary employees contracted for less than 1 year, and gratis personnel, such as JPOs, who are funded by member states.
In addition, these organizations utilize various nonstaff positions, such as contractors and consultants. Of the five agencies we reviewed, three—the Secretariat, IAEA, and UNESCO—have designated positions subject to geographic distribution. The Secretariat and UNESCO have established formulas to determine member states’ targets for equitable representation, which consider three factors: membership status, financial contribution, and population size. IAEA informally calculates a member state to be underrepresented if its geographic representation is less than half of its percent contribution to the budget. Using this method, we calculated a U.S. target. UNHCR has not established a quantitative formula or positions subject to geographic representation, but has agreed to an informal target for equitable U.S. representation. UNDP generally follows the principle of equitable geographic representation, but has not adopted formal or informal targets. The Department of State is the U.S. agency primarily responsible for leading U.S. efforts toward achieving equitable U.S. representation in UN organizations. In doing so, State cooperates with at least 17 federal agencies that have interests in specific UN organizations. A 1970 executive order assigns the Secretary of State responsibility for leading and coordinating the federal government’s efforts to increase and improve U.S. participation in international organizations through transfers and details for federal employees. The order further calls for each agency in the executive branch to cooperate “to the maximum extent feasible” to promote details and transfers through measures such as (1) notifying well-qualified agency employees of vacancies in international organizations and (2) providing international organizations with detailed assessments of the qualifications of employees being considered for specific positions.
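The IAEA's informal rule described above (a member state is underrepresented if its share of geographic staff positions is less than half of its percentage contribution to the budget) amounts to a simple threshold test, which can be sketched as follows. The function name and the figures used are illustrative assumptions, not IAEA data.

```python
# Minimal sketch of the informal IAEA underrepresentation rule described
# in the text: a member state is considered underrepresented if its share
# of geographic staff positions falls below half of its budget
# contribution share. All figures below are hypothetical.

def is_underrepresented(staff_share_pct: float, budget_share_pct: float) -> bool:
    """Return True if the state's staff share is below half its budget share."""
    return staff_share_pct < 0.5 * budget_share_pct

# Hypothetical example: a state contributing 25 percent of the budget
# would need at least 12.5 percent of geographic positions.
print(is_underrepresented(11.0, 25.0))  # 11% is below the 12.5% threshold
```

Under this rule the implied minimum target is simply half the budget-contribution percentage, which is how a target for U.S. representation could be derived from the U.S. share of the agency's budget.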
In addition, under legislation enacted in 1991, the Secretary of State is required to report to Congress on whether each international organization with a geographic distribution formula is making “good faith steps to increase the staffing of United States citizens and has met its geographic distribution formula.” State’s Bureau of International Organization Affairs is responsible for implementing these requirements. While State is responsible for promoting and seeking to increase U.S. representation in the UN, the UN entities themselves are ultimately responsible for hiring their employees and achieving equitable representation. We previously reviewed U.S. representation in UN organizations and found that, between 1992 and 2001, Americans were not equitably represented in the UN system, given the agencies’ own targets. In addition, the UN agencies lacked long-range workforce planning strategies to improve the geographic distribution imbalance. We also reported that State’s efforts to improve U.S. representation in the UN system did not reflect its high priority status, particularly relative to other member countries. We recommended that the Secretary of State (1) develop a comprehensive strategy that specifies performance goals and time frames for achieving equitable representation of Americans in the UN System and include efforts to foster interagency coordination, (2) work with human resources directors of UN organizations to develop plans and strategies for achieving equitable geographic representation within specified time frames, and (3) provide copies of State’s annual report to Congress on UN progress to the heads of UN organizations for appropriate attention and action. State has subsequently implemented these recommendations, including adding a performance indicator on the UN’s employment of Americans to its performance and accountability documents. 
We also recommended that State develop guidelines defining its goal of obtaining an equitable share of senior-level and policy-making positions for U.S. citizens and that it use these guidelines to assess whether the United States is equitably represented in high-level positions in UN organizations. State did not agree with this final recommendation and has not implemented it. U.S. citizens are underrepresented at three of the five UN agencies we reviewed: IAEA, UNESCO, and UNHCR. Given projected staff levels, retirements, and separations for 2006-2010, these agencies need to hire more Americans than they have in recent years to meet their minimum targets for equitable U.S. representation in 2010. Relative to UN agencies’ formal or informal targets for equitable geographic representation, U.S. citizens are underrepresented at three of the five agencies we reviewed—IAEA, UNESCO, and UNHCR. U.S. citizens are equitably represented at the UN Secretariat, though at the lower end of its target range, while the fifth agency—UNDP—has not established a target for U.S. representation. U.S. citizens fill about 11 percent of UNDP’s professional positions. Table 1 provides information on U.S. representation at the five UN agencies as of 2005. Table 1 also shows that the percentage of U.S. citizens employed in nongeographic positions (or nonregular positions in the case of UNHCR and UNDP) is higher at IAEA, UNHCR, and UNDP, and lower at the Secretariat and UNESCO, than the percentage of geographic (or regular) positions held by U.S. citizens. The most notable difference is at IAEA, where the percentage of U.S. citizens employed in nongeographic positions is considerably higher than the percentage employed in geographic positions due to the high percentage of temporary, JPO, and consultant and contractor positions held by Americans. (See app. II for details on the composition of Americans in geographic and nongeographic positions.) As shown in table 2, U.S.
citizen representation in geographic positions in “all grades” between 2001 and 2005 has been declining at UNHCR and displays no clear trend at the other four UN agencies. U.S. representation in policy-making and senior-level positions increased at two agencies—IAEA and UNDP—and displayed no overall trend at the Secretariat, UNESCO, and UNHCR over the full five years. At the Secretariat, although no trend is indicated, U.S. representation has been decreasing in policy-making and senior-level positions since 2002. At UNESCO, the data for 2001 to 2004 did not reflect a trend, but the overall percentage of Americans increased in 2005, reflecting increased recruiting efforts after the United States rejoined UNESCO in 2003. At UNHCR, the representation of U.S. citizens in these positions grew steadily from 2001 through 2004, but declined in 2005. Regarding entry-level positions, U.S. representation in these positions increased at UNESCO, decreased at IAEA, UNHCR, and UNDP, and showed no trend at the Secretariat. (See app. III for more detailed information on the trends in geographic employment.) We estimate that each of the four agencies with geographic targets—the Secretariat, IAEA, UNESCO, and UNHCR—would need to hire U.S. citizens in greater numbers than they have in recent years to achieve their minimum targets by 2010, given projected staff levels, retirements, and separations; otherwise, with the exception of UNESCO, U.S. geographic representation will decline further. As shown in table 3, IAEA and UNHCR would need to more than double their current average hiring rates to achieve their targets for U.S. representation. The Secretariat could continue to meet its minimum geographic target for U.S. citizens if it increased its annual hiring of U.S. citizens from 20 to 23. UNESCO could achieve its minimum geographic target by increasing its current hiring average of 4.5 Americans to 6 Americans.
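The per-agency hiring figures above rest on simple projection arithmetic: annual hires must both replace departing U.S. staff and close the gap to the target share by the end of the projection period. A minimal sketch of that arithmetic follows; the function name, variable names, and sample inputs are hypothetical illustrations, not the report’s actual projection model (which is described in app. I):

```python
def annual_hires_needed(us_staff, total_staff, target_share,
                        us_attrition_per_year, years):
    """Estimate average annual U.S. hires needed so that U.S. staff
    reach target_share of total_staff after `years` years.

    Simplifying assumptions (not the report's app. I methodology):
    total staff and annual U.S. attrition stay constant over the period.
    """
    target_us_staff = target_share * total_staff
    gap = max(0.0, target_us_staff - us_staff)
    # Hires must replace departing U.S. staff and close the gap to target.
    return us_attrition_per_year + gap / years

# Hypothetical agency: 1,000 geographic posts, 40 held by U.S. citizens,
# a 6 percent minimum target, and 5 U.S. departures per year over 5 years.
print(annual_hires_needed(40, 1000, 0.06, 5, 5))  # → 9.0
```

Under these assumptions the agency would need 9 U.S. hires per year (5 to offset attrition plus 4 to close the 20-person gap), which mirrors the pattern in table 3: agencies far below target, such as IAEA and UNHCR, must hire at roughly double their recent rates.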
Although the fifth agency, UNDP, does not have a target, it would have to increase its annual hiring average of U.S. citizens from 17.5 to 26 in order to maintain its current ratio of U.S. regular professional staff to total agency regular professional staff. If current hiring levels are continued through 2010, two of the five agencies—IAEA and UNHCR—would fall substantially below their minimum targets. In only one agency—UNESCO—would the percentage of geographic positions filled by U.S. citizens increase under current hiring levels, due in part to the recent increased hiring of U.S. citizens. (See app. I for a discussion of our hiring projection methodology.) A combination of barriers, including some common factors as well as agency-specific factors, adversely affects recruitment and retention of professional staff, including Americans, at each of the five UN agencies. Barriers common to most UN agencies we reviewed include nontransparent human resource practices, a limited number of positions open to external candidates, lengthy hiring processes, comparatively low or unclear compensation, required mobility and rotation, and limited U.S. government support. These barriers combine with distinct agency-specific factors to impede recruitment and retention. For example, candidates serving in professional UN positions funded by their member governments are more likely to be hired by the Secretariat than those who take the Secretariat’s entry-level exam; however, the United States has not funded such positions at the Secretariat. IAEA has difficulty attracting U.S. employees because the pool of American nuclear specialists is decreasing. At UNESCO, U.S. representation is below the negotiated target, in part, because the United States was not a member for 19 years. UNHCR has difficulty retaining staff, particularly at the mid-career level, because it has more hardship duty stations than any other agency.
UNDP faces several barriers that are also present at other UN agencies, such as limited U.S. government support, and is also seeking to increase the hiring of senior staff from developing countries. We identified six barriers that commonly affect U.S. representation in the UN agencies we reviewed, although often to differing degrees. Nontransparent Human Resource Practices: According to Americans employed at UN organizations, a key barrier to American representation across the five UN agencies we reviewed was the lack of transparent human resource management practices. For example, some UN managers circumvent the competitive hiring process by employing individuals on short-term contracts—positions that are not vetted through the regular, competitive process—for long-term needs. In addition, some Americans at each of the agencies, except IAEA, said that “cronyism” exists and that certain individuals hire only their fellow nationals. Others said that the perception of U.S. overrepresentation hinders managers from hiring or promoting U.S. citizens regardless of their skills. In response, UN human resource officials expressed concern about U.S. employees’ perception of “cronyism” and lack of transparent practices. UN human resource officials said that hiring processes include rigorous reviews involving the personnel division; managers; and appointment, promotion, and review boards. However, the UN Secretary-General also acknowledged in a report to the General Assembly that management systems, including human resources, lacked transparency. Limited External Opportunities: Recruiting U.S. candidates is difficult because agencies offer a limited number of posts to external candidates. Each of the organizations we reviewed, except IAEA, advertises professional, or P-level, vacancies to current employees before advertising them externally in order to provide career paths for its staff and to motivate them.
Furthermore, the definition of external candidates used at the Secretariat, IAEA, UNESCO, and UNHCR is quite broad and may include current staff with temporary appointments, JPOs, former staff, or staff in other agencies in the UN Common System. In reviewing hiring data, we found that three of the five agencies—UNESCO, UNHCR, and UNDP—filled 50 percent or more of new appointments by promotions or other internal candidates rather than by hiring external candidates. (See fig. 1. For definitions of promotion, internal hire, and external hire, see app. IV.) IAEA fills a large percentage of its positions with external candidates because, in addition to not giving internal candidates hiring preference, the agency employs the majority of its staff members for 7 years or less. Although the data indicate that the Secretariat hires a significant percentage of external candidates, the Secretariat’s definition of “external candidates,” as described above, includes staff on temporary contracts and individuals who have previous experience working at the agency. Lengthy Hiring Process: For positions that are advertised externally, the agencies’ lengthy hiring processes can deter candidates from accepting UN employment. For example, a report from the Secretary-General states that the average hiring process is too slow, taking 174 days from the time a vacancy announcement is issued to the time a candidate is selected, causing some qualified applicants to accept jobs elsewhere. One American at UNESCO noted that his hiring process took about 9 months, while another said it took about 1 year. At UNHCR, even its “fast-track” system used to staff emergency situations takes 5 months, on average. Many Americans we interviewed concurred with the report’s sentiment, saying that it is difficult to plan a job move when there is a long delay between submitting an application and receiving an offer.
In March 2006, the Secretary-General proposed cutting the average recruitment time in half. Comparatively Low or Unclear Compensation: Comparatively low salaries and benefits that were not clearly explained were among the most frequently mentioned deterrents to UN employment for Americans. American employees we interviewed noted that UN salaries, particularly for senior and technical posts, are not comparable with U.S. government and private sector employment. The International Civil Service Commission also reported that remuneration across the UN common system is not competitive in the international labor market. When candidates consider current UN salaries in tandem with UN employee benefits, such as possible reimbursement for U.S. taxes and school tuition allowances through college, UN compensation may be more attractive. However, U.S. citizens employed at IAEA and UNESCO said that their agency did not clearly explain the benefits, or explained them only after the candidate accepted a position. Incomplete or late information hampers a candidate’s ability to decide in a timely manner whether a UN position is in his or her best interest. In addition, difficulty securing spousal employment can decrease family income and may also affect American recruitment, since many U.S. families have two wage earners. At many overseas UN duty stations, work permits can be difficult to obtain, the local economy may offer few employment opportunities, and knowledge of the local language may be required. In addition, Americans with whom we spoke said that an unemployed spouse might not be content to remain so and might prefer to return to the United States to continue a career. U.S. employees at IAEA (located in Vienna) noted that difficulty in securing spousal employment is a significant problem for recruiting and retaining U.S. professionals at their agency.
Required Mobility or Rotation: UNHCR and UNDP require their staff to change posts at least every 3 to 6 years, with the expectation that staff serve the larger portion of their careers in the field; the UN Secretariat and UNESCO are implementing similar policies. While IAEA does not require its employees to change posts, it generally only hires employees for 7 years or less. Such policies dissuade some Americans from accepting or staying in a UN position because of the disruptions to personal or family life such frequent moves can cause. Limited U.S. Government Support: At four of the five agencies we reviewed, all except IAEA, a number of American employees said that they did not receive U.S. government support during their efforts to obtain a UN job or to be promoted at the job they held. The U.S. government currently supports candidates applying for director-level, or higher, posts, and puts less emphasis on supporting candidates seeking lower-level professional posts. State said that only on an exceptional basis is assistance given in support of promotions because the U.S. government’s general policy is not to intervene in internal UN matters, such as promotions. State said the UN’s “code of conduct” makes it clear that it is improper for international civil servants to lobby or seek support from governments to obtain advancement for themselves and that governments should neither accede to such requests nor intervene in such matters. Although UN employees are international civil servants directly hired by UN agencies, some countries facilitate the recruitment of their nationals by referring qualified candidates, conducting recruitment missions, and sponsoring JPOs or Associate Experts.
At the entry level, the Secretariat hires an average of only 2 percent of the individuals invited to take its National Competitive Recruitment Exam (NCRE); in contrast, it hired an average of 65 percent of the Associate Experts sponsored by their national governments at the end of their tenure. However, the U.S. government did not sponsor any Associate Experts at the Secretariat between 2001 and 2005. In addition, a lack of career development opportunities affects retention. Our review of the Secretariat’s data shows that individuals who take the NCRE have a lower probability of being hired than do government-sponsored Associate Experts at the end of their tenure. Of the 3,398 individuals invited to take the NCRE each year, the UN Secretariat hired an average of 71 individuals, or 2 percent, per year from 2001 through 2004. Though U.S. citizens fare slightly better than the general population (the Secretariat hires an average of 4 percent of Americans invited to take the exam), the UN Secretariat hires an average of just 7 Americans through the NCRE each year. Employees hired from the exam fill geographic posts and count toward country representation. Human resource officials noted that individuals who are hired through the exam process may be on the roster for 1 year or more before being hired. Figure 2 shows the number of applicants between 2001 and 2004 at various stages of the exam, for all nationalities and for the United States. In contrast, the Secretariat hires an average of 83 individuals each year who have finished their tenure as Associate Experts. Given that donor countries together sponsor an average of 128 Associate Experts each year, 65 percent, on average, have been hired when they finish their tenure. However, individuals hired at the end of their Associate Expert service may or may not fill geographic posts. An average of 16 countries sponsor young professionals in this program each year.
The United States has not sponsored any Associate Experts at the Secretariat since at least 2001; therefore, no Americans have been hired in this manner between then and July 2006. The lack of career and promotion opportunities is one of the two most “demotivating” factors for UN employees, according to a 2005 survey of 5,320 UN staff. Fifteen of the 19 American employees we interviewed at the UN Secretariat also cited a lack of career development opportunities as a factor negatively affecting U.S. retention. Staff also mentioned that contract distinctions limit career development, as individuals with short-duration contracts have difficulty obtaining regular posts. Peacekeepers, for example, work under an assignment of limited duration that can last up to 4 years. Although they have actual experience working for the Secretariat, they are considered external candidates and cannot apply as internal candidates. Moreover, their time working in field posts does not count toward promotion eligibility. In recognition of this seeming inequity, the Secretary-General has proposed instituting a single contract type to expand career opportunities. Continuing U.S. underrepresentation at IAEA has been described by U.S. government officials as a “supply-side issue,” with the pool of American candidates with the necessary education and experience decreasing, as nuclear specialists are aging and few young people have entered that field. For those candidates who are qualified, IAEA may not be a particularly attractive place to work owing to its rotation policy. IAEA’s Director General reported that the recruitment of staff, particularly in the scientific and technical areas, is becoming increasingly difficult because the nuclear workforce is aging and retiring. Similarly, a discussion paper from DOE’s Brookhaven National Laboratory stated that American experts in the nuclear industry are aging and retiring while fewer U.S. citizens are seeking relevant technical degrees.
For example, according to the Nuclear Energy Institute, nearly half of nuclear industry employees are over age 47 and less than 8 percent of such employees are younger than age 32. The institute states further that over the next 5 years nuclear companies may lose an estimated 23,000 workers, representing 40 percent of all jobs in the sector. IAEA, as with all UN agencies, has a mandatory retirement age of 62, and according to State officials, the agency generally will not consider applicants above age 57 because they will not be able to complete the average 5-year contract. IAEA said it prefers to hire staff who can fulfill the normal 5-year appointment but recently hired a staff member who would reach the mandatory retirement age in 2 years. Disqualifying nuclear specialists over age 57 dramatically limits the already small pool of qualified Americans able to work at IAEA. For candidates who are qualified, IAEA may not be an attractive place to work owing to its rotation policy, particularly given that the agency tends to hire individuals at the mid-career level. American employees and U.S. and UN officials we interviewed cited IAEA’s 7-year rotation policy as a disincentive to recruiting and retaining staff. The agency usually offers international professionals a 3-year contract that can be extended up to 7 years. While IAEA is forthright about not being a “career” agency, the prospect of working only 3 to 7 years dissuades some Americans who are unsure if they can find meaningful employment at the end of their IAEA tenure. According to U.S. government officials who recruit candidates for IAEA, working at IAEA for a relatively short amount of time is not worth the risk to Americans already well-established in their careers. While the U.S.
government guarantees its civil servants reemployment rights after working with an international organization, federally contracted national laboratories have inconsistent reemployment policies, which can be negotiated on an individual basis. Private sector firms may not offer any expectation of reemployment. U.S. government agencies that do rehire employees may not make use of the IAEA experience or may offer a salary that does not compensate for the intervening years of work experience, according to U.S. officials. Moreover, regarding retirement, some Americans working at IAEA told us that U.S. government agencies do not count their years at IAEA toward their years in U.S. government service. In addition, individuals may have to give up their U.S. security clearance to work at IAEA, and a clearance can take more than a year to reinstate. The United States’ 19-year withdrawal from UNESCO contributed to its current underrepresentation. Increasing American representation in the future may be complicated by budget restrictions. The number of Americans employed at UNESCO declined during the 19 years that the United States was not a member. In 1984, the United States—accompanied by the United Kingdom in 1985—withdrew from the organization over concerns about the agency’s management and other issues. During the intervening years, in part because funding decreased considerably with the withdrawal of these two countries, UNESCO’s staff decreased in size by about one-third. When the United States left UNESCO in 1984, Americans comprised 9.6 percent of the organization’s geographic professional staff. When it rejoined in 2003, Americans comprised only 2.9 percent. By 2005 that number had increased to 4.1 percent—the third largest group of nationals UNESCO employed, although still below the minimum geographic target. Although UNESCO did employ American citizens during that time, it was not held to any geographic target for Americans because the United States was not a member.
UNESCO must hire Americans in greater numbers to meet its minimum target for U.S. representation, which may be difficult in part because UNESCO may have limited hiring in the future. Vacancies available to external candidates may decrease given current budget restrictions, as UNESCO has applied a zero-nominal-growth policy to its regular budget. The organization thus plans to limit hiring for regular budget positions—which include all geographic positions—to filling vacancies created by retirement and other attrition. The difficult conditions that accompany much of UNHCR’s work, coupled with the requirement to change duty stations every 4 years, cause attrition at the mid-career levels. Moreover, various human resource peculiarities, including the predominance of indefinite contracts and staff-in-between-assignments, complicate the staffing process. UNHCR’s requirement that employees change duty stations every 4 years was one of the most frequently cited barriers to retaining staff among the American employees we interviewed. UNHCR’s mission to safeguard the rights and well-being of refugees necessitates work in hardship and high-risk locations. As such, UNHCR has twice as many hardship duty stations as any other UN agency. At least one-third of its international professional staff works in posts where, in some cases, their family may not be allowed to accompany them. To alleviate the burden of serving in hardship posts, the majority of international professionals are expected to rotate between different categories of duty stations. However, a UN Joint Inspection Unit report found that the staffing system may not always allow staff to rotate out of the more difficult locations. For example, employees who serve in hardship locations, especially in Africa, are less likely to rotate to headquarters and other nonhardship locations than other staff.
Aside from possibly having to serve in hardship locations, moving frequently creates an unstable environment for staff and their families. UNHCR officials acknowledged that the organization faces a challenge in balancing its staff’s personal and career goals with UNHCR’s operational requirements. Several U.S. government officials noted that attrition among Americans has counteracted efforts UNHCR has made to hire U.S. citizens. For example, in 2004 and 2005, UNHCR hired 24 Americans, but in the same years 14 Americans left the agency, leaving a net gain of only 10 U.S. citizens. UNHCR’s policy to fill vacancies first with internal candidates, coupled with the reality that most employees have indefinite contracts, limits its external hiring, particularly given its number of staff-in-between-assignments. Given that UNHCR’s workforce requirements regularly expand and contract, the agency typically has a number of staff-in-between-assignments for whom it does not have assignments corresponding to their grades and skills. As of July 2006, UNHCR had 135 such staff. However, human resource officials said that some individuals have remained in between assignments for an extended period of time—some as long as 2 years. Because all staff-in-between-assignments have indefinite, rather than fixed-term, contracts, management has difficulty terminating those who refuse assignments or who lack needed skills, and the agency gives these staff placement preference over hiring external candidates. The priority given to placing staff-in-between-assignments limits the type of open external recruitment needed to ensure that UNHCR maintains an optimally skilled, dynamic, competitive, and gender-balanced workforce. In November 2003, to ensure that UNHCR adequately meets its workforce requirements, management created a policy to terminate the indefinite appointments of staff members who remained without a post for a protracted period.
UNHCR human resource officials said that new rules entering into force in September 2006 are intended to reduce the protracted period from 12-18 months to 6 months. As of July 2006, UNHCR had terminated one staff member who had remained in between assignments for an excessive period of time. Despite high mid-level attrition, UNHCR currently limits recruitment to entry-level positions, filling posts with candidates who must now pass an entry exam. Before introducing the exam in 2004, employees in JPO positions were allowed to apply to posts as internal candidates. However, individuals who have served as JPOs or in other temporary assignments now must pass the entry exam and be added to the roster of qualified candidates before being eligible to apply for regular staff positions. U.S. citizens employed at UNHCR expressed strong concern about this policy because UNHCR recently hired 67 percent of American JPOs into agency positions at the end of their JPO service. Having to take a test may increase the time it takes to get a post, as the exam is given only once a year, and could decrease JPO retention. UNHCR positions offered to external candidates will be further limited due to budgetary restrictions. As with UNESCO, UNHCR is planning to freeze hiring from the regular budget this year in order to limit the growth of the organization and realign the size of the workforce with the budget. One official estimated that there will be about 30 percent less recruitment this year because of the hiring freeze. Several barriers to increasing U.S. representation that are also present at other UN agencies are the leading factors at UNDP, according to American employees and other officials with whom we spoke. For example, many American UNDP employees told us that they did not receive support from the U.S. government during their hiring process or the course of their careers.
Several of these employees stated that their discussions with us were the first time they had been contacted by U.S. government officials during their UNDP careers, and that both they and the U.S. mission would benefit from increased communication. U.S. staff also discussed UNDP’s nontransparent hiring and personnel management policies, and the limited opportunities for external candidates, as barriers to increasing U.S. representation. In addition, UNDP’s Executive Board has traditionally managed the organization with the understanding that its staff be equally represented from northern (mostly developed) and southern (mostly developing) countries, and has recently focused on improving the north-south balance of staff at management levels by increasing the hiring of candidates from southern countries. While this is a worthy goal, some American staff at UNDP commented that the organization’s recent attention to increasing the hiring of senior staff from southern countries could increase the difficulty for American candidates seeking these positions. A senior UNDP official stated that he did not see the increased hiring of U.S. nationals (to maintain current representation levels) as a realistic and attainable target, given the above focus as well as profile, donor, program country, gender, and diversity considerations. State targets its recruitment efforts for senior and policy-making UN positions, and, although it is difficult to directly link State’s efforts to UN hiring decisions, U.S. representation in these positions has either improved or displayed no trend in the five UN agencies we reviewed. State also has increased its efforts to improve overall U.S. representation, including adding staff to its UN employment office and increasing coordination with other U.S. agencies; however, despite these efforts, U.S. representation in entry-level positions has declined or did not reflect a trend in four of the five UN agencies we reviewed.
Additional steps to target potential pools of candidates for these positions include maintaining a roster of qualified American candidates; expanding marketing and outreach activities; and conducting an assessment of the costs and benefits of sponsoring JPOs. In 2001, we reported that State focused its recruiting efforts for U.S. citizen employment at UN agencies on senior-level and policy-making positions, and State officials told us that this focus has continued. Although it is difficult to directly link State’s efforts to UN hiring decisions, the percentage of U.S. representation in senior and policy-making positions either increased or did not display a trend at each of the five UN agencies we reviewed between 2001 and 2005 (see fig. 3). At all five UN agencies, the percentage of Americans employed in senior and policymaking positions was higher in 2005 than in 2001, but the trends and magnitude varied somewhat across the agencies, as figure 3 shows. The U.S. share of senior and policy-making positions has increased at IAEA and UNDP. The U.S. share of these positions at the other three UN agencies displayed no trend over the period. At the Secretariat, the U.S. share of senior and policymaking positions was slightly higher in 2005 than in 2001, although the number and percentage of Americans in these positions has decreased since 2002. At UNHCR, the number and percentage of U.S. citizens in these positions grew between 2001 and 2004, but declined in 2005. At UNESCO, the data for 2001 to 2004 did not reflect a trend, but the percentage of Americans increased in 2005. Overall, Americans hold over 10 percent of senior and policymaking positions at four of the five agencies we reviewed. (App. III contains more detailed information on U.S. citizens employed in all professional positions, by grade, at the five UN agencies.) A U.S. 
mission official told us that the mission focuses its efforts on vacancies for critical senior jobs because of the influence that these positions have within the organization. If an American makes the short list for one of these positions, the U.S. Ambassador or another high-ranking U.S. mission official contacts the UN agency on behalf of that candidate. Officials at one of the U.S. missions we visited told us that the ambassador called UN agency officials on behalf of American candidates almost weekly. Several senior UN agency positions have recently been filled by Americans, including the UN Under-Secretary-General for Management and the Executive Director of UNICEF. UNESCO also recently hired U.S. citizens for the positions of Assistant Director-General for Education and the Deputy Assistant Director-General for External Relations. As a part of this effort to recruit for high-level positions, State’s UN employment office added a senior advisor in 2004 focused on identifying and recruiting American candidates for senior-level positions at UN organizations. This official works closely with the U.S. missions and U.S. agencies to identify senior-level UN vacancies and assist in the recruitment and support of Americans as candidates for these positions. The advisor also focuses on UN senior-level positions that may soon become vacant, including positions currently held by Americans, as well as by other nationals. Officials from one U.S. mission told us that it is critical to find out about vacancies before they become open because of the lead time needed to find qualified candidates. For those positions determined to be of particular interest to the United States, the senior advisor works with mission and agency counterparts to identify appropriate candidates to apply for the position when it becomes vacant. Since 2001, State has devoted additional resources and has undertaken several new initiatives in its role as the lead U.S. 
agency for supporting and promoting the employment of Americans in UN organizations, including adding staff to its UN employment office. State also has begun sharing its U.S. representation reports with UN officials. Additionally, State has increased coordination with other U.S. agencies. Despite these efforts, however, U.S. representation in entry-level positions declined or displayed no trend in four of the five UN agencies we reviewed. In 2001, State had two staff members working in its UN employment office, and since that time has increased the number of staff positions to five, plus a sixth person who works part-time on UN employment issues. The new staff positions include the official focused on senior-level positions at UN organizations referred to earlier. According to State, the other staff in this office recruit candidates for professional positions at career fairs and in other venues; however, a large portion of their work focuses on providing information to potential applicants and disseminating information on UN vacancies and opportunities. A key part of this effort is the publication and distribution of a biweekly list of UN vacancy announcements. State officials publish these announcements on the department’s UN employment Web site and also distribute the vacancies to agency contacts. With this list, potential applicants are able to view externally advertised professional and senior-level vacancies throughout the UN system in one location. Additionally, State recently coordinated with the Office of Personnel Management to add a link to its UN employment Web site from the USAJOBS Web site. State’s Web site also includes a brochure with general information on UN employment opportunities and requirements, and a fact sheet requesting that candidates who have made a short list for a UN position contact the department for information and assistance.
State’s UN employment office staff also attend career fairs and other outreach activities at universities and professional associations to discuss UN employment opportunities. For example, State officials reported that they attended 15 events in 2005, including a nuclear technology expo and a conference on women in international security. State also has increased outreach for the Secretariat’s annual National Competitive Recruitment Exam for entry-level candidates by advertising for the exam in selected newspapers. The number of Americans invited to take the exam increased from 40 in 2001 to 277 in 2004. According to State and UN officials, in 2005, State placed one-day advertisements publicizing application procedures for the exam in five newspapers across the country. Another of State’s responsibilities is to collect U.S. employment data from UN agencies and compile these data in annual reports to Congress. These reports include State’s assessment of U.S. representation at select UN organizations and these organizations’ efforts to hire more Americans. State now provides these reports to UN agencies, as we recommended in 2001, and does so by sending them to U.S. missions, which share them with UN officials. U.S. mission officials told us that they periodically meet with UN officials to discuss U.S. representation and upcoming vacancies. For example, officials from the U.S. mission in Geneva regularly meet with UNHCR’s Director of Human Resources to discuss efforts to increase U.S. representation. One outcome of these efforts was that, in 2005, UNHCR representatives conducted a recruiting mission to the United States, visiting five graduate schools. In addition, the U.S. mission in Vienna meets with IAEA’s director of human resources on a biweekly basis to discuss U.S. staffing issues. In 2003, State established an interagency task force to address the low representation of Americans in international organizations.
According to State, the initial meeting was intended as a first step to coordinate and reenergize efforts to identify Americans for international organization staff positions. Since then, task force members have met annually to discuss U.S. employment issues. Task force participants told us that at these meetings, State officials reported on their outreach activities and encouraged agencies to promote the employment of Americans at UN organizations. One of the topics discussed by task force members was how to increase support for details and transfers of U.S. agency employees to UN organizations. In May 2006, the Secretary of State sent letters to the heads of 23 federal agencies urging that they review their policies for transferring and detailing employees to international organizations to ensure that these mechanisms are positively and actively promoted. Transferring and detailing federal employees to UN organizations for fixed-term assignments could allow Americans to gain UN experience while providing UN organizations with technical and managerial expertise. While the Secretary’s letters may help to spur U.S. agencies to clarify their support for these initiatives, agency officials told us that their offices lack the resources for staff details, which involve paying the salary of the detailed staff as well as “backfilling” that person’s position by adding a replacement. State also periodically meets one-on-one with U.S. agencies to discuss in more detail strategies for increasing U.S. representation at specific organizations. A State official told us that State’s UN employment office holds a few of these one-on-one meetings per year. For example, in 2005, State met with the Federal Aviation Administration to discuss U.S. underrepresentation at the International Civil Aviation Organization. State also participated in a network of agencies and national laboratories that work with IAEA, which has discussed ideas to address declining U.S.
representation at that agency. The U.S. mission in Vienna conducts periodic video-conference meetings with State, other U.S. agencies, and the U.S. national laboratories to discuss upcoming IAEA vacancies and identify U.S. candidates for these positions. Despite the new and continuing activities undertaken by State, U.S. representation in entry-level positions declined or displayed no trend in four of the five agencies we reviewed. U.S. representation in these positions declined at IAEA, UNHCR, and UNDP. The representation of Americans in entry-level positions at the Secretariat displayed no trend during the time period. At UNESCO, U.S. representation increased from 1.3 percent in 2003 to 2.7 percent in 2004, reflecting the time period when the United States rejoined the organization (see fig. 4). We identified several additional steps to target U.S. representation in professional positions. These steps include maintaining a roster of qualified candidates, expanding marketing and outreach activities, increasing and improving UN employment information on U.S. agency Web sites, and analyzing the costs and benefits of sponsoring JPOs. In 2001, we reported that State had ended its practice of actively recruiting Americans for UN employment in professional positions. As an example, we noted that State had previously maintained a roster of qualified American candidates for professional and technical positions, but discontinued its use of this roster. State officials told us that the office has not maintained a professional roster or resumed prescreening candidates, despite its recent increase in staff resources, because maintaining such a roster had been resource intensive and because the office does not actively recruit for UN professional positions at the entry- and mid-levels. However, State acknowledged that utilizing new technologies, such as developing a Web-based roster, may reduce the time and cost of updating a roster.
A State official added that it is difficult to make a direct causal link between current or proposed efforts by the department and the number of Americans ultimately hired by the UN because of the many factors at work that State cannot control. Other U.S. government and UN officials told us that some other countries maintain rosters of prescreened, qualified candidates for UN positions and that this practice is an effective strategy for promoting their nationals. For example, some countries prescreen candidates for positions at the UN Department of Peacekeeping Operations (DPKO) and thus are able to provide names of well-qualified applicants when openings arise that need to be filled quickly. An official also emphasized that peacekeeping in particular is a “growth area,” and the Secretary-General recently reported that the peacekeeping budget has increased from $1.25 billion to more than $5 billion between 1996 and 2005. As discussed earlier, peacekeeping positions are not counted toward geographic representation targets, and thus the increased hiring of Americans in these positions would not directly improve the United States’ representation status. However, these positions, along with other nongeographic positions, do provide an entry point into the UN system. Although State has increased staff resources in its UN employment office, it has not taken steps that could further expand the audience for its outreach efforts. For example, State has increased its coordination with other U.S. agencies on UN employment issues and distributes the biweekly vacancy announcements to agency contacts. However, some U.S. agency officials who receive these vacancy announcements told us that they lacked the authority to distribute the vacancies beyond their particular office or division. For example, one official commented that the vacancies were distributed within his nine-person office, but the office is not able to distribute the vacancies throughout the agency.
An official from another agency commented that State has not established the appropriate contacts to facilitate agency-wide distribution of UN vacancies, and that the limited dissemination has neutralized the impact of this effort. Several interagency task force participants also stated that no specific follow-up activities were discussed or planned between the annual meetings, and they could not point to any tangible results or outcomes from the task force meetings. As discussed earlier, State officials attend career fairs and other conferences to discuss UN employment opportunities with attendees, but they have not taken advantage of some opportunities to expand the audience for their outreach activities. For example, State does not work with the Association of Professional Schools of International Affairs (APSIA), which has 19 U.S. member schools. A representative of APSIA told us that the association does not receive vacancy announcements or have contact with State on UN employment opportunities but would welcome the opportunity to do so. Although State employees have attended Peace Corps career fairs to discuss UN employment, officials told us that State does not advertise in other outlets that reach the population of current and returned Peace Corps volunteers, such as the Peace Corps jobs hotline newsletter or the National Peace Corps Association’s quarterly magazine, Worldview. By contrast, a State official told us that the department’s Office of Recruitment, Examination and Employment—which recruits candidates for the U.S. Foreign Service Exam—has worked with an advertising firm to develop a marketing strategy and campaigns focused on targeted pools of candidates for this exam. This official said that State has had a major emphasis on increasing the diversity of applicants for the U.S. Foreign Service, and its marketing campaigns have targeted schools with diverse student bodies and diversity-focused professional associations.
State’s recruiting office also has established an e-mail subscription service on its Web site that allows individuals to sign up to receive e-mail updates pertaining to their specific areas of interest. A State recruiting official commented that targeted campaigns are more effective than general vacancy announcements or print advertisements, and that the e-mail subscription service has been worth the cost of implementation. The official said that the cost of maintaining this service, for which she stated 100,000 people have signed up thus far, is about $44,000 per year. State’s UN vacancy list and its UN employment Web site also have limitations. For example, the list of vacancies is not organized by occupation, or even organization, and readers must search the entire list for openings in their areas of interest. Further, State’s UN employment Web site has limited information on other UN employment programs and does not link to U.S. agencies that provide more specific information, such as the Department of Energy’s Brookhaven National Laboratory Web site. In addition, the Web site provides limited information or tools to clarify common questions, such as those pertaining to compensation and benefits. For example, the Web site does not provide a means for applicants to obtain more specific information on their expected total compensation, including benefits and U.S. income tax. As mentioned earlier, some American staff in UN agencies told us that, when they were considering whether to apply for a UN position, information on benefits was not clear. Incomplete or late information hampers a candidate’s ability to decide in a timely manner whether a UN position is in his or her best interest. Including State, we reviewed 22 U.S. mission and U.S. agency Web sites, and they revealed varying, and in many cases limited, information on UN employment opportunities. Overall, nine of the 22 U.S.
mission and agency Web sites we reviewed did not have links to UN employment opportunities, and only seven had links to UN recruiting Web sites. In addition, only six of the Web sites provided links to State’s Web page on UN employment opportunities. Only three of the Web sites had information on details and transfers, six had information on JPO or Associate Expert programs, and 13 had no link to information on UN internships. Nearly 60 percent of the missions and agencies provided some information or links to information on salaries and benefits. The U.S. government currently sponsors JPOs at two of the five UN agencies that we reviewed, but has not assessed the overall costs and benefits of supporting JPOs as a mechanism for increasing U.S. representation across UN agencies. Among the five agencies, State has funded a long-standing JPO program only at UNHCR, sponsoring an average of 15 JPOs per year between 2001 and 2005. According to State officials, the JPO program at UNHCR is funded by State’s Bureau of Population, Refugees, and Migration (PRM), which has a separate budget from State’s UN employment office in the International Organization (IO) Affairs Bureau. State officials told us that the department’s IO Bureau does not fund JPOs or Associate Experts at any UN agencies, including the Secretariat, which hires an average of 65 percent of Associate Experts following the completion of their programs. The other JPO sponsorship program among the five agencies we reviewed is run by the Department of Energy’s Brookhaven National Laboratory, which has supported two JPOs at IAEA since 2004. Table 4 provides data on the average number of JPOs and Associate Experts sponsored by the United States and by leading contributors to JPO programs at the five UN agencies we reviewed. For four of the five agencies we reviewed, the percentage of individuals who were hired for regular positions upon completion of the JPO program ranged from 34 to 65 percent.
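The trade-off that such a cost-benefit assessment would weigh can be illustrated with simple arithmetic: dividing a JPO's total sponsorship cost by the conversion rate (the share of JPOs ultimately hired into regular positions) gives an expected cost per eventual hire. The sketch below is illustrative only; the two-year program length is an assumption (JPO terms vary by agency), and the function name is ours, not the report's.

```python
def cost_per_hire(annual_cost, program_years, conversion_rate):
    """Expected sponsorship cost per JPO who is ultimately hired into a
    regular position: total program cost divided by the conversion rate."""
    if not 0 < conversion_rate <= 1:
        raise ValueError("conversion_rate must be in (0, 1]")
    return annual_cost * program_years / conversion_rate

# Illustrative figures: the report cites annual costs of $100,000-$140,000
# and conversion rates of 34-65 percent; the two-year term is assumed.
low = cost_per_hire(100_000, 2, 0.65)    # best case, roughly $308,000
high = cost_per_hire(140_000, 2, 0.34)   # worst case, roughly $824,000
```

The spread between the best and worst cases, roughly a factor of three, is one reason an agency-by-agency assessment matters.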
In some cases, former JPOs were offered regular positions and did not accept them, or took positions in other UN organizations, according to officials with whom we spoke. The estimated annual cost for these positions to the sponsoring government ranges from $100,000 to $140,000 at the five UN agencies. (See table 5.) This cost can include salary, benefits, and moving expenses. A PRM official told us that the goals of its JPO program at UNHCR are both to help the organization accomplish its mission in the field and to help Americans gain employment at the agency. This official stated that of the 24 American JPOs who completed their service at UNHCR between 2002 and 2005, 16 (or 67 percent) were hired back by the agency, and 1 was hired by the UN Office for the Coordination of Humanitarian Affairs. This official also told us that because the JPO program is actively used by other countries as a means of getting their nationals into the organization, not supporting JPOs at UNHCR would put the United States at a disadvantage. As shown above, funding JPOs has a cost that must be considered together with other funding priorities. PRM and IO have acted independently in their determinations of whether or not to fund JPOs, with the overall result that State has funded an average of 15 JPOs at one UN agency and none at any of the other agencies. State has not conducted an assessment to determine which UN agencies the United States should prioritize for increased U.S. employment through JPO funding. Such an assessment would also involve weighing the trade-offs between funding JPOs and other agency programs. Achieving equitable U.S. representation at UN organizations will become increasingly difficult. Four of the five UN organizations we reviewed, all except UNESCO, will have to hire Americans in increasing numbers merely to maintain the current levels of U.S. representation.
Failure to increase such hiring will lead the four UN organizations with geographic targets to fall below or stay below the minimum thresholds set for U.S. employment. As the lead department in charge of U.S. government efforts to promote equitable American representation at the UN, the Department of State will continue to face a number of barriers to increasing the employment of Americans at these organizations, most of which are outside the U.S. government’s control. For example, lengthy hiring processes and mandatory rotation policies can deter qualified Americans from applying for or remaining in UN positions. Nonetheless, if increasing the number of U.S. citizens employed at UN organizations remains a high priority for State, it is important that the department facilitate a continuing supply of qualified applicants for UN professional positions at all levels. State focuses much of its recruiting efforts on senior and policy-making positions, and U.S. citizens hold over 10 percent of these positions at four of the five agencies we reviewed. While State increased its resources and activities in recent years to support increased U.S. representation overall, additional actions to facilitate the employment of Americans in entry- and mid-level professional positions are needed to overcome declining U.S. employment in these positions and meet employment targets. Because equitable representation of Americans employed at UN organizations has been a high priority for U.S. interests, we recommend that the Secretary of State take the following three actions: provide more consistent and comprehensive information about UN employment on the State and U.S. mission Web sites and work with U.S. agencies to expand the UN employment information on their Web sites.
This could include identifying options for developing a benefits calculator that would enable applicants to better estimate their potential total compensation based on their individual circumstances; expand targeted recruiting and outreach to more strategically reach populations of Americans who may be qualified for and interested in entry- and mid-level UN positions; and conduct an evaluation of the costs, benefits, and trade-offs of (1) maintaining a roster of qualified candidates for professional and senior positions determined to be a high priority for U.S. interests and (2) funding Junior Professional Officers, or other gratis personnel, where Americans are underrepresented or in danger of becoming underrepresented. We received comments from State, which are reprinted in appendix V. State concurred with and agreed to implement all of our recommendations. State said it attaches high priority to increasing the number of Americans at all professional levels in the United Nations and other international organizations. We received technical comments from State, IAEA, UNESCO, UNHCR, and UNDP, which we have incorporated as appropriate. As agreed with your office, unless you publicly announce the contents of this report, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to other interested congressional committees. We also will provide copies of this report to the Secretary of State; the United Nations Secretariat; the International Atomic Energy Agency; the United Nations Educational, Scientific, and Cultural Organization; the United Nations Office of the High Commissioner for Refugees; and the United Nations Development Program. We will also make copies available to others upon request. In addition, this report will be made available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9601.
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. In this report, we reviewed (1) U.S. representation status and employment trends at five United Nations (UN) organizations, (2) factors affecting these organizations’ ability to meet U.S. representation targets, and (3) the U.S. Department of State’s current efforts to improve U.S. representation and additional steps that could be taken. We reviewed five UN organizations: the UN Secretariat; International Atomic Energy Agency (IAEA); UN Educational, Scientific, and Cultural Organization (UNESCO); Office of the United Nations High Commissioner for Refugees (UNHCR); and UN Development Program (UNDP). Technically, the IAEA is an independent international organization that has a relationship agreement with the UN. For the purposes of this report, we refer to the IAEA as a UN agency or organization. We selected these agencies based on a range of factors, such as funding mechanisms (including agencies funded through assessed contributions as well as those funded primarily through voluntary contributions); methods for calculating geographic representation status (including agencies using formal geographic distribution formulas and those without formal targets for U.S. representation); agency size; agency location (including U.S.-based and overseas-based organizations); and agencies with varying levels of U.S. employment. Together, these five agencies account for approximately 50 percent of the professional staff of UN organizations. To determine the U.S. representation status, identify the trends in the number of professional positions held by U.S. citizens, and calculate hiring projections, we analyzed employment data for 2001 through 2005 that we obtained from the five UN organizations.
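The geographic-target arithmetic used in this appendix can be sketched roughly as follows. The factor weights come from the formulas described below for the Secretariat and UNESCO, but the exact way those agencies combine the factors is not spelled out here, so the weighted-sum form (applying each weight to a member state's share of that factor) is an assumption on our part; the IAEA check follows that agency's informal half-of-contribution rule. All numeric shares in the example are hypothetical.

```python
def weighted_target(contrib_share, member_share, pop_share, weights):
    """Assumed weighted-sum form of a geographic target: each *_share is
    the member state's share of that factor, and weights gives the factor
    weights (contribution, membership, population), which sum to 1."""
    w_contrib, w_member, w_pop = weights
    return w_contrib * contrib_share + w_member * member_share + w_pop * pop_share

# Factor weights from the formulas described in this appendix.
SECRETARIAT_WEIGHTS = (0.55, 0.40, 0.05)  # contribution, membership, population
UNESCO_WEIGHTS = (0.30, 0.65, 0.05)

def iaea_underrepresented(geo_share, contrib_share):
    """IAEA's informal rule: a member state is underrepresented if its
    share of geographic positions is below half its share of the budget."""
    return geo_share < 0.5 * contrib_share

# Hypothetical shares: 22% of the budget, 1 of 191 members, 4.6% of population.
target = weighted_target(0.22, 1 / 191, 0.046, SECRETARIAT_WEIGHTS)
```

Under this reading, the contribution weight dominates for a large contributor, which is consistent with the report's observation that large assessed contributions push targets upward.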
Data generally refer to the end of the calendar year, except for the Secretariat, whose data are for the year ending June 30. We had extensive communications with staff from each organization’s personnel and budget departments to clarify details regarding the data. We determined the data were sufficiently reliable for the purposes of this review. To determine U.S. representation at the three UN agencies with geographic targets (the Secretariat, IAEA, and UNESCO), we calculated the percentage of U.S. citizens employed in geographic positions and compared this percentage with the agency’s target. We calculated the geographic target for the Secretariat and UNESCO as a percentage range, in which the minimum and maximum number of national staff, as provided by the agency, is divided by the actual (full-time equivalent) geographic staff in the agency. The two agencies that use formulas to set geographic targets—the Secretariat and UNESCO—consider three key factors, to varying degrees, in establishing those targets: membership status, financial contribution, and population size. For the Secretariat, the factors are 55 percent for contribution, 40 percent for membership, and 5 percent for population. UNESCO’s formula consists of a membership factor of 65 percent, a contribution factor of 30 percent, and a population factor of 5 percent. IAEA informally calculates a member state to be underrepresented if its geographic representation is less than half of its percent contribution to the budget. Using this method, we calculated a U.S. target. The remaining two agencies—UNHCR and UNDP—have not adopted formal geographic representation targets. However, UNHCR has established an informal target with the United States. To determine U.S.
representation at UNHCR in comparison to this target, we calculated the percentage of regular professional positions (100-series contracts) filled by U.S. citizens. Similarly, at UNDP, we calculated U.S. representation as a percentage of regular professional positions (100- and 200-series contracts) filled by U.S. citizens. For all five UN organizations, we also calculated U.S. citizen representation at each grade—policy-making and senior-level (such as USG/ASG, D1/D2), mid-level (P4/P5), and entry-level (P1-P3)—as well as for all grades combined. U.S. grade-level employment representation is calculated by dividing the number of U.S. staff at that grade level by the organization’s total employment for the corresponding grade level. We also calculated U.S. citizen representation in nongeographic positions (for the Secretariat, IAEA, and UNESCO) and in nonregular professional positions (UNHCR and UNDP) as a percentage of nongeographic (or nonregular) employment, respectively. To determine whether there was a trend in U.S. representation between 2001 and 2005, we determined whether the slope of the best-fitting line through these points would have a computed confidence level of 90 percent or more. If there is a trend, the sign of the slope (i.e., the coefficient) determines whether the trend is increasing or decreasing. A designation of no trend means that the confidence level does not reach 90 percent; however, the percentage representation of U.S. citizens may have fluctuated during the period. We cannot say these trends are statistically significant because of the small number of observations, because these numbers are the actual population rather than a sample, and because they are not independent over time. Thus the 90 percent computation is not an objective criterion indicating statistical significance. Our methodology assumed a gradual approach to the target. We calculated the minimum average number of U.S.
citizens that each agency would need to hire each year between 2006 and 2010 to reach its percentage target in 2010. The 2005 U.S. staff percentage representation was the starting point, and an annual percentage increment (or decrement) was added to reach the minimum target in 2010. We then projected the required number of U.S. staff for each year as that year’s percentage target multiplied by the projected size of the total staff for that year. The estimated number of U.S. staff in the agency in each year, before additional hiring of Americans, was based on the prior year’s employment, less the projected retirements and separations for that year. If the projected number of Americans required to meet that year’s target is greater than the estimated number of Americans in the agency, based on the prior year’s employment and given departures in that year, then the number of Americans the agency has to hire is positive; otherwise, it is zero. Summing each year’s number of Americans required to be hired to achieve each year’s target, and then dividing by 5, yielded the minimum average number of U.S. citizens that the agency would have to hire to achieve the 2010 target. We made three assumptions to calculate the hiring projections. First, for the Secretariat, IAEA, and UNESCO, we based our 2006 to 2010 staff projections on the recent growth rate (2001 through 2005) of each agency’s staff. We calculated the future staff growth rate based on an ordinary least squares growth rate of staff during 2001 through 2005. UNHCR provided us with an official agency projected growth rate of zero percent, and UNDP provided a 6 percent growth rate that we used in our analysis. Second, we projected staff separations for 2006 through 2010 based on an average of the separation data that the agencies provided for 2001 through 2005. Third, we projected U.S. staff separations for 2006 through 2010 based on the average of U.S.
staff separations to total staff separations during 2001 through 2005. We did not project future retirements because each agency provided its official retirement projections for total staff and for Americans. In addition, we performed sensitivity analyses by varying the staff growth and separation rates. We found that minor changes did not produce major differences in the results. To review the factors affecting organizations’ ability to meet the employment targets, we reviewed UN agency documents and interviewed UN human resources officials, over 100 Americans employed at the five UN agencies, and U.S. officials. At each of the five agencies covered in our review, we met with human resources officials to discuss efforts taken to achieve equitable U.S. representation, the agency’s hiring process, personnel policies and procedures, types of contracts and positions, and factors affecting U.S. representation. These officials also provided documents with further explanations of agency human resources policies and practices. We also met with State and U.S. mission officials and officials from other U.S. agencies that interact with the five UN agencies to discuss their views on factors affecting U.S. employment at these agencies. In addition, we received the views of a total of 112 Americans employed across the five agencies on various UN employment issues. We gathered information from these employees through individual interviews, interviews in a small-group setting, or through group discussions. We also received written comments from some American employees. We met with employees in a range of grade levels (G, P, D, and ASG), contract types (such as temporary, assignment-of-limited-duration, fixed-term, indefinite, permanent), and with varying levels of experience at the agency. We did not select representative samples of American employees at any agency.
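The two calculations described above — the 90-percent trend test on the slope of the best-fitting line, and the gradual-approach hiring projection — can be sketched as follows. The function names and the constant-departure simplification (collapsing projected retirements and separations into one annual figure) are ours, not the report's; the hard-coded 2.353 is the two-sided 90 percent critical value of Student's t with n − 2 = 3 degrees of freedom, matching five annual observations.

```python
def trend(years, values, t_crit=2.353):
    """Classify a series as 'increasing', 'decreasing', or 'no trend' by
    testing whether the OLS slope differs from zero at a computed 90
    percent confidence level (t_crit assumes n - 2 = 3 degrees of freedom)."""
    n = len(years)
    xbar, ybar = sum(years) / n, sum(values) / n
    sxx = sum((x - xbar) ** 2 for x in years)
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(years, values))
    slope = sxy / sxx
    intercept = ybar - slope * xbar
    sse = sum((y - (intercept + slope * x)) ** 2 for x, y in zip(years, values))
    se = (sse / (n - 2) / sxx) ** 0.5  # standard error of the slope
    if se == 0:  # perfect linear fit
        t = float("inf") if slope != 0 else 0.0
    else:
        t = slope / se
    if abs(t) <= t_crit:
        return "no trend"
    return "increasing" if slope > 0 else "decreasing"


def min_avg_annual_hires(us_staff, total_staff, target_pct,
                         growth_rate, us_departures, years=5):
    """Gradual-approach projection: step the U.S. share toward the target
    and, each year, hire the shortfall between that year's required U.S.
    staff and the U.S. staff estimated to remain after departures."""
    start_pct = us_staff / total_staff
    step = (target_pct - start_pct) / years
    total_hires = 0.0
    for y in range(1, years + 1):
        total_staff *= 1 + growth_rate            # projected agency size
        required = (start_pct + step * y) * total_staff
        remaining = us_staff - us_departures      # before new hiring
        hires = max(0.0, required - remaining)
        total_hires += hires
        us_staff = remaining + hires
    return total_hires / years
```

For example, a perfectly linear series such as [10, 12, 14, 16, 18] over 2001 through 2005 classifies as "increasing," while a flat, noisy series classifies as "no trend."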
Some individuals invited to participate in our review were unable to do so because of scheduling conflicts; some did not respond to our invitation. The American employees we interviewed represented a varying percentage of the total number of Americans employed at each agency. We asked each employee common open-ended questions about their background and experience, the hiring process, the extent of U.S. government assistance they received, and factors affecting U.S. representation. Using the information gathered from the American employees, we coded comments about the factors affecting American recruitment and retention at the UN agencies into about 30 categories. As in any exercise of this type, the categories developed can vary when produced by different analysts. To address this issue, two independent GAO analysts reviewed and verified the categorization of comments for each agency and suggested new categories. We then reconciled any differences. Next, we compiled a summary of factors across the five agencies and ranked them by the frequency with which they were mentioned. Another independent GAO analyst then reviewed and verified the compiled summary of all agency comments. We selected the factors affecting U.S. representation discussed in the body of this review by analyzing this ranked list in conjunction with information we gathered from UN and U.S. officials and our analysis of UN employment data. To assess strategies that the Department of State is using to improve U.S. representation and additional efforts that could be taken, we reviewed documents and interviewed officials from State’s UN employment office. We discussed activities that State has taken since our 2001 report on U.S. representation at UN organizations, in response to recommendations made in that report, and reviewed State’s documentation of these activities. We also reviewed other State documents, including its annual reports to Congress, U.S. Representation in United Nations Agencies and Efforts Made to Employ U.S. Citizens.
In addition, we reviewed State’s performance and accountability plans and reports, including State’s fiscal year 2007 performance summary and the Bureau of International Organization Affairs fiscal year 2007 performance plan. In addition to meeting with State officials, we also met with officials from U.S. agencies that have participated in State’s interagency task force on UN employment or received UN vacancy announcements from State, as well as other U.S. agency officials. In these meetings, we discussed the activities and outcomes of the task force and these officials’ views on efforts to increase the UN employment of Americans. We also discussed U.S. strategies and efforts for increasing U.S. representation with UN personnel officials and American employees of UN organizations. We also analyzed 22 U.S. agency and U.S. mission Web sites to review information that they made available on UN employment opportunities. We conducted our work in Washington, D.C.; New York; Geneva, Switzerland; Vienna, Austria; and Paris, France, from August 2005 to July 2006 in accordance with generally accepted government auditing standards. This appendix provides information on the number and percentage of U.S. citizens employed in professional positions at the five UN agencies we reviewed. For the three agencies that have geographic targets, we provide information on the number and percentage of U.S. citizens employed in geographic positions as well as in nongeographic positions. For the two agencies that do not have geographic targets, we provide information on the number and percentage of U.S. citizens employed in regular and all other professional positions. At two of the three UN agencies (the Secretariat and UNESCO) with geographic targets, the percentage of geographic positions filled by U.S. citizens is slightly higher than the percentage of nongeographic positions filled by U.S. citizens. The variation is more significant at IAEA, where U.S.
citizens fill 11.5 percent of the geographic positions and 17.1 percent of the nongeographic positions. Table 6 shows the number and percentage of U.S. citizens employed in geographic and nongeographic positions at the three UN agencies with geographic targets. Table 7 shows that, on average at the three UN agencies with geographic positions and targets (the Secretariat, IAEA, and UNESCO), the percentage of U.S. citizens employed in all professional positions was fairly evenly divided between geographic positions (51.6 percent) and nongeographic positions (48.4 percent). However, the representation of U.S. citizens in geographic and nongeographic positions was close to the average only at IAEA, where the percentages of U.S. citizens were 55 and 45 percent, respectively. As shown in table 8, in the two agencies without geographic positions, UNHCR and UNDP, the percentage of regular professional positions filled by U.S. citizens is lower than the percentage of “all other” professional positions filled by U.S. citizens. As shown in table 9, at UNHCR and UNDP, the percentage of U.S. citizens in regular professional positions (staff under contracts of longer fixed terms) averaged 65.4 percent of the total U.S. professional staff compared with 34.6 percent for U.S. representation in all other, or more temporary, professional positions. That is, there are relatively more Americans in regular professional positions (65.4 percent) than in all other professional positions (34.6 percent). At IAEA and UNESCO, over 80 percent of the policy-making and senior-level positions are geographic. However, these positions are more evenly divided at the Secretariat, with 54 percent subject to geographic designation and 46 percent not subject to geographic designation. (See table 10.) In “all grades,” U.S.
citizen representation in geographic positions at the Secretariat, IAEA, and UNESCO and in regular professional positions at UNDP displays no trend at the 90 percent confidence level. However, at UNHCR, U.S. representation decreased in regular professional positions. Figure 5 shows the trends in U.S. representation, by grade, at each agency. As shown in figure 5, U.S. citizen representation in policy-making and senior-level positions increased at IAEA and UNDP and increased in entry-level positions at UNESCO. However, U.S. citizen representation in entry-level positions decreased at IAEA, UNHCR, and UNDP. In addition, U.S. citizen representation decreased in mid-level positions at UNHCR, as well as over “all grades.” The 90 percent confidence interval does not imply statistical significance. Refer to our methodology for calculating trends in app. I. Table 11 provides further information on the terms promotion, internal hire, and external hire, as provided by each of the five UN agencies we reviewed for the purposes of this report. In addition to the person named above, Cheryl Goodman, Assistant Director; Jeremy Latimer; Miriam Carroll; Roberta Steinman; Barbara Shields; and Joe Carney made key contributions to this report. Martin De Alteriis, Bruce Kutnick, Anna Maria Ortiz, and Mark Speight provided technical assistance. | The U.S. Congress continues to be concerned that U.S. professionals are underrepresented in some UN organizations and that insufficient progress has been made to improve U.S. representation. In 2001, GAO reported that several UN agencies fell short of their targets for U.S. representation and had not developed strategies to employ more Americans. This report reviews (1) U.S. representation status and employment trends at five UN agencies, (2) factors affecting these agencies' ability to meet employment targets, and (3) the U.S. Department of State's (State) efforts to improve U.S. representation and additional steps that can be taken.
We reviewed five UN agencies that together account for about 50 percent of total UN organizations' professional staff. The United States is underrepresented at three of the five United Nations (UN) agencies we reviewed, and increased hiring of U.S. citizens is needed to meet employment targets. The three agencies where the United States is underrepresented are the International Atomic Energy Agency (IAEA); the UN Educational, Scientific, and Cultural Organization (UNESCO); and the Office of the UN High Commissioner for Refugees (UNHCR). U.S. citizens are equitably represented at the UN Secretariat, though close to the lower end of its target range. The UN Development Program (UNDP) has not established a target for U.S. representation, although U.S. citizens fill about 11 percent of its professional positions. Given projected staff levels, retirements, and separations, IAEA, UNESCO, and UNHCR would need to increase hiring of Americans to meet their minimum targets for U.S. representation in 2010. While the five UN agencies face some common barriers to recruiting and retaining professional staff, including Americans, they also face their own distinct challenges. Most of these barriers and challenges are outside of the U.S. government's control. The common barriers include nontransparent human resource practices, limited external hiring, lengthy hiring processes, comparatively low or unclear compensation, required mobility, and limited U.S. government support. UN agencies also face distinct challenges. For example, at the Secretariat, candidates serving in professional UN positions funded by their governments are more likely to be hired than those who take the entry-level exam; however, the United States has not funded such positions. Also, IAEA has difficulty recruiting U.S. employees because the number of U.S. nuclear specialists is decreasing. Since 2001, State has increased its efforts to achieve equitable U.S. representation at UN agencies, and additional options exist.
State has targeted efforts to recruit U.S. candidates for senior and policymaking UN positions, and although it is difficult to link State's efforts to UN hiring decisions, U.S. representation in these positions has improved or displayed no trend in the five UN agencies. U.S. representation in entry-level positions, however, has declined or displayed no trend in four of the five UN agencies despite State's increased efforts. Additional steps include maintaining a roster of qualified U.S. candidates, expanding marketing and outreach activities, increasing UN employment information on U.S. agency Web sites, and assessing the costs and benefits of sponsoring entry-level employees at UN agencies.
Today, the Library is at an important crossroads in its long history. Its efficiency, effectiveness, and continued relevance may depend on its ability to address key issues about its future mission. The Library’s mission and activities have continued to grow since its creation in 1800, and the growth of its mission has been matched or exceeded by the growth of its collections. Booz-Allen found that the Library’s staff, management structure, and resources are in danger of being overwhelmed by this growth. Booz-Allen identified three alternative missions that could be considered to shape the Library’s future. The three missions can be used to characterize the potential scope of activity and the customers the Library might serve: (1) Congress; (2) Congress and the nation; and (3) Congress, the nation, and the world community of libraries, publishers, and scholars. The current Library mission and activities fall somewhere between the latter two alternatives. Under the first mission alternative, the Library would refocus its functions on the original role of serving Congress. Collections would be limited to broadly defined congressional and federal government needs, and Congressional Research Service-provided information would continue to support legislative functions. There would be no national library, and leadership of the information/library community would be missing unless assumed by other organizations. Booz-Allen concluded that the Library would require significantly fewer staff and financial resources to carry out this mission. The second mission alternative would emphasize the Library’s national role, and current activities of a global nature would be deemphasized. The national library role would be formally acknowledged, and the Library’s leadership and partnering roles would be strengthened. This mission would require increased interaction with national constituencies. 
Booz-Allen concluded that the Library would require somewhat fewer staff and financial resources to carry out this mission. Under the third mission alternative, the Library would continue and perhaps broaden its activities to serve the worldwide communities of libraries, publishers, and scholars. Collections would expand substantially with accompanying translation and processing consequences. Booz-Allen concluded that this expanded mission would require increased staff and financial resources. After determining whom the Library will serve, the next step should be to decide how the Library will serve them. Booz-Allen identified two role options: (1) independent archive/knowledge developer and (2) information/knowledge broker. Within the role of independent archive/knowledge developer, the Library would continue to develop and manage collections independently in Library and other government facilities. Traditional, original cataloging and research or development functions would be performed primarily by Library components and staff. Library collections and facility requirements would continue to expand based on collection strategy and policy. Traditional areas of Library expertise, such as acquisitions, cataloging, and preservation, would continue to grow in importance and would drive future staffing and resource requirements. Within the role of information/knowledge broker, the Library’s principal role would change from being a custodian of collections with an independent operational role to that of a comprehensive broker or referral agency. The Library would initiate collaborative and cooperative relationships with other libraries and consortia. It would use information technology to tell inquirers which library in the nation or the world has the specific information. Under this scenario, the Library’s collections would be selectively retained and/or transferred to other institutions with arrangements for appropriate preservation. 
Other institutions would need to demonstrate their willingness and capability to participate in such a system. Booz-Allen assessed each of these mission and role options and discussed them during focus groups with Library management, congressional staff, external customers, and others. Many focus group participants perceived a need to systematically limit and consolidate the Library’s global role. On the basis of these discussions as well as its other findings from the overall management review of the Library, Booz-Allen made four recommendations. First, the Library’s mission should be focused within the Congress/nation alternative, with planning beginning toward a future mission of serving Congress and performing the role of a national information/knowledge broker. Second, this reexamination should include a thorough consideration of the appropriate role of technology in supporting the Library’s operation. Third, the Library should initiate and guide this examination and debate. And fourth, at the end of the process, the mission of the Library should be affirmed by Congress, and resources should be provided at a level that would enable the Library to effectively fulfill the chosen mission. Regardless of what Congress ultimately affirms regarding the future mission of the Library, Booz-Allen also identified a number of management and operational issues that should be addressed. Booz-Allen reported that the Library’s management processes could be more effective. First, it concluded that the Library should institute a more comprehensive planning and program execution process that provides for better integration of key management elements, such as strategic and operational planning, budget development, program execution, performance measurement, and evaluation. Second, Booz-Allen noted that the Library should improve the capability to make decisions and solve problems that cut across organizational lines primarily by clarifying roles, responsibilities, and accountability.
Third, it pointed out that the Library should reengineer its support services, particularly in the areas of information resource management, facilities, security, and human resources, to improve the capability of its infrastructure to support the mission. Additionally, Booz-Allen noted that the Library does not manage its operations from a process management approach but instead uses a functional approach. For example, the Library has different groups to acquire, catalog, preserve, and service each collection. Under this functional approach, the Library is not in a good position to routinely consider such factors as current arrearage status or requirements for preservation, cataloging, and storage when coordinating and planning for acquisitions of large collections. These factors could be considered more effectively under a process management approach, because one group would perform these functions for each collection. This approach also would permit the information technology function to support one Library-wide infrastructure rather than its current duplicative and poorly integrated systems. One major benefit of using a process management approach and integrated information technology infrastructure is that it provides a better understanding of how to control, manage, and improve how the organization delivers its products and services. Booz-Allen made a number of specific recommendations targeted directly at improving the Library’s management and operational processes. It emphasized that three organization-related recommendations are key to the Library’s overall success in improving its management and operations. 
Booz-Allen recommended that the Library clarify the role of the Deputy Librarian to serve as the Library’s Chief Operating Officer and vest the individual occupying that position with Library-wide operational decisionmaking authority; elevate the Chief Financial Officer’s position to focus greater attention on improving the Library’s financial systems and controls; and establish a Chief Information Officer position to provide leadership in technology across the organization, which should help the Library function more effectively in the electronic information age. The effective allocation and use of human and financial resources are paramount to support the day-to-day activities of the Library. However, Booz-Allen found that a variety of weaknesses hamper the Library’s ability to maintain the intellectual capital of its workforce and that the Library has opportunities for increasing revenue. Booz-Allen made several recommendations to improve the Library’s ability to deal with these important issues. The success of the Library’s mission depends heavily on its human resources. Whether the mission is to serve Congress, the nation, or the world, its ultimate achievement rests with the quality of the Library staff. However, Booz-Allen found that the human resource function at the Library has some significant problems that may hamper the Library’s ability to maintain its intellectual capital. First, the Library does not have a coordinated training program. Second, human resources’ personnel and processes are not equipped to handle changes to recruitment, training, or selection requirements that may result from technology, changes to the Library’s mission, or staff turnover. Third, the human resources services unit is not able to strategically plan for workload and staffing requirements because of poor coordination among the Library’s service units.
Fourth, ongoing problems in communications between managers and the unions inhibit their ability to plan together for future directions of the Library. Finally, the personnel management operations, particularly competitive selection and training, inhibit the Library’s ability to bring on new staff members and get them trained quickly. Currently, it takes about 6 months to recruit and hire new employees. Booz-Allen recognized that improving the Library’s operations would require additional funding. Thus, as part of its review, Booz-Allen looked for opportunities through which the Library could generate revenue to help offset the costs of improvements. It found that opportunities to significantly increase revenues exist in the copyright registration and cataloging areas. By fully recovering copyright registration costs, Booz-Allen estimated that the Library could receive additional annual revenue ranging from $12 million to $29 million, depending on different assumptions. The potential revenue to be generated from charging publishers a fee for cataloging could be about $7.5 million annually. Booz-Allen recognized that these additional potential revenue opportunities must be reviewed in light of past efforts to increase revenues and the Library’s mission. For example, Congress decided in 1948 and 1989 not to recover the full cost of copyright registration, and the perception in the library community is that cataloging is at the heart of what the Library does and forms an integral part of its mission. Consequently, both of these revenue opportunities need to be considered as part of reexamining the Library’s mission with a view toward better balancing its mission and available resources. In order for the Library to have success with implementing any revenue opportunities, an appropriate support structure will be required.
Therefore, Booz-Allen suggested that the Library needs to develop a legislative strategy that will provide it with the financial mechanisms and authority needed to implement new fee-based services. To date, Congress has not provided the Library with legislation authorizing fee-based services and all the different financial mechanisms needed to pursue a range of fee-based service opportunities. Booz-Allen also suggested that the Library consider reducing products and services that are not consistent with a newly established mission; its interviews and focus groups identified the following Library products and services as possible candidates for reduction: selected special collections acquisitions, foreign acquisitions, selected English language acquisitions, original cataloging, exhibits, displays, and performances. As a part of the review of the Library’s management, Price Waterhouse (1) audited the Library’s fiscal year 1995 consolidated statement of financial position, (2) examined assertions made by Library management concerning the effectiveness of internal controls over financial reporting, (3) reviewed compliance with selected laws and regulations, and (4) examined assertions made by Library management concerning the safeguarding of the Library’s collection. This was the first financial statement audit of the Library since our audit of the Library’s fiscal year 1988 financial statements. Price Waterhouse found that the Library had mixed results in implementing GAO’s recommendations made in its 1991 report. The Library made improvements, including resolution of significant compliance and control problems in the Federal Library and Information Network (FEDLINK) program and implementation of a new financial management system in fiscal year 1995. Price Waterhouse also found that the Library established accounting policies and procedures to address many of the problems we found in our audit of the Library’s 1988 financial statements.
However, the Library had not supplemented that system with the processes necessary to generate complete, auditable financial statements. For example, the Library’s new system had not been configured to generate the detailed trial balances necessary for an audit, and the system did not track significant account balances, including property and equipment and advances from others. Further, the Library did not record significant accounting entries, including those converting balances from the old system, in sufficient detail to permit effective audit analysis of the accounts. Price Waterhouse stated that this latter deficiency, coupled with the lack of comparable prior year information and audited opening balances, precluded it from auditing the Library’s fiscal year 1995 operating statement. “. . . except for the effects of such adjustments, if any, as might have been determined to be necessary had (Price Waterhouse) been able to examine evidence regarding property and equipment balances, the Consolidated Statement of Financial Position presents fairly, in all material respects, the Library’s financial position as of September 30, 1995, in conformity with the basis of accounting described in Note 1 to the Consolidated Statement of Financial Position.” Price Waterhouse concluded that the Library’s financial internal controls in place as of September 30, 1995, were not effective in safeguarding assets from material loss and in ensuring that there were no material misstatements in the Consolidated Statement of Financial Position. 
In addition to the material weaknesses over property and equipment that led Price Waterhouse to qualify its opinion on the Consolidated Statement of Financial Position, Price Waterhouse reported that the Library had material weaknesses in its financial reporting preparation process, reconciliations of cash accounts with the Department of the Treasury and of various general ledger balances with those in subsidiary records, and information technology security practices over its computer operations. Price Waterhouse concluded that the Library’s internal controls in place on September 30, 1995, were effective in ensuring material compliance with relevant laws and regulations. However, Price Waterhouse reported that the Library continued to accumulate surpluses in certain gift funds that it operates as revolving funds, even though the Library does not have the statutory authority to do so. GAO previously reported this noncompliance in its audit of the Library’s 1988 financial statements. GAO recommended that the Library obtain the statutory authority necessary to continue operating the revolving gift funds, but it has not received such authority. Also, Price Waterhouse found one instance where the Library violated 2 U.S.C. 158a, which prohibits the Library from investing or reinvesting a gift of securities offered to the Library until acceptance of the gift has been approved by the Joint Committee on the Library. The Library believes this was an isolated error and is holding the proceeds pending approval by the committee. Price Waterhouse made recommendations in a number of areas, including the financial report preparation process, reconciliations of accounting records, accounting for property and equipment, computer security practices, enhancing information that is provided to management, financial services staffing, controls over the general ledger and reporting system, internal self-assessment of internal controls, the computer operations disaster recovery plan, controls over cash handling and check processing, and trust fund accounting.
Price Waterhouse concluded that the Library’s management lacked reasonable assurance that the Library’s internal control structure over safeguarding of collection assets against unauthorized acquisition, use, or disposition was generally effective as of September 30, 1995. Price Waterhouse found that the Library has not completed a comprehensive risk assessment and collection security plan to identify the risks to the collection, the proposed or established control activities to address the risks, the required information management needs to carry out its responsibilities, and the methods by which management could monitor the effectiveness of control procedures. Price Waterhouse concluded that without these practices and procedures, Library managers do not have reasonable assurance that the risk of unanticipated loss (theft, mutilation, destruction, or misplacement) of materials with significant market value, cultural or historical importance, or with significant information content is reduced to an acceptable level. Booz-Allen had similar findings in its review of how the Library managed security. Physical risks to the collection would be effectively controlled when the Library has procedures to periodically inventory key items in the collection; when staff are precluded from bringing personal items into storage areas; when it has reduced the number of non-emergency exits in the collections areas of the Library’s buildings; when it has regular reporting, tracking, and follow-up of missing materials; when it has a coordinated approach to access by its own maintenance personnel and those of the Architect of the Capitol; and when it has sufficient surveillance cameras in areas where high-value materials are stored. Environmental risks would be effectively controlled when the Library has determined that high-value, irreplaceable items have been protected from possible fire and water damage and that its preservation program is targeting and treating its highest priority items in a timely fashion.
Although the Library has been striving to improve the safeguarding of its collection since 1991, the findings of Price Waterhouse and Booz-Allen confirm that the Library continues to have a number of significant weaknesses in safeguarding the collection materials that the Library relies upon to serve Congress and the nation. Mr. Chairman, that concludes the overall summary of the review of the management of the Library of Congress. We would be pleased to answer any questions that you or other Members may have. | Pursuant to a congressional request, GAO discussed two independent management and financial reviews of the Library of Congress.
GAO noted that: (1) the management review found that the Library's mission needed to be reassessed because its enormous growth threatens to overwhelm its staff, structure, and resources; (2) alternative missions include service to Congress exclusively, Congress and the nation, or Congress, the nation, and the world community; (3) the first two mission options would require fewer staff and resources, but the third mission would require staff and funding increases; (4) after determining its mission, the Library's service role options include being an independent archive-knowledge developer or an information-knowledge broker; (5) the review recommended that the Library focus on the Congress-nation mission alternative and become a national information-knowledge broker; (6) the review also recommended that the Library improve its human and financial resources management and operational processes; (7) the financial review found that the Library had mixed results in implementing previous GAO recommendations and certain financial management weaknesses remain; (8) except for property and equipment accounts, the Library's financial statements presented fairly, in all material respects, its financial position as of September 30, 1995; (9) the Library's internal controls were not effective in safeguarding assets, ensuring that material misstatements did not occur, and ensuring that gifts complied with applicable laws and regulations; and (10) the review made several recommendations for safeguarding assets and improving accounting processes. |
The Commission acts as a regional partner with state and local governments, focusing on basic infrastructure needs and promoting economic growth for rural Alaska. Since the Commission’s inception in 1998, programs focused on developing transportation, energy, health facilities, economic development, training, and community facilities have received funding for infrastructure projects and for promoting economic growth. Although congressional priorities have changed recently, as have funding levels, four major programs—energy, transportation, health facilities, and training—continued to receive grant funds. The Commission has historically received federal funding from several sources, including an annual appropriation, and is a party to allocation transfers with other federal agencies, such as transfers from the Federal Highway Administration under the Department of Transportation (DOT). The Commission also receives funds from the Trans-Alaska Pipeline Liability Fund. The Commission implements its major programs and operations by awarding grants for implementing specific projects in rural Alaska. In fiscal year 2013, the Commission’s energy program—which is focused on bulk fuel storage tank upgrades; community power generation and rural power systems upgrades; energy cost reduction projects; renewable, alternative, and emerging energy technologies; and power line interties—received approximately $14 million in federal funding, or 78 percent of the Commission’s budgetary authority. The purpose of the program is to provide code-compliant bulk fuel storage and electrification with a goal of improving energy efficiency and decreasing energy costs. In fiscal year 2013, the energy program funded the completion of three bulk fuel facilities, two rural power system upgrades, energy efficiency upgrades in 13 communities, and one emerging energy technology project. The transportation program divides funds between the roads and waterfront components of the program.
One major objective of the roads component is to improve roads between rural communities. The waterfront component addresses port and harbor needs, such as regional port reconstruction and boat launch ramp construction. Since its establishment in fiscal year 2005, the transportation program has completed 86 road projects and 97 waterfront development projects. In addition, the Commission reported that as of March 2014, 24 road and waterfront development projects were in the planning, design, or construction phase. The transportation program was not included in the fiscal year 2013 Commission budget; however, approximately $15 million in previously awarded program grants were disbursed in fiscal year 2013. The health facilities program was initially established to improve Alaska’s health infrastructure through investments in the renovation, repair, and replacement of health facilities; it also provides technical assistance and business planning for those facilities. Since the program’s inception in fiscal year 1999, the Commission reported that in conjunction with the Department of Health and Human Services, it has contributed to 140 primary care clinics, 20 elder supportive housing facilities, 49 primary care projects, and 20 behavioral health facilities. The health facilities program was not included in the fiscal year 2013 Commission budget; however, approximately $7 million in previously awarded program grants were disbursed in fiscal year 2013. The training program was established to provide training and employment opportunities to rural residents employed in the construction, maintenance, and operation of Commission projects. Program funds paid for courses, books, tools, tuition, lodging, and transportation.
In fiscal year 2013, the Commission reported that 137 people completed training courses or received certificates in construction, maintenance, and operation of Commission projects; 53 obtained certificates in construction education; and 17 were placed in construction apprenticeships. In addition, the Commission partnered with the University of Alaska to assist 402 students in completing course work in community health aide, dental assistance, medical office/health care reimbursement, and medical lab-related skills. The training program was not included in the fiscal year 2013 Commission budget; however, approximately $1 million in previously awarded program grants were disbursed in fiscal year 2013. The IG Act establishes that one of the primary responsibilities of a federal agency’s OIG is to keep the agency head and the Congress informed about problems and deficiencies related to the administration of the agency’s programs and operations, corrective actions needed, and the progress of those corrective actions. The IG Act created independent IG offices at major departments and agencies with IGs who are appointed by the President, are confirmed by the Senate, and may be removed only by the President with advance notice to the Congress stating the reasons. In 1988, the IG Act was amended to establish IG offices in designated federal entities (DFE). OIGs of DFEs have many of the same authorities and responsibilities as the OIGs originally established by the IG Act, but with the distinction that IGs are appointed by and may be removed by their agency heads rather than by the President and that their appointment is not subject to Senate confirmation. The IG Act addresses the qualifications and expertise of the IGs, specifying that each IG appointment is to be made without regard to political affiliation and solely on the basis of integrity and demonstrated ability in accounting, auditing, financial analysis, law, management analysis, public administration, or investigation.
The fields in which an IG can have experience are intended to be sufficiently diverse so that many qualified people could be considered but are also limited to areas relevant to the tasks considered necessary. The Inspector General Reform Act of 2008 (Reform Act) amended the IG Act by adding requirements related to OIG independence and effectiveness. The Reform Act includes a provision intended to provide additional OIG independence through the transparent reporting of OIG budget requests. This provision requires an agency’s submission for the President’s budget to separate the OIG’s budget request from the agency’s and include any comments provided by the OIG with respect to the proposal. The Dodd-Frank Wall Street Reform and Consumer Protection Act of 2010 (Dodd-Frank Act) further amended the IG Act, specifying that for DFEs with a board or commission, the board or commission is the head of the DFE for purposes of IG appointment, general supervision, and reporting under the IG Act. Furthermore, if the DFE has a board or commission, the IG Act requires the OIG to report organizationally to the entire board or commission as the head of the DFE. In addition, the Dodd-Frank Act requires the written concurrence of a two-thirds majority of the board or commission to remove an IG. Prior to this provision, most OIGs at commission- or board-led DFEs reported to, and were subject to removal by, the individual serving as head of the commission or board. Our analysis of the budget information provided by the Commission’s Chief Financial Officer (CFO) showed that the Commission allocated budgetary funds for the OIG of approximately $1 million over the 3-year period from fiscal years 2011 through 2013. The total budgetary resources of the Commission OIG increased from fiscal years 2011 through 2013, from $310,000 to $331,000, for an increase of about 7 percent (see fig. 1).
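The reported growth rate can be checked with simple arithmetic; the short calculation below is ours (illustrative only), using the dollar amounts stated above:

```python
# Illustrative check of the Commission OIG's reported budget growth.
# Dollar amounts come from the report; the variable names are ours.
fy2011 = 310_000  # OIG budgetary resources, fiscal year 2011
fy2013 = 331_000  # OIG budgetary resources, fiscal year 2013

increase_pct = (fy2013 - fy2011) / fy2011 * 100
print(f"Increase: {increase_pct:.1f} percent")  # rounds to about 7 percent, as reported
```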
During this 3-year period, the OIG consisted of one full-time employee, the IG, who supplemented his capacity by contracting with auditors and others to assist with his oversight responsibilities, such as conducting interviews for ongoing inspections and mediating disputes between Commission officials and grant recipients regarding grant payments. Based on the budget and expenditure information we received from the Commission, we found that during fiscal years 2011 through 2013, the OIG spent, on average, 84 percent of the budgetary resources provided to the office each fiscal year. The Commission reported that the budgeted amounts not used by the OIG within the fiscal year to which they were allocated were returned to the Commission and were available for the Commission’s use. The OIG did not carry over unused funding into the next fiscal year. Our review of the OIG’s use of the resources provided in fiscal years 2011 through 2013 showed that about 59 percent of the OIG’s annual budget was for salary and benefits for the IG. The rest of the annual budget was for the Commission’s annual financial statement audit (13.4 percent), travel (12.4 percent), other contract services (11.7 percent), training (3.1 percent), supplies (0.3 percent), and the CIGIE assessment (0.2 percent). (See fig. 2.) During fiscal years 2011 through 2013, the Commission OIG issued six semiannual reports to the Congress, as required by the IG Act, and conducted 12 inspections. The 12 inspections conducted by the Commission OIG reviewed various issues, such as management policies and practices and compliance with applicable laws. The OIG did not perform any audits or investigations.
The IG told us that he communicated the results of the 12 completed inspections through written inspection reports available on the OIG’s website, inspection results included in the semiannual reports to the Congress, and inspection results included in the Commission’s annual agency financial reports (see fig. 3). During fiscal years 2011 through 2013, the OIG provided limited oversight of the Commission’s major programs and operations. Under the IG Act, OIG oversight includes assessing the effectiveness and efficiency of agency programs and operations; providing leadership and coordination to detect fraud and abuse; and making recommendations to management to promote the economy, efficiency, and effectiveness in the administration of these programs. It also includes providing a means for keeping the head of the agency and the Congress informed about problems and deficiencies relating to program administration and agency operations and the necessity for, and progress of, corrective action. The 12 OIG inspections provided oversight of less than 1 percent of the total grant dollars the Commission awarded during fiscal years 2011 through 2013. The OIG contracted with an independent public accountant (IPA) to conduct the Commission’s annual financial statement audit but did not follow up on the IPA’s concerns related to grant monitoring. Furthermore, the OIG did not have a risk-based annual work plan or policies and procedures to identify the Commission’s major programs and operations that needed OIG oversight. Without adequate OIG oversight of the Commission’s programs and operations, including grants, the OIG is unable to reasonably ensure accountability over federal funds. The OIG is also limited in its ability to minimize the Commission’s risk of fraud, waste, and abuse occurring in its major programs and operations. The Commission’s OIG oversight covered a small percentage of the Commission’s programs.
During fiscal years 2011 through 2013, the Commission’s major programs were energy, transportation, health facilities, and training. These programs represent approximately 84 percent of funds granted by all Commission programs. According to the Commission’s CFO, during fiscal years 2011 through 2013, the agency awarded grants totaling $56 million and disbursed $167 million on both new and previously awarded grants. Our analysis of the 12 inspections completed by the OIG over that period found that 5 of these inspections focused on the Commission’s grant administration and 7 focused on the agency’s operations. Of the 5 grant-related inspections, only 2 clearly identified specific grant amounts disbursed by the Commission that were examined by the OIG. In these 2 inspections, the OIG provided oversight for $150,000 of grant funds disbursed for training programs, all of which were reported in fiscal year 2012. The $150,000 of grant funds inspected by the OIG represented less than 1 percent of the total grants awarded by the Commission during fiscal years 2011 through 2013. We found that 3 of the OIG’s inspection reports examined various complaints and issues related to the grants process, such as assessing whether a grant applicant was improperly denied a subaward (grant preaward stage), assessing whether certain agency policy resulted in the unfair treatment of a grantee (grant implementation stage), and determining whether a grantee was treated unfairly because of a specific Commission policy and legal requirements that were attached to a grant (grant implementation stage). However, the OIG did not conduct any inspections that assessed the effectiveness and efficiency of the agency’s other major programs—energy, transportation, and health facilities—or make recommendations to management promoting the economy, efficiency, and effectiveness in the administration of these programs, which is one of the major OIG goals established in the IG Act.
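The "less than 1 percent" figure follows directly from the dollar amounts above; a brief calculation (ours, not the report’s) makes the comparison explicit:

```python
# Illustrative check: share of awarded grant dollars covered by OIG inspections.
# Dollar amounts come from the report; the variable names are ours.
grants_awarded = 56_000_000  # total grants awarded, fiscal years 2011-2013
grants_inspected = 150_000   # grant funds examined in the 2 inspections

share_pct = grants_inspected / grants_awarded * 100
print(f"Share inspected: {share_pct:.2f} percent")  # well under 1 percent
```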
The other 7 inspections completed by the OIG over that period focused on (1) whether agency operations complied with applicable laws and regulations, (2) the Commission’s authority for accepting funds from nonfederal sources, and (3) potential agency restructuring. According to the IG, his workload was driven by requests from four sources: the Federal Cochair, aided by the CFO; Office of Management and Budget (OMB) officials; the House Committee on Oversight and Government Reform; and three Senate oversight committees (Finance, Budget, and Homeland Security and Governmental Affairs). The training grants he inspected were based on referrals from the Federal Cochair. Under the IG Act, OIGs are responsible for coordinating audits and investigations. Further, OIGs are required by the IG Act to adhere to professional standards developed by CIGIE, to the extent permitted by law and not inconsistent with applicable auditing standards. The Commission OIG’s primary vehicle for oversight was the inspection. CIGIE’s Quality Standards for Inspection and Evaluation defines an inspection as a systematic and independent assessment of the design, implementation, or results of an agency’s operations, programs, or policies. The inspection function at each agency is tailored to its unique mission; is not overly prescriptive; and may be used by the agency to provide factual and analytical information, measure performance, identify savings and funds put to better use, or assess allegations of fraud, waste, abuse, and mismanagement. The Commission OIG’s policies and procedures for inspections specifically stated that the Commission OIG “will conduct its interviews of agency issues through an ‘inspection’ methodology that conforms to the CIGIE quality standards for that procedure.” As part of its oversight duties, the OIG is responsible for selecting and overseeing the IPA responsible for performing the Commission’s annual financial statement audit.
These responsibilities include providing technical advice, serving as the agency liaison to the IPA, and ensuring that the audit was completed on time and in accordance with generally accepted government auditing standards. As the agency’s Contracting Officer’s Technical Representative (COTR) for the Commission’s annual financial statement audit, the IG developed detailed policies and procedures and completed a detailed audit monitoring plan documenting the OIG’s oversight activities. The IG also reviewed the IPA’s workpapers at key phases during the audit process to determine whether the fieldwork completed supported the IPA’s conclusions. We found that the OIG had practices in place to effectively monitor the annual financial statement audit conducted by the IPA. In fiscal years 2011 through 2013, the IPA reported several concerns to the Commission about the agency’s grants monitoring activities. A sample of grant-funded projects reviewed by the IPA found that the Commission did not (1) have a follow-up process to determine whether grants were used as intended; (2) include the review of the grantee’s single audit reports as part of its grants monitoring practices; (3) review past performance (and current status of previous projects) to ensure that the grant was used as intended, prior to approval of new grants; and (4) assess the extent to which it could recapture grant amounts from grantees as a result of substantial changes in the use of these grants. Although the OIG effectively monitored the IPA performing the Commission’s annual financial statement audit, we found that the OIG did not follow up after the audit was completed to ensure that the Commission addressed the IPA’s concerns about the agency’s grants monitoring practices. We found that the OIG issued inspections related to some of the Commission’s major programs and operations; however, the OIG did not conduct any performance audits related to these same programs and operations.
There are fundamental differences between inspections and audits. Inspections are narrower and more focused in scope than audits, and they are significantly less rigorous than an audit conducted in accordance with Government Auditing Standards. Audits provide essential accountability and transparency over government programs. According to the IG, he leveraged his resources (one full-time employee) to do the most good. The IG stated that his office was not staffed at a level that would support audits; he decided inspections were an effective method for leveraging what he had and responding to very specific issues that were often complaint-driven. However, because the OIG did not conduct audits of the agency’s programs and operations, the Commission did not have the benefit of the broader scope and more rigorous standards of audits to help ensure effective grant oversight, accountability for grant funds, and the proper use of taxpayer dollars. We also found that the OIG did not conduct investigations. OIG investigations help federal agency managers strengthen program integrity and use funds more effectively and efficiently. Investigations vary in purpose and scope and may involve alleged violations of criminal or civil laws as well as administrative requirements. The focus of an investigation can include the integrity of programs, operations, and personnel in agencies at federal, state, and local levels of government. According to CIGIE’s Quality Standards for Investigations, areas investigated by the OIG may also focus on issues related to procurement and grant fraud schemes; environment, safety, and health violations; benefits fraud; the background and suitability of individuals for employment or a security clearance designation; whistle-blower retaliation; and other matters involving alleged violations of law, rules, regulations, and policies.
Some investigations address allegations of both civil and criminal violations, ranging in significance from a misdemeanor to a felony, while others could involve administrative misconduct issues. CIGIE’s Quality Standards for Investigations also state that investigations can lead to criminal prosecutions and program exclusions; recovery of damages and penalties through criminal, civil, and administrative proceedings; and corrective management actions. Without conducting investigations, the Commission OIG was limited in its ability to identify criminal, civil, and administrative activities of fraud or misconduct related to Commission programs and operations. Our review of the OIG’s policies and procedures in place during fiscal years 2011 through 2013 found that the OIG did not document its policies and procedures for its management and operations as an OIG. CIGIE’s Quality Standards for Federal Offices of Inspector General sets forth the overall quality framework to which OIGs must adhere, to the extent permitted under law. Although the OIG did not document its policies and procedures for managing and operating its office, we identified some quality standards that were implemented, while others were not. Our review of the OIG’s inspections issued during fiscal years 2011 through 2013 found that the inspections did not fully adhere to CIGIE’s Quality Standards for Inspection and Evaluation. While the OIG had documented policies and procedures for inspections, we found that the design and implementation of the inspection policies and procedures did not fully adhere to professional standards. Our review also found that the semiannual reports submitted to the Congress by the IG did not include required information in accordance with the reporting requirements of the IG Act. The Commission OIG did not have documented policies and procedures for conducting office operations that adhered to CIGIE’s Quality Standards for Federal Offices of Inspector General. 
These quality standards guide the management and operation of federal OIGs and consist of (1) ethics, independence, and confidentiality; (2) professional standards; (3) ensuring internal control; (4) maintaining quality assurance; (5) planning and coordinating; (6) communicating the results of OIG activities; (7) managing human capital; (8) reviewing legislation and regulations; and (9) receiving and reviewing allegations. Although the Commission OIG did not document its policies and procedures for its operations and management, we found that the OIG did implement, to some extent, certain standards in CIGIE’s Quality Standards for Federal Offices of Inspector General. Specifically, the Commission OIG implemented, to some extent, the following five quality standards: ethics, independence, and confidentiality; professional standards; communicating results of OIG activities; managing human capital; and reviewing legislation and regulations. However, the Commission OIG did not implement the following four quality standards that are critical for the management and operations of the OIG: planning and coordinating, maintaining quality assurance, ensuring internal control, and receiving and reviewing allegations. The extent to which the IG implemented these quality standards is discussed below. Ethics, independence, and confidentiality. The CIGIE quality standard for ethics, independence, and confidentiality states that the IG and OIG staff shall adhere to the highest ethical principles by conducting their work with integrity. Objectivity, independence, professional judgment, and confidentiality are all elements of integrity. We found no evidence to indicate that the IG did not adhere to CIGIE’s quality standard to ethically conduct his work and no evidence to indicate the IG did not adhere to CIGIE’s quality standards for independently performing his duties.
We also found no evidence to indicate that the IG did not safeguard the identity of confidential sources and protect privileged, confidential, and national security or classified information in compliance with applicable laws, regulations, and professional standards. Professional standards. The CIGIE quality standard for professional standards states that each OIG shall conduct, supervise, and coordinate its audits, investigations, and inspections in compliance with applicable professional standards. We found that the Commission OIG provided some evidence for adhering to professional standards. Although the Commission OIG’s inspection reports did not always adhere to CIGIE’s Quality Standards for Inspection and Evaluation, we found that the IG did complete inspections. Also, the OIG’s monitoring of the contract with the IPA hired to conduct the Commission’s annual financial statement audit documented the OIG’s detailed oversight and coordination of this agency requirement, providing evidence of adherence to Government Auditing Standards. Ensuring internal control. The CIGIE standard for ensuring internal control states that each IG and OIG staff shall direct and control OIG operations consistent with Standards for Internal Control in the Federal Government. These standards require that internal control be part of the OIG’s management infrastructure, serve as a continuous built-in component of operations effected by people, and provide reasonable assurance that the OIG’s objectives are met. The internal control structure includes the control environment, risk assessment, control activities, information and communication, and monitoring. Control activities are policies, procedures, techniques, and mechanisms that help ensure that the OIG’s directives are carried out. Effective internal control also assists the OIG in managing change to cope with shifting environments and evolving demands. 
An internal control structure is continually assessed and evaluated to ensure that it is well designed and operated, is appropriately updated to meet changing conditions, and provides reasonable assurance that objectives are being achieved. The OIG should design internal control activities to contribute to its mission, goals, and objectives. Specifically, control activities include a wide range of diverse activities, such as approvals, authorizations, verifications, reconciliations, performance reviews, security activities, and the production of records and documentation. We found that the Commission OIG lacked critical elements of an effective internal control structure. For example, the OIG did not conduct a risk assessment to determine which agency programs or operations to evaluate. Instead, the IG relied on the input from Commission officials and congressional staff to determine which programs and operations to evaluate. The OIG also lacked policies and procedures for managing and operating its office, which would have provided the needed guidance to ensure that the OIG’s directives were carried out efficiently and effectively. While we acknowledge that the OIG is an office of one, independently determining which programs or agency operations to evaluate as well as developing policies and procedures for the OIG’s management and operations are elements of internal control that are still achievable by a small office. Without an effective internal control structure, it is difficult for an OIG to ensure its own effective and efficient management and operations and safeguard its assets. Maintaining quality assurance. The CIGIE standard for maintaining quality assurance states that each OIG shall establish and maintain a quality assurance program to ensure that work performed adheres to established OIG policies and procedures; meets applicable professional standards; and is carried out economically, efficiently, and effectively. 
Because OIGs evaluate how well agency programs and operations are functioning, they have a special responsibility to ensure that their own operations are as effective as possible. The OIG quality assurance program is an evaluative effort conducted by reviewers external to the units or personnel being reviewed to ensure that the overall work of the OIG meets appropriate standards. The quality assurance program has an internal and an external component. Furthermore, organizations that perform audits in accordance with Government Auditing Standards are subject to a peer review at least once every 3 years, which must provide reasonable assurance that the audit organization and its personnel comply with professional standards and applicable legal and regulatory requirements. Internal quality assurance reviews can include reviews of all aspects of the OIG’s operations and are distinct from regular management and supervisory activities, comparisons, and other activities by OIG staff performing their duties. External quality assurance reviews provide OIGs with added assurance regarding their adherence to prescribed standards, regulations, and legislation through a formal objective assessment of OIG operations. OIGs are strongly encouraged to have external quality assurance reviews of audits, investigations, inspections, evaluations, and other OIG activities. While the nature and extent of an OIG’s quality assurance program depends on a number of factors—such as the OIG size, the degree of operating autonomy allowed its personnel and its offices, the nature of its work, its organization structure, and appropriate cost-benefit considerations—CIGIE standards state that each OIG shall establish and maintain a quality assurance program. The Commission OIG did not have a quality assurance program and had not developed policies and procedures to help ensure quality assurance.
The Commission IG told us that peer reviews were only required if an OIG had conducted audits, and because his office did not perform audits, it was not subject to this quality assurance requirement. Nevertheless, the Commission OIG inspection and semiannual reports could have been subjected to an internal quality assurance review, an external quality assurance review, or both. The Commission OIG provided draft inspection and semiannual reports to the Federal Cochair and the other commissioners, providing management an opportunity to comment on the drafts prior to final issuance. However, the Federal Cochair and commissioners do not qualify as external quality assurance reviewers because they are directly involved in the activities or programs being reviewed. In addition, they may not be familiar with applicable professional standards that govern OIG-issued work products. Without documented policies and procedures for maintaining a quality assurance program, the OIG could not ensure that its management and operations adhered to the CIGIE standards or complied with the IG Act. In addition, the risk is significantly increased that issued work will not meet established standards of performance, including applicable professional standards, or be carried out economically, efficiently, and effectively. While we acknowledge that the Commission OIG is an office of one and maintaining quality assurance under these circumstances presents challenges, adherence to this quality standard is required. Planning and coordinating. The CIGIE standard for planning and coordinating states that each OIG shall maintain a planning system assessing the nature, scope, and inherent risks of agency programs and operations, as well as strategic and performance plans, including goals, objectives, and performance measures to be accomplished by the OIG within a specific time period.
Some of the elements of the planning process include (1) using a strategic planning process that carefully considers current and emerging agency programs, operations, risks, and management challenges; (2) developing a methodology and process for identifying and prioritizing agency programs and operations as potential subjects for audit, investigation, inspection, or evaluation; and (3) using an annual performance planning process that identifies the activities to audit, investigate, inspect, or evaluate and translates these priorities into outcome-related goals, objectives, and performance measures. Strategic and annual work plans are useful tools in documenting the IG’s strategic vision for providing leadership for activities designed to promote economy, efficiency, and effectiveness for an entity’s programs and operations. These planning standards are set out in Council of the Inspectors General on Integrity and Efficiency, Quality Standards for Federal Offices of Inspector General (Washington, D.C.: August 2012). We found that the OIG did not prepare an annual work plan or strategic plan. Instead, an informal and undocumented planning process was used by the IG and Federal Cochair that involved routine meetings, e-mails, and conversations. Without an annual work plan or strategic plan, the Commission OIG is limited in its ability to ensure that the oversight it provided was relevant, timely, and responsive to the priorities of the Commission. Further, without a risk-based approach for oversight that includes identifying and prioritizing agency programs and operations as potential subjects for audit, investigation, inspection, or evaluation, the OIG did not have a road map to help guide the general direction and focus of its work to ensure appropriate oversight of the Commission’s major programs. Communicating results of OIG activities.
The CIGIE quality standard related to communicating the results of OIG activities states that the OIG shall keep agency management, program managers, and the Congress fully and currently informed about appropriate aspects of OIG operations and findings. The OIG should also assess and report to the Congress, as appropriate, the OIG’s strategic and annual performance, as well as the performance of the agency it oversees. Furthermore, the OIG is responsible for reporting promptly to the Attorney General whenever the IG has reasonable grounds to believe there has been a violation of federal criminal law. The IG and Federal Cochair told us that they did discuss the areas the IG planned to inspect. The OIG communicated the results of its activities by submitting semiannual reports to the Congress, ensuring that inspection reports were available on the OIG’s website, and meeting with congressional staff to discuss various issues. Managing human capital. The CIGIE quality standard for managing human capital states that the OIG should have a process to ensure that OIG staff possess the core competencies needed to accomplish the OIG’s mission. Because the Commission OIG consisted of the IG and no staff, standards for managing human capital are applicable only to the Commission IG. The IG provided documentation verifying that as a certified public accountant and attorney in the state of Alaska, he had met the continuing education requirements for these designations and possessed the core competencies needed to accomplish the OIG’s mission. Because the Commission OIG was an office of one, the IG used the services of others to assist with his oversight duties. As discussed earlier, he contracted with an IPA for the agency’s annual financial statement audit and contracted with a retired investigator to assist with inspections. We found that the OIG had a process to ensure that these contractors possessed the needed skills for the services they provided. 
Reviewing legislation and regulations. The CIGIE quality standard for reviewing legislation and regulations states that the OIG shall establish and maintain a system for reviewing and commenting on existing and proposed legislation, regulations, and directives that affect both the programs and operations of the OIG's agency and the mission and functions of the OIG. While the OIG had not established a documented system for the steps it followed for reviewing legislation and regulations, we found an assessment of relevant Commission-related legislation and regulations in the OIG's semiannual reports to the Congress. Receiving and reviewing allegations. The CIGIE quality standard for receiving and reviewing allegations states that the OIG shall establish and follow policies and procedures for receiving and reviewing allegations. This process should ensure that appropriate disposition, including appropriate notification, is made for each allegation. Furthermore, the IG Act requires each OIG to establish a direct link on the OIG website for individuals to anonymously report fraud, waste, and abuse. The Commission OIG did not have an OIG hotline link on its website to serve as a mechanism for receiving and reviewing allegations, as appropriate. The IG provided his e-mail address and telephone number on the Commission's OIG website. He reported that there was no OIG hotline link on the website because the Commission had only about 15 employees, and a tip through an OIG hotline was not necessarily how employees made contact with the OIG. According to the IG, contact with the Commission's small workforce was primarily through e-mails, phone calls, and group teleconferences.
OIG hotlines exist to elicit information from federal employees, contractors, and the general public that furthers an OIG’s mission to (1) promote effectiveness, efficiency, and economy in its organization’s programs and operations and (2) prevent and detect fraud, waste, and abuse in such programs and operations. Accordingly, hotlines play a critical role in the work of OIGs, because an OIG can only investigate, refer, or otherwise handle matters of which it is aware. Agency employees, contractors, and members of the public who make reports to an OIG via its hotline are an important resource because they can provide the OIG with notification of or insider information about potential problems. Hotlines have been used in organizations as a means for individuals fearing retaliation to seek remedies for problems anonymously within the organization. In recent years, there has been increased interest in the use of OIG hotlines as the principal mechanism for reporting and detecting fraud, waste, and abuse. Entities both within and outside the IG community have studied OIG hotlines and their important impact on the effectiveness of the IG community. In addition to detecting fraud, waste, and abuse, hotlines are used by some OIGs to identify agency programs or operations as potential subjects for audit or investigation. However, the Commission OIG did not conduct any investigations for criminal prosecution, and there was no supporting evidence of the disposition of referrals or tips received. Without an established OIG hotline, with its protection of anonymity, it may be difficult for agency employees, contractors, and the general public to report insider information about potential problems at the Commission. We reviewed the OIG’s work products, which consisted of inspection reports and semiannual reports issued during fiscal years 2011 through 2013, and their associated policies and procedures. 
Our evaluation of the OIG’s written policies and procedures for inspections found that they did not include guidance for all of the 14 CIGIE inspection standards and that there were deficiencies in the guidance that was included. In addition, we found that the inspection reports the OIG issued during fiscal years 2011 through 2013 did not fully adhere to applicable CIGIE inspection standards. Finally, we found that the semiannual reports issued by the OIG during fiscal years 2011 through 2013 did not fully comply with the reporting requirements per the IG Act. CIGIE’s Quality Standards for Inspection and Evaluation promulgates 14 sets of criteria for performing inspections: (1) competency; (2) independence; (3) professional judgment; (4) quality control; (5) planning; (6) data collection and analysis; (7) evidence; (8) records maintenance; (9) timeliness; (10) fraud, other illegal acts, and abuse; (11) reporting; (12) follow-up; (13) performance management; and (14) working relationships and communication. CIGIE inspection standards state that it is the responsibility of each OIG that conducts inspections to develop internal written policies and procedures to ensure that all work adheres to the standards and is in compliance with the IG Act. The IG Act requires OIGs to adhere to these standards to the extent permitted under law and not inconsistent with applicable auditing standards. The Commission OIG had established written policies and procedures that provide guidance for 7 of the 14 CIGIE standards; however, our review of the guidance found deficiencies. Regarding implementation of the CIGIE inspection standards, we reviewed the OIG’s 12 inspections reported from fiscal years 2011 through 2013 and found documentary evidence that some CIGIE standards, including some that were not included in the OIG’s policies and procedures, were implemented. However, inspections were not conducted in full accordance with the standards. 
The following standards were not included in the OIG's policies and procedures but were implemented to some extent in the conduct of inspections. Data collection and analysis. CIGIE inspection standards state that the collection of information and data focuses on the function being inspected, consistent with inspection objectives and sufficient to provide a reasonable basis for reaching conclusions. The Commission OIG did not have policies and procedures for data collection and analysis that adhered to CIGIE's standards for inspections. However, the supporting documentation for the inspections we reviewed did contain information supporting data collection for the inspections. Specifically, we found that 9 of the 12 completed inspections had supporting documentation sufficiently detailed to support the findings identified in the inspection reports. We also found that the methods used to collect supporting documentation for the inspections were reliable and valid. The supporting documentation collected consisted of source documents such as interview write-ups by the contracted investigator, relevant excerpts from the laws and regulations referenced in the inspection reports, and other information. Supporting documentation for 5 of the 12 inspections showed evidence that the information had been reviewed for accuracy and reliability, and another 4 of the 12 inspections showed evidence of partial review by the Commission IG. The remaining 3 inspection reports did not show evidence that supporting documentation had been reviewed for accuracy and reliability. Evidence. CIGIE's standards for inspections state that evidence to support findings, conclusions, and recommendations should be sufficient, competent, and relevant and should provide a basis for bringing a reasonable person to the reported conclusions and findings.
Furthermore, evidence may take many forms, such as physical, testimonial, documentary, and analytical, which includes computations, comparisons, and rational arguments. The Commission OIG did not have policies and procedures for evidence that adhered to CIGIE's standards for inspections. Although the OIG's policies and procedures stated that "the Denali IG's basic documentation will include the inspection plan, a cross-referenced copy to work papers, and detailed footnotes," they did not adhere to the CIGIE inspection standard. Additionally, we found no documented evidence in the OIG's workpapers to support the inspection conclusions and recommendations for its reports. For example, for all 12 of the inspection reports we reviewed, we did not find any workpapers containing the Commission IG's analysis of the supporting documentation or linking the processes or methods the Commission IG used to the reported findings, conclusions, or recommendations. Records maintenance. CIGIE inspection standards state that all relevant documentation generated, obtained, and used in supporting inspection findings, conclusions, and recommendations should be retained for an appropriate amount of time. The Commission OIG did not have policies and procedures for records maintenance that adhered to CIGIE's standards for inspections. Although the Commission OIG's policies and procedures did not address records maintenance, the OIG did maintain supporting documentation in its workpaper files. We found that the OIG retained documentation for 9 of the 12 completed inspection reports. However, for the 2 inspection reports included in the Commission's agency financial report, the OIG did not have any workpapers. For the remaining inspection report, the supporting documentation that was maintained was incomplete. Timeliness.
CIGIE inspection standards state that inspections should strive to deliver significant information to appropriate management officials and customers in a timely manner. The Commission OIG did not have policies and procedures for timeliness that adhered to CIGIE's standards for inspections. Although the Commission OIG's policies and procedures did not address timeliness, we found no evidence to suggest that the inspection reports did not meet the timeliness standard. This conclusion is based on the elapsed time between when the inspections began and the dates of the inspection reports, which ranged from 1 month to about 2 years. Fraud, other illegal acts, and abuse. CIGIE standards for inspections state that inspectors should be alert to any indicator of fraud, other illegal acts, or abuse. They also state that inspectors should be aware of vulnerabilities to fraud and abuse associated with the area under review to facilitate identifying potential or actual illegal acts or abuse that may have occurred. The Commission OIG did not have policies and procedures for considering fraud, other illegal acts, and abuse that adhered to CIGIE's standards for inspections. We also found that the OIG did not conduct a fraud assessment for any of the 12 inspections the OIG conducted. Follow-up. CIGIE standards for inspections state that appropriate follow-up will be performed to ensure that any inspection recommendations made to department or agency officials are adequately considered and appropriately addressed. The Commission OIG did not have policies and procedures related to following up on report recommendations to determine whether corrective actions had been taken. We found that the OIG did not perform follow-up for any of the 12 inspection reports. Of the 5 published inspection reports, the OIG did not follow up on the 3 recommendations made in those reports.
In addition, of the 5 inspections mentioned in the OIG's semiannual reports to the Congress, the OIG did not follow up on the 13 recommendations made as a result of those inspections. The remaining 2 inspections published in the agency financial report did not contain any recommendations. Performance measurement. CIGIE standards for inspections state that mechanisms should be in place to measure the effectiveness of inspection work. CIGIE standards describe the importance of being able to demonstrate how inspections contribute to the more effective management and operation of federal programs. Performance measures for OIG inspections, for example, could focus on the number of implemented recommendations and on outcomes or changes in policy. The Commission OIG did not have policies and procedures related to performance measurement that adhered to CIGIE's standards for inspections. We also found that the OIG did not establish performance measures to determine the effectiveness of completed inspections. The following standards were included in the OIG's policies and procedures and were implemented to some extent in the conduct of inspections. Competency. CIGIE's competency standard states that inspection organizations need to ensure that the personnel conducting an inspection collectively have the knowledge, skills, abilities, and experience necessary for the assignment. The Commission OIG's policies and procedures for competency adhered to CIGIE's standards for inspections. They state that the Commission IG will, as a condition of employment, maintain his or her competency to multitask as a one-person OIG. In addition, the OIG's policies and procedures state that the IG will take a minimum of 40 hours of training per fiscal year, which is in accordance with CIGIE standards. The Commission IG was a licensed attorney and certified public accountant, and he provided us documentation of his current continuing professional education credits.
Thus, we considered the Commission IG’s qualifications to be consistent with CIGIE inspection standards. Independence. The CIGIE inspection standard for independence states that in all matters relating to inspection work, the inspection organization and each individual inspector should be free both in fact and appearance from personal, external, and organizational impairments to independence. The Commission OIG’s policies and procedures adhered to CIGIE inspection standards for independence. They state the Commission IG will maintain strict political neutrality and an appropriate level of social detachment from the Commission’s management and beneficiaries as a critical element of OIG independence. We did not find any impairment, in fact or appearance, with the independence of the Commission OIG. Professional judgment. The CIGIE inspection standard for professional judgment states that due professional judgment should be used in planning and performing inspections and in reporting the results. The Commission OIG’s policies and procedures addressed professional judgment but did not address the broader intent of the CIGIE inspection standard for professional judgment. The OIG’s policy states that it will conduct interviews of agency officials through an inspection methodology that conforms to the CIGIE quality standards for that inspection procedure, which is in accordance with the CIGIE inspection standard for professional judgment. The OIG’s policy only addresses the intent to interview agency officials in accordance with these standards instead of the OIG’s intent to use professional judgment when performing all aspects of inspection procedures. This would include the intent to use professional judgment in selecting the type of inspections to perform, defining the scope and methodology, and determining the type and amount of evidence to gather. 
In addition, the problems with the OIG’s inspection plans and lack of evidence and analysis in the workpapers, as discussed in this report, are indications that the OIG’s professional judgment did not adhere to CIGIE standards. Quality control. CIGIE standard for quality control states that each OIG organization that conducts inspections should have internal quality controls for its processes and work. The Commission OIG’s policies and procedures addressed quality control but did not fully adhere to CIGIE’s inspection standards. The Commission OIG’s policy for quality control states that the OIG will arrange for feedback from an external expert for at least 50 percent of its published reports. However, the Commission OIG did not have procedures established to provide for an independent assessment of its inspection processes or inspection reports. Consequently, none of the 12 inspection reports we reviewed had an independent assessment for quality control completed. While the Commission OIG is an office of one full-time employee, which created challenges in instituting extensive quality control, the IG did not take the necessary steps to mitigate this challenge by implementing control procedures that provide an independent assessment of inspection processes and work. Planning. The CIGIE standard states that inspection planning is intended to ensure that appropriate care is given to selecting inspection topics and should be developed to clearly define the inspection objective, scope, and methodology. It may also include time frames and work assignments. Additionally, the CIGIE inspection standard for planning states that research, work planning, and coordination should be thorough enough to ensure that the inspection objectives are met. The Commission OIG’s policies and procedures addressed planning but did not fully adhere to CIGIE’s inspection standards. 
We found that the Commission OIG’s policy for planning inspections did not adhere to the CIGIE standards for inspections related to planning. The Commission OIG’s policy for planning states that the basic documentation for an inspection will include (1) an inspection plan, (2) a copy of the report with cross-references to the evidence workpapers, and (3) detailed footnotes in the report itself. This policy does not address the purpose or contents of the plan as described in the CIGIE inspection standard. Regarding implementation, we found that the OIG’s inspection plans were not adequately developed. Specifically, we found that none of the 12 inspections included clearly defined descriptions of the objective, scope, and methodology. In addition, 9 of the 12 inspection plans were not planned sufficiently to reach reasonable conclusions about the topic inspected because of a lack of detailed procedures in the inspection plan to perform the inspection. The remaining 3 inspections plans, despite not having documented the objective, scope, and methodology, did have sufficient planned steps to reach reasonable conclusions as reported in the inspection report. Reporting. The CIGIE standard states that inspection reporting shall present factual data accurately, fairly, and objectively, and present findings, conclusions, and recommendations in a persuasive manner. Additionally, the standard states that inspection reports must include the objective, scope, and methodology of the inspection and a statement that the inspection was conducted in accordance with CIGIE standards for inspection. The Commission OIG’s policies and procedures addressed reporting but did not fully adhere to CIGIE’s inspection standards. The Commission OIG’s policy for reporting states that published inspection reports will emphasize plain language, readability to a nationwide audience, and usefulness to decision makers. 
However, the OIG’s policies and procedures do not require that reports include the objective, scope, and methodology of the inspection or a statement that the inspection was conducted in accordance with CIGIE standards for inspections. Despite these omissions in the Commission OIG’s policies and procedures, we found that 1 of the 12 inspections clearly listed the objective, scope, and methodology, and 4 of 12 reports stated that the inspection was conducted in accordance with CIGIE standards for inspections. Working relationships and communication. The CIGIE standard for inspections related to working relationships and communication states that each inspection organization should seek to facilitate positive working relationships and effective communication with those entities inspected and other interested parties. The Commission OIG’s policies and procedures adhered to CIGIE’s inspection standards for working relationships and communication. The Commission OIG policy states that its key inspection procedure is management’s feedback regarding the draft report, which the Commission OIG seeks at several levels: (1) oral conversation, (2) e- mailed comments, and (3) a formal response letter for publication with the OIG’s final report. We found evidence of OIG communication with the Commission through e-mail correspondence for all published inspection reports. In addition, the OIG reported and communicated the results of OIG activities related to issued work products to agency management officials and the Congress. Section 5 of the IG Act requires that each IG shall, not later than April 30 and October 31 of each year, prepare and submit to the Congress semiannual reports summarizing the activities of the office during the immediately preceding 6-month periods ending March 31 and September 30. 
These reports are intended to keep the Congress informed by highlighting, among other things, the OIG's review of existing and proposed legislation and regulations affecting an agency's programs and operations to foster economy and efficiency and detect fraud, waste, and abuse. These reports are also intended to keep the Congress informed about significant problems, abuses, and deficiencies in an agency's programs and operations and the status of recommendations for corrective actions. While the IG Act requires that semiannual reports include a summary of matters referred to prosecutive authorities and resulting convictions, the Commission IG told us that he is not aware of anyone who has been charged in a criminal court case as a result of his work. Section 5 of the IG Act also establishes a uniform set of statistical categories under which OIGs must report the quantitative results of their audit, investigation, inspection, and evaluation activities. The statistical information reported in an OIG's semiannual report must show the total dollar value of questioned costs and the dollar value of recommendations that funds be put to better use. The Commission OIG submitted semiannual reports as required by the IG Act; however, we found that the reports did not fully comply with the reporting requirements of the IG Act. Specifically, we found that for the six semiannual reports we reviewed, the OIG did not provide statistical information showing the dollar value of recommendations that funds be put to better use or the total value of questioned costs (including a separate category for the dollar value of unsupported costs). We understand that there may not have been any amounts identified by the OIG of funds that could be put to better use or questioned costs for the reporting period.
However, if the OIG does not state this in the semiannual reports to the Congress, neither management nor the Congress has the necessary information to take appropriate actions to enhance management practices and procedures, which would result in more efficient and effective use of Commission funds. Furthermore, this statistical information is required by the IG Act and should be included in the OIG's semiannual reports to the Congress. We also found that for five of the semiannual reports we reviewed, the OIG did not identify the significant recommendations described in previous semiannual reports for which corrective action had not been completed by agency management. While the OIG provided this information in its May 2011 semiannual report, the OIG did not provide the status of the 48 open recommendations identified in that report in subsequent semiannual reports. The IG Act requires the OIG to identify each significant recommendation described in previous semiannual reports on which corrective action has not been completed by management. Not knowing the current status of the recommendations for which corrective actions are needed limits both the agency's and the Congress's awareness of outstanding actions that may still need to be taken. We found that the OIG did not have written policies and procedures to guide the preparation of its semiannual reports to the Congress. We did find that for one of the semiannual reports we tested (the report for the first half of fiscal year 2011), at the request of the Federal Cochair, the OIG included an appendix that identified and provided the status of recommendations from all the semiannual reports issued by the OIG from fiscal year 2006 through the first half of fiscal year 2011. The information in the appendix identified 159 recommendations made by the OIG during fiscal years 2006 through 2010 and the first half of fiscal year 2011.
While the IG provided the status of recommendations in fiscal year 2011, he did not provide updated information on the status of these recommendations in subsequent semiannual reports, as required by the IG Act. According to the IG, he received a request at least annually from the House Committee on Oversight and Government Reform for an update on the status of open recommendations. The IG also told us that a common focus of his meetings with OMB and congressional committee staff was the status of open recommendations. As we recently testified, GAO has long supported the creation of independent IG offices in appropriate federal departments, agencies, and entities, and we continue to believe that significant federal programs and entities should be subject to oversight by independent IGs. At the same time, we have reported some concerns about creating and maintaining small IG offices with limited resources, where an IG might not have the ability to obtain the technical skills and expertise needed to provide adequate and cost-effective oversight. Although the limitations of a single-person office can create challenges to developing and implementing policies and procedures to ensure effective oversight, if corrective actions are taken to address the issues identified in this report, the current DFE OIG structure can provide a viable option for oversight of the Commission. Nevertheless, there are alternative structures that may also facilitate effective OIG oversight of the Commission. We identified examples of alternative approaches at other federal agencies that may also provide effective OIG oversight for the Commission. Three alternative IG oversight structures and their respective advantages and disadvantages are summarized in figure 4 and more fully described in the paragraphs that follow. The Commission OIG could be consolidated into a larger IG office.
Specifically, an OIG with a presidentially appointed IG would assume the operational responsibilities of the Commission OIG as established under the IG Act. This includes reporting to the Congress semiannually; performing audits, investigations, inspections, and evaluations of program areas; and conducting and overseeing the agency's annual financial statement audit. This alternative could strengthen the quality of work and the use of resources through the implementation of best practices usually employed by larger offices headed by presidentially appointed and Senate-confirmed IGs. This oversight structure exists at the Department of State OIG. For example, the Department of State OIG has oversight authority over the Broadcasting Board of Governors (BBG), which had a budget of $712 million for fiscal year 2013. The Department of State OIG had an average annual budget of $61 million for fiscal years 2011 through 2013 and employed approximately 270 full-time and 16 part-time employees. The Department of State OIG conducts independent performance and financial statement audits, inspections, and investigations that advance the missions of the Department of State and BBG. The Department of State OIG prepares an annual performance plan (including audits, inspections, and evaluations) and a 5-year strategic plan for oversight of the Department of State and BBG, using Department of State management challenges as a baseline along with input collected from the Department of State, BBG management, and other sources of information. The Department of State OIG also uses a risk-based approach to determine which posts and bureaus should be inspected, based on the most recent inspection and other data collected during the course of its oversight work.
In addition, when possible, the Department of State OIG performs a review of BBG foreign offices during Department of State site visits, allowing it to leverage efficiencies and resources when performing other oversight work. In another example, the U.S. Agency for International Development Office of Inspector General (USAID OIG) provides oversight to several small entities, including the Millennium Challenge Corporation, U.S. African Development Foundation, Inter-American Foundation, and Overseas Private Investment Corporation, with budgets of $898 million, $30 million, $22 million, and approximately $75 million to $100 million, respectively, for fiscal year 2013. USAID OIG has approximately 230 employees and had an average budget of approximately $45.6 million for fiscal years 2011 through 2013. USAID OIG prepares annual performance (i.e., audit) plans for oversight of these entities that are aligned with its 5-year strategic plan, following consultations with stakeholders and OIG personnel. In addition to these consultations, annual performance plans are developed based on a risk assessment of the portfolios they monitor. USAID OIG audits activities relating to the worldwide foreign assistance programs and agency operations of these entities and considers several factors when assessing agency program risk, such as inherent risk, fraud and corruption risk, and control risk. Audit activities include performance audits and reviews of programs and management systems, financial statement audits, and audits related to the financial accountability of grantees and contractors. The USAID OIG also investigates allegations of fraud, waste, and abuse relating to these foreign assistance programs and operations. The quality of an OIG's work is a critical element of IG effectiveness. Consolidation with a larger OIG could improve the quality of work at the Commission OIG.
This could be accomplished by using a strategic, risk-based approach for auditing and increasing staffing resources with the requisite technical auditing and accounting expertise necessary to improve program efficiency and effectiveness. As we noted earlier, audits performed in accordance with generally accepted government auditing standards (GAGAS) provide information used for oversight, accountability, transparency, and improvements of government programs and operations. When auditors comply with GAGAS in reporting the results, their work can lead to improved management, better decision making and oversight, effective and efficient operations, accountability, and transparency for resources. In addition, consolidation with a larger OIG could increase the OIG's ability to effectively plan for work, including implementing a strategic and risk-based approach to auditing agency programs and operations of high risk. Routine access to staff resources with the requisite subject matter expertise, such as information technology personnel, payroll services personnel, and a highly trained financial management workforce, could also be an advantage of consolidating with a larger OIG. However, consolidation with larger OIGs could also result in disadvantages, such as limited contact with agency program management officials who have the institutional knowledge pertaining to agency missions and priorities. There may also be management challenges in determining the appropriate amount of resources to dedicate toward performing sufficient oversight of the Commission's programs. For example, the Commission may not be a material entity when compared to the larger agency; therefore, when using a risk-based approach, the Commission may not get the necessary OIG oversight with respect to its critical programs and operations from the larger OIG. Consolidation with a single regional commission OIG could serve as another alternative structure.
This option would consolidate the Commission OIG with a regional commission OIG. As under the consolidation with a larger IG office alternative, the regional commission OIG would assume the oversight responsibilities of the Commission OIG. There are currently seven regional commissions; however, only the Appalachian Regional Commission (ARC) and the Denali Commission have their own OIGs. Legislation enacted in 2008 directed that a single IG be appointed by the President, in accordance with the IG Act, for three of the other regional commissions, but it has not been implemented. The regional commissions are as follows: (1) Northern Border Regional Commission, (2) Southwest Border Regional Commission, (3) Southeast Crescent Regional Commission, (4) Delta Regional Authority, (5) Appalachian Regional Commission, (6) Northern Great Plains Regional Authority, and (7) Denali Commission. Regional commissions are regional development agencies that focus on developing infrastructure and targeting new resources to promote wealth generation and economic growth in distressed portions of specific geographical areas within their regions. For example, the ARC is a regional economic development agency that represents a partnership of federal, state, and local governments. Established by the Congress in the Appalachian Regional Development Act of 1965, the ARC assists the region in promoting economic development and in establishing a framework for joint federal and state efforts to provide the basic facilities essential to the region's growth on a coordinated and concerted regional basis. The ARC is composed of the governors of the 13 Appalachian states and a Federal Cochair, who is appointed by the President. Local participation is provided through multicounty local development districts. The ARC OIG reported that it had 3 full-time employees and an annual budget of approximately $634,000 and that it performed 81 audits and inspections during fiscal years 2011 through 2013.
According to the ARC OIG website, the ARC OIG provides independent and objective audits, inspections, and evaluations relating to agency programs and operations. The ARC OIG prepares a 5-year strategic plan and annual work plans to identify grant audits that represent the most significant aspect of the ARC’s programs. The ARC OIG’s grant audits are based on factors such as the value of the grant, location, type of grant, and prior history. The ARC OIG also provides a means for keeping the ARC Federal Cochair, the other commissioners, and the Congress fully informed about problems and deficiencies at the ARC. Consolidation of the Commission OIG with another regional commission OIG could serve to (1) strengthen institutional knowledge regarding agency programs and operations and (2) achieve economies of scale. Since regional commissions are focused on building the infrastructure and targeting economic growth to distressed areas in specific rural geographic locations, consolidation of the Commission OIG with another regional OIG could improve institutional knowledge at the Commission OIG. Given the similarities in their scope and mission, efficiencies may be achieved by leveraging resources between the two regional commissions. In addition, consolidation could serve to increase the availability of investigative resources to detect fraud, waste, and abuse while achieving other efficiencies. A disadvantage to this approach could be that resources become strained, limiting the effectiveness of the OIG to perform its duties for both agencies. The Commission IG stated that he spent approximately 25 percent of his time overseeing the contracted auditor for the Commission annual financial statement audit and therefore used inspections to make the best use of the limited time he had to perform oversight. 
Another alternative is to divide OIG oversight responsibilities for the agency performance audits, investigations, and inspections and the agency financial statement audits between two separate federal OIGs, such as a regional commission OIG or a larger OIG. The regional commission OIG would perform the audits, investigations, and inspections of agency programs and operations based on its similar mission and scope. The larger OIG would conduct and oversee the agency’s annual financial statement audit. A current example of this structure exists at the Department of Transportation (DOT) OIG. The DOT OIG has the authority to review the financial statement audit, property management, and business operations of the National Transportation Safety Board (NTSB), including internal accounting and administrative control systems, to determine whether they comply with applicable laws, rules, and regulations. GAO conducts broad management reviews on behalf of the NTSB. In addition, Amtrak is a DFE under the IG Act and has an OIG, but Amtrak itself, rather than the OIG, is required to engage an IPA to audit its annual financial statements. In fiscal year 2011, the Amtrak OIG began monitoring the IPA that performed the financial statement audit for Amtrak. Further, the DOT OIG is required by statute to conduct certain oversight of Amtrak operations, including an annual review of Amtrak’s budget and 5-year financial plan. This divided approach could reduce the strain of oversight responsibilities on a single OIG by sharing responsibility between two OIGs while still potentially providing sufficient agency oversight. In addition, dividing responsibilities between two OIGs would serve to leverage one OIG’s expertise (i.e., similar mission, subject matter experts, etc.) in conducting performance audits, investigations, inspections, and evaluations. 
The other OIG’s expertise could also be leveraged for conducting the annual financial statement audit. However, disadvantages in this approach could include a lack of effective communication and coordination between the two OIGs. For example, internal control deficiencies and recommendations resulting from the financial statement audit may not be communicated in a timely manner to the OIG with program and operational oversight responsibilities of the agency. This could delay the preparation and implementation of corrective action plans to address deficiencies found during the financial statement audit, which could also have a programmatic or operational impact. In addition, this approach could require the agencies to coordinate activities such as requests for financial statement audit documents and requests for documentation for performance audits and investigations. This could put additional stress on the smaller OIG to fulfill requests for documentation and meetings while still performing daily duties required at the agency. Figure 5 demonstrates how various responsibilities could be divided among various IG offices. While there is no clear-cut option with respect to the alternative OIG structures presented above, any specific decision concerning consolidations of IG offices should result from dialogue among the affected agencies, CIGIE, and the Congress. OIGs are responsible for coordinating audits, inspections, and investigations. While the Commission OIG conducted limited oversight through inspections, it did not conduct performance audits or investigations, and many of the critical standards in CIGIE’s Quality Standards for Federal Offices of Inspector General, such as planning and coordination, ensuring internal control, maintaining quality assurance, and receiving and reviewing allegations, were not addressed in the policies and procedures or the operations of the Commission OIG. 
For example, planning and coordination would include a risk-based approach to assessing the nature, scope, and inherent risk of Commission programs and operations. A risk-based approach for oversight would guide the general direction and focus of OIG work to ensure effective oversight of the Commission’s major programs and operations. Furthermore, it is important that OIG work products provide reliable information and adhere to CIGIE professional standards and the IG Act. However, we found no documented evidence in the OIG’s workpapers to support the inspection conclusions and recommendations in its reports. These OIG work products are used by the Congress and others to assess whether the Commission’s major programs and operations are achieving their desired results. We are making the following nine recommendations to the Commission IG, or to the individual or entity that ultimately assumes IG oversight responsibilities for the Commission under an alternate structure, to ensure that the Commission receives effective oversight of its major programs and operations. Develop and implement a risk-based approach that adheres to professional standards to help ensure effective oversight of the major Commission programs and operations in the form of audits and investigations. Develop policies and procedures for OIG office operations and management activities in accordance with CIGIE’s Quality Standards for Federal Offices of Inspector General. 
Implement the OIG’s policies and procedures developed in accordance with CIGIE’s Quality Standards for Federal Offices of Inspector General to ensure that the OIG’s management and operation of its office includes the following: annual work and strategic plans that identify goals, objectives, and performance measures to be accomplished by the OIG within a specific period; a quality assurance framework that includes both internal and external quality assurance reviews; an internal control structure that includes all elements of internal control, such as the control environment, risk assessment, control activities, information and communication, and monitoring; and an OIG hotline to receive and review anonymous tips, referrals, and allegations to help prevent and detect potential fraud, waste, and abuse. Update the OIG’s policies and procedures for inspections to ensure that they are fully in accordance with CIGIE’s Quality Standards for Inspection and Evaluation. Conduct inspections that are fully in accordance with CIGIE’s Quality Standards for Inspection and Evaluation and the OIG’s policies and procedures. Prepare semiannual reports to the Congress that fully comply with the reporting requirements of the IG Act. We provided a draft of this report to the Denali Commission for review and comment. The Commission concurred with the report’s conclusions and recommendations, and provided its perspective of the IG’s performance as well as the challenges for a one-person DFE OIG. The Commission’s letter is reprinted in appendix II. We are sending copies of this report to the appropriate congressional committees, the Federal Cochair and Commissioners of the Denali Commission, the Office of the Inspector General for the Department of Commerce, the Assistant Secretary for Economic Development for the Department of Commerce, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. 
If you or your staff have any questions about this report, please contact me at (202) 512-2623 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. To determine the resources appropriated to and expensed by the Denali Commission’s (Commission) Office of Inspector General (OIG) for fiscal years 2011 through 2013, we reviewed OIG-related budget justification documents and expenditure reports for the OIG’s salary and benefits, contracts, training, and travel. We reviewed the cost elements used to develop the OIG’s annual budget estimate, as well as the contract types and contract payments made to assist with OIG-related activities. We interviewed the commissioners and agency management to determine whether the OIG obtained their input for program or operation areas of concern for which they wanted assistance. We also interviewed Commission staff to gain an understanding of the resources provided to the OIG from the Commission and from other federal agencies. To determine the number of work products issued by the OIG, we reviewed the Inspector General’s (IG) activity log and the OIG’s website to identify which publications were within our scope and provided the list to the IG for confirmation that the list was complete. We requested copies of the OIG’s annual work plan and strategic plan and interviewed commissioners and Commission staff to determine the extent to which they provided input to the OIG’s annual work and strategic plans. However, the IG did not prepare written annual work and strategic plans. Therefore, we had to rely on the interviews we conducted with the commissioners and Commission staff to determine the extent to which they provided input to the IG on areas the IG evaluated. 
To determine the extent to which the IG provided oversight of the Commission’s major programs and operations, we compared the grant funds awarded and disbursed by the Commission for fiscal years 2011 through 2013 to the work products issued by the OIG. We obtained the grant award and disbursement information from the Commission (including the program descriptions for these grants) and performed procedures that allowed us to determine that the grant information provided by the Commission was sufficient for our purposes. We then compared these grant amounts to the total grant funds reviewed by the OIG in its work products. We analyzed all of the OIG’s work products issued in fiscal years 2011 through 2013, noting the objectives, scope, and methodology of the reports to determine the extent to which these work products reviewed Commission programs or operations. We reviewed the Commission’s fiscal year 2015 budget justification to identify accomplishments by program, and we also reviewed the Commission’s annual financial report to identify the budgetary authority amounts by program. We compared the fiscal year budgeted amounts reported in the Commission’s audited annual financial statements with the amounts reported in the President’s budget, which allowed us to determine that the budget amounts provided by the Commission were sufficient for our purposes. To determine whether the design of the OIG’s policies and procedures adhered to applicable professional standards, we reviewed the Inspector General Act of 1978, as amended (IG Act), and the Council of the Inspectors General on Integrity and Efficiency’s (CIGIE) Quality Standards for Federal Offices of Inspector General and Quality Standards for Inspection and Evaluation, and compared the OIG’s inspection policies and procedures to these professional standards. 
To determine the extent to which the OIG implemented the CIGIE standards and its inspection policies and procedures, we prepared a data collection instrument using the CIGIE inspection standards and the OIG’s policies and procedures. We tested all of the OIG’s work products issued during fiscal years 2011 through 2013 to determine whether the OIG’s work products adhered to the CIGIE standards and were consistent with the OIG’s inspection policies and procedures. We reviewed the OIG’s inspection reports and supporting case files and compared them to the OIG’s policies and procedures and applicable CIGIE standards, including those related to quality control, planning, evidence, and reporting. We reviewed all of the semiannual reports issued by the OIG during fiscal years 2011 through 2013 to determine whether these reports were prepared in accordance with the reporting requirements of the IG Act. We reviewed the OIG’s semiannual reports and supporting case files and compared them to the IG Act reporting standards. To determine alternatives for OIG oversight structures that exist in federal agencies that could be applied at the Commission, we used previous GAO work to identify federal OIGs that provide (or have provided) OIG oversight for smaller agencies, and also identified other regional commissions with similar missions to that of the Commission. In addition, because the Denali Commission Federal Cochair is appointed by the Secretary of Commerce, we consulted with officials from the Department of Commerce to gain an understanding of their relationship and roles and responsibilities to the Commission. We conducted structured interviews with officials from these other OIGs with structures we considered to be potential alternative OIG oversight structures to gain an understanding of how they are organized and operate. 
We analyzed prior GAO reports to review recommendations made regarding alternatives for providing OIG oversight, and we reviewed Congressional Research Service reports and other relevant reports to identify applicable criteria for OIG oversight. We conducted this performance audit from May 2013 to September 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Chanetta Reed (Assistant Director), Matthew Frideres, Maxine Hattery, Jason Kirwan, Carroll M. Warfield Jr., and Doris Yanger made key contributions to this report. | The Commission was established to promote sustainable infrastructure improvement, job training, and other economic development services in Alaska. The Commission is a designated federal entity under the IG Act and is required to have an IG. IG oversight includes assessing the effectiveness and efficiency of agency programs and operations; providing leadership and coordination to detect fraud and abuse; and making recommendations to management to promote the economy, efficiency, and effectiveness of program and agency operations. GAO was asked to review the management and operations of the Commission's OIG. 
GAO's objectives were to (1) identify the resources appropriated to and expensed by the OIG and the OIG's work products reported for fiscal years 2011 through 2013, (2) assess the extent to which the OIG provided oversight of the Commission's programs and operations, (3) determine the extent to which the design and implementation of the OIG's policies and procedures and its work products were consistent with professional standards, and (4) identify alternatives for OIG oversight of the Commission. The Denali Commission (Commission) Office of Inspector General (OIG) received budgetary resources of approximately $1 million from fiscal years 2011 through 2013. OIG budgetary resources increased approximately 7 percent from fiscal years 2011 through 2013, from approximately $310,000 to $331,000. During this period, the OIG consisted of one full-time employee, the Inspector General (IG), who obtained additional support through contracts with auditors and others. The OIG issued six semiannual reports to the Congress and conducted 12 inspections during fiscal years 2011 through 2013. The OIG provided limited oversight of the Commission's major programs (energy, transportation, health facilities, and training) and operations. GAO's analysis of the 12 inspections completed by the OIG found that the OIG provided oversight for $150,000 of the $167 million in grant funds disbursed during fiscal years 2011 through 2013. The $150,000 of grant funds inspected by the OIG represented less than 1 percent of total grants awarded by the Commission during this period. The $167 million in disbursed grant funds are subject to the Single Audit Act, as applicable. While the OIG oversaw the Commission's annual financial statement audit, it did not conduct any performance audits or investigations related to the Commission's major programs and operations. 
The OIG did not have documented policies and procedures for its office operations and management that adhered to the Council of the Inspectors General on Integrity and Efficiency's Quality Standards for Federal Offices of Inspector General. The OIG did not implement the following four quality standards that are critical for the management and operations of the OIG: planning and coordinating; maintaining quality assurance; ensuring internal control; and receiving and reviewing allegations of potential fraud, waste, and abuse. For example, the OIG did not conduct any investigations for potential criminal prosecution. Also, the OIG did not prepare an annual work or strategic plan to document the office's planned activities. Additionally, the OIG's work products were not fully consistent with applicable professional standards, its own policies and procedures for inspections, or section 5 of the Inspector General Act of 1978, as amended (IG Act). For example, there was insufficient evidence in the OIG's inspection case files to support the conclusions and recommendations reported, and the semiannual reports prepared by the OIG did not provide information on the status of OIG recommendations as required by the act. If corrective actions are taken to mitigate the challenges faced by a one-person office, the current structure of the Commission OIG is one option for OIG oversight. GAO has also identified three alternative OIG oversight structures that could be applied to the Commission: (1) consolidation into a larger OIG; (2) consolidation into a regional commission OIG; and (3) division of OIG oversight responsibilities between two separate federal OIGs, such as a regional commission OIG or a larger OIG. The Commission IG resigned on December 28, 2013. On May 28, 2014, the Commission entered into an agreement with the Department of Commerce's OIG to provide oversight services pursuant to the IG Act. 
The agreement expires on September 30, 2014, but may be extended or amended by mutual written consent of the parties. GAO is making nine recommendations to the OIG to improve the operating effectiveness and efficiency of the OIG, including steps that the OIG should take to develop and implement policies and procedures consistent with professional standards to provide oversight of Commission programs and operations. The Commission concurred with the report's conclusions and recommendations. |
A long-standing law, the so-called Posse Comitatus Act of 1878 (18 U.S.C. 1385), prohibits the use of the Departments of the Army or the Air Force to enforce the nation’s civilian laws except where specifically authorized by the Constitution or Congress. While the language of section 1385 lists only the Army and the Air Force, DOD has made the provisions of section 1385 applicable to the Department of the Navy and the U.S. Marine Corps through a DOD directive (DOD Directive 5525.5, Jan. 15, 1986). Congress has enacted various pieces of legislation authorizing a military role in supporting civilian law enforcement agencies. For example, in the Department of Defense Authorization Act for Fiscal Year 1982 (P.L. 97-86), Congress authorized the Secretary of Defense to provide certain types of assistance to civilian law enforcement agencies. This legislation also provided, however, that such U.S. military assistance does not include or permit participation in a search, seizure, arrest, or other similar activity, unless participation in such activity is otherwise authorized by law. Beginning in the early 1980s, Congress authorized an expanded military role in supporting domestic drug enforcement efforts. As part of the national counterdrug effort, for example, the U.S. military provides federal, state, and local law enforcement agencies with a wide range of services, such as air and ground transportation, communications, intelligence, and technology support. DOD counterdrug intelligence support is provided by Joint Task Force Six, which is based at Fort Bliss (El Paso, TX). This component coordinates operational intelligence in direct support of drug law enforcement agencies. Moreover, under congressional authorization that was initially provided in 1989 (32 U.S.C. 112), DOD may provide funds annually to state governors who submit plans specifying how the respective state’s National Guard is to be used to support drug interdiction and counterdrug activities. 
Such operations are conducted under the command and control of the state governor rather than the U.S. military. Also, federal, state, and local law enforcement personnel may receive counterdrug training at schools managed by the National Guard in California, Florida, and Mississippi. In 1989, Congress authorized the Secretary of Defense to transfer to federal and state agencies excess DOD personal property suitable for use in counterdrug activities, without cost to the recipient agency. In 1996, Congress authorized such transfers of excess DOD personal property suitable for use in law enforcement generally and not just specifically for counterdrug efforts. This Law Enforcement Support Program is managed by the Defense Logistics Agency. Military law enforcement agencies are major consumers of forensic laboratory services. The Army operates the U.S. Army Criminal Investigation Laboratory (Fort Gillem, GA), which provides forensic support regarding questioned documents, trace evidence, firearms and tool marks, fingerprints, imaging and technical services, drug chemistry, and serology. The Navy operates two limited-service forensic laboratories, which are referred to as Naval Criminal Investigative Service Regional Forensic Laboratories (Norfolk, VA, and San Diego, CA). Both Navy laboratories provide forensic support regarding latent prints, drug chemistry, arson, and questioned documents. The Air Force is the executive agent of the DOD Computer Forensics Laboratory (Linthicum, MD), which processes digital and analog evidence for DOD counterintelligence operations and programs as well as fraud and other criminal investigations. Generally, with the exception of participating with state or local law enforcement agencies in cases with a military interest, the military laboratories do not provide support to these agencies. 
In response to our inquiries, officials at each of the DOD components we contacted told us that they did not provide grants for any purposes, including crime technology-related assistance, to state and local law enforcement agencies during fiscal years 1996 through 1998. Moreover, we found no indications of crime technology-related grant assistance provided by DOD during our review of various DOD authorization, appropriations, and budget documents. According to the General Services Administration’s Catalog of Federal Domestic Assistance, DOD can provide grants for a variety of purposes to some non-law enforcement agencies. For example, some DOD grants may assist state and local agencies in working with the Army Corps of Engineers to control and eradicate nuisance vegetation in rivers and harbors. DOD direct funding—$563.3 million total appropriations for fiscal years 1996 through 1998—was provided for the National Guard Bureau’s counterdrug program, which covers the following six mission areas: (1) program management, (2) technical support, (3) general support, (4) counterdrug-related training, (5) reconnaissance/observation, and (6) demand reduction support. However, we determined that, with one exception, these mission areas did not involve activities that met our definition of crime technology assistance. The one exception involved courses at two of the National Guard’s three counterdrug training locations in operation during fiscal years 1996 through 1998. We considered these courses to be a “support service,” and they are discussed in the following section. Regarding support services and systems, DOD’s crime technology assistance to state and local law enforcement totaled an estimated $30 million for fiscal years 1996 through 1998. 
As table 2 shows, this assistance was provided by various DOD components—the Defense Security Service, the DOD Computer Forensics Laboratory, the Intelligence Systems Support Office, Joint Task Force Six, the military branch investigative agencies, National Guard Bureau counterdrug training schools, and the U.S. Army Military Police School. More details about the assistance provided by each of these components are presented in respective sections following table 2. As table 2 shows, the Defense Security Service estimated that its assistance to state and local law enforcement totaled approximately $5,200 during fiscal years 1996 through 1998. This total represents responses to 59 requests—with estimated assistance costs ranging from $75 to $100 per request (or an average of $87.50 per request)—for information from the Defense Clearance and Investigations Index. A single, automated central repository, the Defense Clearance and Investigations Index, contains information on (1) the personnel security determinations made by DOD adjudicative authorities and (2) investigations conducted by DOD investigative agencies. This database consists of an index of personal names and impersonal titles that appear as subjects, co-subjects, victims, or cross-referenced incidental subjects in investigative documents maintained by DOD criminal, counterintelligence, fraud, and personnel security investigative activities. For example, state and local law enforcement agencies may request and receive completed Defense Security Service investigations in support of criminal investigations or adverse personnel actions. The DOD Computer Forensics Laboratory (Linthicum, MD) became operational in July 1998. The laboratory is responsible for processing, analyzing, and performing diagnoses of computer-based evidence involving counterintelligence operations and programs as well as fraud and other criminal cases. 
According to DOD officials, forensic analyses can be provided to state and local law enforcement when there is a military interest or, in certain other instances, when specific criteria are met. In the last 3 months of fiscal year 1998 (July through Sept.), according to DOD officials, the laboratory performed 84 forensic analyses, 2 of which were for law enforcement officials in the states of North Carolina and Tennessee, respectively. As table 2 shows, DOD estimated that its costs (which were based on prorated staff hours) in providing forensic assistance to the states were $14,000 (or $7,000 per analysis). For fiscal years 1996 through 1998, DOD obligated $28.1 million for the Gulf States Initiative. Using law enforcement intelligence software, the Gulf States Initiative is an interconnected communications system among the states of Alabama, Georgia, Louisiana, and Mississippi. Included in this system are (1) specialized software for the analysis of counterdrug intelligence information, (2) a secure and reliable communications network, and (3) standardized tools to analyze and report counterdrug intelligence information. Each state operates a drug intelligence center (located in the capital city) that is connected to the hubs in other states. This system allows states to process and analyze intelligence information. At the request of a domestic law enforcement agency, DOD’s Joint Task Force Six coordinates operational, technological, intelligence, and training support for counterdrug efforts within the continental United States. For fiscal years 1996 through 1998, Joint Task Force Six officials estimated that the costs of crime technology assistance provided by this DOD component to state and local law enforcement totaled $48,800. As table 2 shows, this assistance consisted of two types—communications assessments ($16,300) and intelligence architecture assessments ($32,500). 
In providing such assistance, military personnel essentially acted as technical consultants in evaluating state or local agencies’ (1) existing communications systems, including their locations and the procedures for using them, and/or (2) intelligence organizations, functions, and systems. The military branch investigative agencies generally do not unilaterally provide assistance to state and local law enforcement. However, if there is a military interest, a military investigative agency may jointly conduct an investigation with state or local authorities. (See table I.1 in app. I.) During such collaborative efforts, the Army, Air Force, and Navy may provide forensic support in areas involving, for example, fingerprints, drug chemistry, and questioned documents. The cost data presented for the military branch investigative agencies in table 2 are the costs associated with (1) forensic analyses involving joint or collaborative cases and (2) other technology-related assistance, such as technical training. For example: In 1997, the Air Force enhanced the quality of an audiotape used as evidence for a homicide investigation for Prince George’s County, MD. The Air Force estimated its costs to be $8,400 for this assistance. In addition to the forensic analyses conducted during fiscal years 1996 through 1998, the Navy also provided technical training to 386 state and local law enforcement personnel. Such training covered various aspects of forensic technology, such as conducting DNA analyses and using computer databases. Although it does not have a forensic laboratory, the Marine Corps Criminal Investigation Division provided state and local law enforcement agencies with other types of assistance, such as the use of dog teams to detect explosives. However, we determined that these activities did not meet our definition of crime technology assistance. 
At two of its three counterdrug training locations in operation during fiscal years 1996 through 1998, the National Guard Bureau provided state and local law enforcement with courses that met our definition of crime technology assistance. According to National Guard Bureau officials, the two locations and the relevant courses (with a prorated estimated funding total of about $281,000 for the 3 fiscal years) are as follows: Multijurisdictional Counterdrug Task Force Training (St. Petersburg, FL): At this training location, the relevant course covered the use of technical equipment to intercept secure communications. This course accounted for about $60,000, or about 21 percent of the total $281,000 funding. Regional Counterdrug Training Academy (Meridian, MS): At this location, National Guard Bureau officials identified the following three relevant courses: (1) Basic Technical Service/Video Surveillance Operations, (2) Counterdrug Thermal Imagery Systems, and (3) Investigative Video Operations. These courses accounted for about $221,000, or the remaining 79 percent of the $281,000 funding total. The U.S. Army Military Police School (Fort Leonard Wood, MO) provided counterdrug training to state and local law enforcement agencies. Eight courses were conducted that focused on drug enforcement training for non-DOD students, including state and local law enforcement personnel. In response to our inquiry, DOD officials indicated that two of these courses—(1) Counterdrug Investigations and (2) Basic Analytical Investigative Techniques—fit our definition of crime technology assistance. For example, the Counterdrug Investigations course covered such topics as (1) criminal intelligence, (2) surveillance operations, and (3) technical surveillance equipment (audio/video). The Basic Analytical Investigative Techniques course trained law enforcement personnel how to maintain an automated criminal intelligence system under multijurisdictional narcotics scenarios. 
This course also covered such topics as (1) the analytical process, (2) sources of information, and (3) flowcharting. Regarding these two courses, Military Police School officials told us that training was provided to 2,121 state and local law enforcement personnel during fiscal years 1996 through 1998, at an estimated cost of over $1.4 million. During fiscal years 1996 through 1998, DOD’s in-kind assistance to state and local law enforcement totaled about $95.9 million. As table 3 shows, this category of assistance was provided by two DOD components—the Defense Information Systems Agency (about $24 million in the procurement and transfer of new equipment) and the Defense Logistics Agency (about $72.0 million in the transfer of surplus equipment). More details about the in-kind assistance provided by each of these two components are presented in respective sections following table 3. The in-kind assistance (about $24 million) provided by the Defense Information Systems Agency consisted of the procurement and transfer of equipment for the following information-sharing or communications systems: Regional Police Information System ($3 million): Arkansas, Louisiana, and Texas use this system, which (1) provides automated information capabilities for detecting and monitoring illegal drug activities within each state’s jurisdiction and (2) facilitates the sharing of both strategic and tactical intelligence among participating agencies. The Southwest Border States Anti-Drug Information System (about $21 million): This is a secure law enforcement counterdrug information-sharing system that connects intelligence databases of four southwest border states (Arizona, California, New Mexico, and Texas); the three Regional Information Sharing Systems in that area; and the El Paso Intelligence Center. This system provides for secure E-mail transmissions and includes a preestablished query system. 
The system allows all participants to query the databases of all other participants and has an administrative Web site server that offers key electronic services, such as providing agency contact information and system usage statistics. Through its Law Enforcement Support Program, the Defense Logistics Agency provided about $72.0 million of crime technology-related, in-kind assistance to state and local law enforcement during fiscal years 1996 through 1998. As table 3 shows, most of this assistance consisted of the following three types of equipment or assets: Automated data processing units, equipment, components, software, and control systems ($29.5 million); Radio and television equipment ($20.2 million); and Night vision equipment ($16.9 million). Collectively, these three categories accounted for $66.6 million or about 93 percent of the total crime technology-related, in-kind assistance (about $72.0 million) provided to state and local law enforcement by the Defense Logistics Agency during fiscal years 1996 through 1998. In its counterterrorism and counterdrug efforts, the federal government has invested considerable funds in recent years to develop technologies for detecting explosives and narcotics. For example, in 1996, we reported that DOD had spent over $240 million since 1991 to develop nonintrusive cargo inspection systems and counterdrug technologies for the Customs Service, the Drug Enforcement Administration, and other federal agencies. Although not directly intended for state and local law enforcement agencies, some of DOD’s research and development efforts have had spin-off benefits for these agencies. That is, proven technologies have resulted in crime-fighting products’ becoming commercially available for purchase by all levels of law enforcement. 
In citing two examples, DOD officials commented essentially as follows: A “percussion actuated neutralization disruptor”—funded by DOD’s Office of Special Operations and Low-Intensity Conflict—can be used to disarm or neutralize pipe bombs. Since becoming commercially available, this device has had widespread applicability in all states and municipalities. A “temporal analysis system” has been developed under DOD’s Counterdrug Technology Development Program Office. This computer-based system, which analyzes time-series and other event-related data, allows law enforcement to predict a criminal’s activities and movements. The DOD officials further commented that, while these items first became commercially available some time during fiscal years 1996 through 1998, the research and development funds associated with the items were obligated in years before 1996. We did not attempt to identify all relevant examples or to quantify the costs associated with specific products because DOD’s research and development efforts primarily and directly support federal agency needs rather than those of state and local law enforcement. Also, (1) any spin-off benefits to state and local law enforcement may not occur until years after federal research and development funds are expended and (2) the acquisition of commercially available products generally is dependent on these agencies’ own budgets. To identify relevant crime technology assistance programs, we reviewed, among other sources, the General Services Administration’s Catalog of Federal Domestic Assistance. Also, to identify funding amounts, we contacted cognizant DOD officials and reviewed budget and other applicable documents provided by DOD components. We did not independently verify the accuracy or reliability of the components’ funding data. However, to obtain an indication of the overall quality of these data, we contacted DOD officials to clarify the funding data when needed. 
Appendix I presents more details about our objectives, scope, and methodology. We performed our work from May 1999 to September 1999 in accordance with generally accepted government auditing standards. On September 14, 1999, we provided DOD with a draft of this report for comment. On September 23, 1999, DOD’s Office of the Inspector General orally informed us that the draft report had been reviewed by officials in relevant DOD components, and that these officials agreed with the information presented and had no comments. As arranged with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days after the date of this report. We are sending copies of this report to Senator Orrin G. Hatch, Chairman, and Senator Patrick J. Leahy, Ranking Minority Member, Senate Committee on the Judiciary; Representative Henry J. Hyde, Chairman, and Representative John Conyers, Jr., Ranking Minority Member, House Committee on the Judiciary; the Honorable William S. Cohen, Secretary of Defense; and the Honorable Jacob Lew, Director, Office of Management and Budget. Copies will also be made available to others upon request. If you or your staff have any questions about this report, please contact me at (202) 512-8777 or Danny R. Burton at (214) 777-5700. Key contributors to this assignment are acknowledged in appendix II.

Pursuant to a congressional request, GAO reviewed the crime technology assistance provided by the Department of Defense (DOD) to state and local law enforcement agencies during fiscal years (FY) 1996 through 1998, focusing on: (1) grants or other types of direct federal funding; (2) access to support services and systems, such as counterdrug or other intelligence centers; and (3) in-kind transfers of equipment or other assets. 
GAO noted that: (1) DOD said it provided no crime technology-related grants to state and local law enforcement agencies during FY 1996 through FY 1998; (2) although each state's National Guard received funds for its counterdrug program, these funds did not meet GAO's definition of crime technology assistance, with one exception; (3) GAO also did not find any other type of direct funding; (4) identifiable crime technology assistance provided by DOD to state and local law enforcement agencies during FY 1996 through FY 1998 totaled an estimated $125.9 million; (5) of this amount, about $95.9 million involved in-kind transfers, representing about 76 percent of the total; (6) although not directly intended for state and local law enforcement agencies, some of DOD's research and development efforts in recent years have had spin-off benefits for these agencies--particularly DOD's efforts to develop technologies for federal use in detecting explosives and narcotics; (7) for example, proven technologies have resulted in crime-fighting products--such as bomb detection equipment--becoming commercially available for purchase by all levels of law enforcement; and (8) GAO did not attempt to identify all relevant examples or to quantify the costs associated with specific products because: (a) DOD's research and development efforts primarily and directly support federal agency needs; and (b) the acquisition of any resulting commercially available products generally is dependent on state and local law enforcement agencies' own budgets.
PTSD can develop following exposure to combat, natural disasters, terrorist incidents, serious accidents, or violent personal assaults like rape. People who experience stressful events often relive the experience through nightmares and flashbacks, have difficulty sleeping, and feel detached or estranged. These symptoms may occur within the first 4 days after exposure to the stressful event or be delayed for months or years. Symptoms that appear within the first 4 days after exposure to a stressful event are generally diagnosed as acute stress reaction or combat stress. Symptoms that persist longer than 4 days are diagnosed as acute stress disorder. If the symptoms continue for more than 30 days and significantly disrupt an individual’s daily activities, PTSD is diagnosed. PTSD may occur with other mental health conditions, such as depression and substance abuse. Clinicians offer a range of treatments to individuals diagnosed with PTSD, including individual and group therapy and medication to manage symptoms. These treatments are usually delivered in an outpatient setting, but they can include inpatient services if, for example, individuals are at risk of causing harm to themselves. DOD’s screening for PTSD occurs during its post-deployment process. During this process, DOD evaluates servicemembers’ current physical and mental health and identifies any psychosocial issues commonly associated with deployments, special medications taken during the deployment, and possible deployment-related occupational/environmental exposures. The post-deployment process also includes completion by the servicemember of the post-deployment screening questionnaire, the DD 2796. DOD uses the DD 2796 to assess health status, including identifying servicemembers who may be at risk for developing PTSD following deployment. 
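The diagnostic timeline just described maps symptom duration onto a diagnosis. The sketch below expresses that rule as code purely for illustration; it is not clinical guidance, the function name is invented, and the handling of symptoms lasting beyond 30 days without significant disruption (which the text does not address) is an assumption.

```python
def classify_stress_symptoms(days_since_exposure: int,
                             disrupts_daily_activities: bool = False) -> str:
    """Illustrative mapping of the diagnostic timeline described in the text.

    Not clinical guidance: actual diagnosis rests on a clinician's
    judgment, not on symptom duration alone.
    """
    if days_since_exposure <= 4:
        # Symptoms appearing within the first 4 days after exposure
        return "acute stress reaction / combat stress"
    if days_since_exposure <= 30:
        # Symptoms persisting longer than 4 days
        return "acute stress disorder"
    # Beyond 30 days, PTSD is diagnosed only if symptoms significantly
    # disrupt daily activities; the no-disruption branch is an assumption.
    return "PTSD" if disrupts_daily_activities else "acute stress disorder"

print(classify_stress_symptoms(2))    # acute stress reaction / combat stress
print(classify_stress_symptoms(10))   # acute stress disorder
print(classify_stress_symptoms(45, disrupts_daily_activities=True))  # PTSD
```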
In addition to questions about demographics and general health, including questions about general mental health, the DD 2796 includes four questions used to screen servicemembers for PTSD. The four questions share a single stem: Have you ever had any experience that was so frightening, horrible, or upsetting that, in the past month, you (1) have had any nightmares about it or thought about it when you did not want to, (2) tried hard not to think about it or went out of your way to avoid situations that remind you of it, (3) were constantly on guard, watchful, or easily startled, or (4) felt numb or detached from others, activities, or your surroundings? The completed DD 2796 is reviewed by a DOD health care provider who conducts a face-to-face interview to discuss any deployment-related health concerns with the servicemember. Health care providers who review the DD 2796 may include physicians, physician assistants, nurse practitioners, or independent duty medical technicians—enlisted personnel who receive advanced training to provide treatment and administer medications. DOD provides guidance for health care providers using the DD 2796 and screening servicemembers’ physical and mental health. The guidance gives background information to health care providers on the purpose of the various screening questions on the DD 2796 and highlights the importance of a health care provider’s clinical judgment when interviewing and discussing responses to the DD 2796. Health care providers may make a referral for a further mental health or combat/operational stress reaction evaluation by indicating on the DD 2796 that this evaluation is needed. When a DOD health care provider refers an OEF/OIF servicemember for a further mental health or combat/operational stress reaction evaluation, the provider checks the appropriate evaluation box on the DD 2796 and gives the servicemember information about PTSD. 
The provider generally does not arrange a mental health evaluation appointment for a servicemember who receives a referral. See figure 1 for the portion of the DD 2796 that is used to indicate that a referral for a further mental health or combat/operational stress reaction evaluation is needed. DOD’s health care system, TRICARE, delivers health care services to over 9 million individuals. Health care services, which include mental health services, are provided by DOD personnel in military treatment facilities or through civilian health care providers, who may be either network providers or nonnetwork providers. A military treatment facility is a military hospital or clinic on or near a military base. Network providers have a contractual agreement with TRICARE to provide health care services and are part of the TRICARE network. Nonnetwork providers may accept TRICARE allowable charges for delivering health care services or expect the beneficiary to pay the difference between the provider’s fee and TRICARE’s allowable charge for services. VA’s health care system includes medical facilities, community-based outpatient clinics, and Vet Centers. VA medical facilities offer services that range from primary care to complex specialty care, such as cardiac or spinal cord injury care. VA’s community-based outpatient clinics are an extension of VA’s medical facilities and mainly provide primary care services. Vet Centers offer readjustment and family counseling, employment services, bereavement counseling, and a range of social services to assist veterans in readjusting from wartime military service to civilian life. Vet Centers are also community points of access for many returning veterans, providing them with information and referrals to VA medical facilities. In January 2004, DOD implemented the Deployment Health Quality Assurance Program. 
As part of the program, each military service branch must implement its own quality assurance program and report quarterly to DOD on the status and findings of the program. The program requires military installation site visits by DOD and military service branch officials to review individual medical records to determine, in part, whether the DD 2796 was completed. The program also requires a monthly report from the Army Medical Surveillance Activity (AMSA), which maintains a database of all servicemembers’ completed DD 2796s. DOD uses the information from the military service branches, site visits, and AMSA to develop an annual report on its Deployment Health Quality Assurance Program. DOD offers an extended health care benefit to some OEF/OIF veterans for a specific period of time, and VA offers health care services that include specialized PTSD services. For some OEF/OIF veterans, DOD offers three health care benefit options through the Transitional Assistance Management Program (TAMP) under TRICARE, DOD’s health care system. The three benefit options are offered for 180 days following discharge or release from active duty. In addition, OEF/OIF veterans may purchase health care benefits through DOD’s Continued Health Care Benefit Program (CHCBP) for 18 months. VA also offers health care services to OEF/OIF veterans following their discharge or release from active duty. VA’s health care services include specialized PTSD services, which are delivered by clinicians who have concentrated their clinical work in the area of PTSD treatment and who work as a team to coordinate veterans’ treatment. Through TAMP, DOD provides health care benefits that allow some OEF/OIF veterans to obtain health care services, which include mental health services, for 180 days following discharge or release from active duty. This includes services for those who may be at risk for developing PTSD. 
These OEF/OIF veterans can choose one of three TRICARE health care benefit options through TAMP. While the three options have no premiums, two of the options have deductibles and copayments and allow access to a larger number of providers. The options are TRICARE Prime—a managed care option that allows OEF/OIF veterans to obtain, without a referral, mental health services directly from a mental health provider in the TRICARE network of providers with no cost for services. TRICARE Extra—a preferred provider option that allows OEF/OIF veterans to obtain, without a referral, mental health services directly from a mental health provider in the TRICARE network of providers. Beneficiaries pay a deductible and a share of the cost of services. TRICARE Standard—a fee-for-service option that allows OEF/OIF veterans to obtain, without a referral, mental health services directly from any mental health provider, including those outside the TRICARE network of providers. Beneficiaries pay a deductible and a larger share of the costs of services than under the TRICARE Extra option. See table 1 for a description of the beneficiary costs associated with each TRICARE option. In addition, OEF/OIF veterans may purchase DOD health care benefits through CHCBP for 18 months. CHCBP began on October 1, 1994, and, like TAMP, the program provides health care benefits, including mental health services, for veterans making the transition to civilian life. Although benefits under this plan are similar to those offered under TRICARE Standard, the program is administered by a TRICARE health care contractor and is not part of TRICARE. OEF/OIF veterans must purchase the extended benefit within 60 days after their 180-day TAMP benefit ends. CHCBP premiums in 2006 were $311 for individual coverage and $665 for family coverage per month. 
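At the 2006 premium rates just cited, the total premium cost of carrying CHCBP for the full 18-month period follows directly. The totals below are computed here as an illustration; they are not figures stated in the report.

```python
# 2006 CHCBP monthly premiums, as stated in the text (dollars).
INDIVIDUAL_MONTHLY = 311
FAMILY_MONTHLY = 665
MAX_MONTHS = 18  # maximum CHCBP coverage period

# Total premium cost if the benefit is carried for the full 18 months.
print(f"Individual, full 18 months: ${INDIVIDUAL_MONTHLY * MAX_MONTHS:,}")  # $5,598
print(f"Family, full 18 months: ${FAMILY_MONTHLY * MAX_MONTHS:,}")          # $11,970
```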
Reserve and National Guard OEF/OIF veterans who commit to future service can extend their health care benefits after their CHCBP or TAMP benefits expire by purchasing an additional benefit through the TRICARE Reserve Select (TRS) program. As of January 1, 2006, premiums under TRS are $81 for individual coverage and $253 for family coverage per month. DOD also offers a service, Military OneSource, that provides information and counseling resources to OEF/OIF veterans for 180 days after discharge from the military. Military OneSource is a 24-hour-a-day, 7-day-a-week information and referral service provided by DOD at no cost to veterans. Military OneSource provides OEF/OIF veterans up to six free counseling sessions for each topic with a community-based counselor and also provides referrals to mental health services through TRICARE. VA also offers health care services to OEF/OIF veterans, and these services include mental health services that can be used for evaluation and treatment of PTSD. VA offers all of its health care services to OEF/OIF veterans through its health care system at no cost for 2 years following these veterans’ discharge or release from active duty. VA’s mental health services, which are offered on an outpatient or inpatient basis, include individual and group counseling, education, and drug therapy. For those veterans with PTSD whose condition cannot be managed in a primary care or general mental health setting, VA has specialized PTSD services at some of its medical facilities. These services are delivered by clinicians who have concentrated their clinical work in the area of PTSD treatment. The clinicians work as a team to coordinate veterans’ treatment and offer expertise in a variety of disciplines, such as psychiatry, psychology, social work, counseling, and nursing. Like VA’s general mental health services, VA’s specialized PTSD services are available on both an outpatient and inpatient basis. 
Table 2 lists the various outpatient and inpatient specialized PTSD treatment programs available in VA. (See 38 U.S.C. §§ 1710(e)(1)(D), 1712A(a)(2)(B) (2000), and VHA Directive 2004-017, Establishing Combat Veteran Eligibility.) OEF/OIF veterans can receive VA health care services, including mental health services, without being subject to copayments or other cost for 2 years after discharge or release from active duty. After the 2-year benefit ends, some OEF/OIF veterans without a service-connected disability or with higher incomes may be subject to a copayment to obtain VA health care services. VA assigns veterans who apply for hospital and medical services to one of eight priority groups. Priority is generally determined by a veteran’s degree of service-connected or other disability or by financial need. VA gives veterans in Priority Group 1 (50 percent or higher service-connected disabled) the highest preference for services and gives lowest preference to those in Priority Group 8 (no disability and with income exceeding VA guidelines). In addition to the 2-year mental health benefit, VA’s 207 Vet Centers offer counseling services to all OEF/OIF veterans with combat experience, with no time limitation or cost to the veteran for the benefit. Vet Centers are also authorized to provide counseling services to veterans’ family members to the extent this is necessary for the veteran’s post-war readjustment to civilian life. VA Vet Center counselors may refer a veteran to VA mental health services when appropriate. Using data provided by DOD from the DD 2796s, we found that about 5 percent of the OEF/OIF servicemembers in our review may have been at risk for developing PTSD, and over 20 percent received referrals for further mental health or combat/operational stress reaction evaluations. About 5 percent of the 178,664 OEF/OIF servicemembers in our review responded positively to three or four of the four PTSD screening questions on the DD 2796. 
According to the clinical practice guideline jointly developed by VA and DOD, individuals who respond positively to three or four of the four PTSD screening questions may be at risk for developing PTSD. Of those OEF/OIF servicemembers who may have been at risk for PTSD, 22 percent were referred for further mental health or combat/operational stress reaction evaluations. Of the 178,664 OEF/OIF servicemembers who were deployed in support of OEF/OIF from October 1, 2001, through September 30, 2004, and were in our review, 9,145—or about 5 percent—may have been at risk for developing PTSD. These OEF/OIF servicemembers responded positively to three or four of the four PTSD screening questions on the DD 2796. Compared with OEF/OIF servicemembers in other service branches of the military, more OEF/OIF servicemembers from the Army and Marines provided positive answers to three or four of the PTSD screening questions—about 6 percent for the Army and about 4 percent for the Marines (see fig. 2). The positive response rates for the Army and Marines are consistent with research that shows that these servicemembers face a higher risk of developing PTSD because of the intensity of the conflict they experienced in Afghanistan and Iraq. We also found that OEF/OIF servicemembers who were members of the National Guard and Reserves were not more likely to be at risk for developing PTSD than other OEF/OIF servicemembers. Concerns have been raised that OEF/OIF servicemembers from the National Guard and Reserve are at particular risk for developing PTSD because they might be less prepared for the intensity of the OEF/OIF conflicts. However, the percentage of OEF/OIF servicemembers in the National Guard and Reserves who answered positively to three or four PTSD screening questions was 5.2 percent, compared to 4.9 percent for other OEF/OIF servicemembers. 
Of the 9,145 OEF/OIF servicemembers who may have been at risk for developing PTSD, we found that 2,029, or 22 percent, received a referral—that is, had a DD 2796 indicating that they needed a further mental health or combat/operational stress reaction evaluation. Army and Air Force servicemembers had the highest rates of referral—23.0 percent and 22.6 percent, respectively (see fig. 3). Although the Marines had the second largest percentage of servicemembers who provided three or four positive responses to the PTSD screening questions (3.8 percent), the Marines had the lowest referral rate (15.3 percent) among the military service branches. During the post-deployment process, DOD relies on the clinical judgment of its health care providers to determine which servicemembers should receive referrals for further mental health or combat/operational stress reaction evaluations. Following a servicemember’s completion of the DD 2796, DOD requires its health care providers to interview all servicemembers. For these interviews, DOD’s guidance for health care providers using the DD 2796 advises the providers to “pay particular attention to” servicemembers who provide positive responses to three or four of the four PTSD screening questions on their DD 2796s. According to DOD officials, not all of the servicemembers with three or four positive responses to the PTSD screening questions need referrals for further evaluations. Instead, DOD instructs health care providers to interview the servicemembers, review their medical records for past medical history and, based on this information, determine which servicemembers need referrals. DOD expects its health care providers to exercise their clinical judgment in determining which servicemembers need referrals. 
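The screening threshold and the percentages reported above can be reproduced with a short script. This is a sketch: the threshold function is illustrative, and the counts are taken directly from the text.

```python
def may_be_at_risk(positive_responses: int) -> bool:
    """Per the VA/DOD clinical practice guideline cited in the text,
    3 or 4 positive responses to the 4 PTSD screening questions on the
    DD 2796 indicate possible risk for developing PTSD."""
    return positive_responses >= 3

# Counts stated in the text for servicemembers deployed in support of
# OEF/OIF from October 1, 2001, through September 30, 2004.
reviewed = 178_664  # servicemembers in the review
at_risk = 9_145     # responded positively to 3 or 4 of the 4 questions
referred = 2_029    # of those at risk, received a referral

print(f"At risk: {at_risk / reviewed:.1%}")                      # 5.1% ("about 5 percent")
print(f"Referral rate among at-risk: {referred / at_risk:.0%}")  # 22%
```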
DOD’s guidance suggests that its health care providers consider, when exercising their clinical judgment, factors such as servicemembers’ behavior, reasons for positive responses to any of the four PTSD screening questions on the DD 2796, and answers to other questions on the DD 2796. However, DOD has not identified whether these factors or other factors are used by its health care providers in making referral decisions. As a result, DOD cannot provide reasonable assurance that all OEF/OIF servicemembers who need referrals for further mental health or combat/operational stress reaction evaluations receive such referrals. DOD has a quality assurance program that, in part, monitors the completion of the DD 2796, but the program is not designed to evaluate health care providers’ decisions to issue referrals for mental health and combat/operational stress reaction evaluations. As part of its review, the Deployment Health Quality Assurance Program requires DOD’s military service branches to collect information from medical records on, among other things, the percentage of DD 2796s completed in each military service branch and whether referrals were made. However, the quality assurance program does not require the military service branches to link responses on the four PTSD screening questions to the likelihood of receiving a referral. Therefore, the program could not provide information on why some OEF/OIF servicemembers with three or more positive responses to the PTSD screening questions received referrals while others did not. DOD is conducting a study that is intended to evaluate the outcomes and quality of care provided by DOD’s health care system. This study is part of DOD’s National Quality Management Program. The study is intended to track those who responded positively to three or four PTSD screening questions on the DD 2796 and used the form as well to indicate they had other mental health issues, such as feeling depressed. 
One of the objectives of the study is to determine the percentage of those who were referred for further mental health or combat/operational stress reaction evaluations, based on their responses on the DD 2796. Many OEF/OIF servicemembers have engaged in the type of intense and prolonged combat that research has shown to be highly correlated with the risk for developing PTSD. During DOD’s post-deployment process, DOD relies on its health care providers to assess the likelihood of OEF/OIF servicemembers being at risk for developing PTSD. As part of this effort, providers use their clinical judgment to identify those servicemembers whose mental health needs further evaluation. Because DOD entrusts its health care providers with screening OEF/OIF servicemembers to assess their risk for developing PTSD, the department should have confidence that these providers are issuing referrals to all servicemembers who need them. Variation among DOD’s military service branches in the frequency with which their providers issued referrals to OEF/OIF servicemembers with identical results from the screening questionnaire suggests the need for more information about the decision to issue referrals. Knowing the factors upon which DOD health care providers based their clinical judgments in issuing referrals could help explain variation in the referral rates and allow DOD to provide reasonable assurance that such judgments are being exercised appropriately. However, DOD has not identified the factors its health care providers used in determining why some servicemembers received referrals while other servicemembers with the same number of positive responses to the four PTSD screening questions did not. 
We recommend that the Secretary of Defense direct the Assistant Secretary of Defense for Health Affairs to identify the factors that DOD health care providers use in issuing referrals for further mental health or combat/operational stress reaction evaluations to explain provider variation in issuing referrals. In commenting on a draft of this report, DOD concurred with our conclusions and recommendation. DOD’s comments are reprinted in appendix II. DOD noted that it plans a systematic evaluation of referral patterns for the post-deployment health assessment through the National Quality Management Program and that an ongoing validation study of the post-deployment health assessment and the post-deployment health reassessment is projected for completion in October 2006. Despite its planned implementation of our recommendation to identify the factors that its health care providers use to make referrals, DOD disagreed with our finding that it has not provided reasonable assurance that OEF/OIF servicemembers receive referrals for further mental health evaluations when needed. To support its position, DOD identified several factors in its comments that it stated may explain why some OEF/OIF servicemembers with the same number of positive responses to the four PTSD screening questions are referred while others are not. For example, DOD health care providers may employ watchful waiting instead of a referral for a further evaluation for servicemembers with three or four positive responses to the PTSD screening questions. Additionally, DOD stated in its technical comments that providers may use the referral category of “other” rather than place a mental health label on a referral by checking the further evaluation categories of mental health or combat/operational stress reaction. 
DOD also stated in its technical comments that health care providers may not place equal value on the four PTSD screening questions and may only refer servicemembers who indicate positive responses to certain questions. Although DOD identified several factors that may explain why some servicemembers are referred while others are not, DOD did not provide data on the extent to which these factors affect health care providers’ clinical judgments on whether to refer OEF/OIF servicemembers with three or four positive responses to the four PTSD screening questions. Until DOD has better information on how its health care providers use these factors when applying their clinical judgment, DOD cannot reasonably assure that servicemembers who need referrals receive them. DOD’s plans to develop this information should lead to reasonable assurance that servicemembers who need referrals receive them. DOD also described in its written comments its philosophy of clinical intervention for combat and operational stress reactions that could lead to PTSD. Central to its approach is the belief that attempting to diagnose normal reactions to combat and assigning too much significance to symptoms when not warranted may do more harm to a servicemember than good. While we agree that PTSD is a complex disorder that requires DOD health care providers to make difficult clinical decisions, issues relating to diagnosis and treatment are not germane to the referral issues we reviewed and were beyond the scope of our work. Instead, our work focused on the referral of servicemembers who may be at risk for PTSD because they answered three or four of the four PTSD screening questions positively, not whether they should be diagnosed and treated. Further, DOD implied that our position is that servicemembers must have a referral to access mental health care, but there are other avenues of care for servicemembers where a referral is not needed. 
We do not assume that servicemembers must have a referral in order to access these health care services. Rather, in this report we identify the health care services available to OEF/OIF servicemembers who have been discharged or released from active duty and focus on how decisions are made by DOD providers regarding referrals for servicemembers who may be at risk for PTSD. DOD also provided technical comments, which we incorporated as appropriate. VA provided comments on a draft of this report by e-mail. VA concurred with the facts in the draft report that related to VA. We are sending copies of this report to the Secretary of Veterans Affairs; the Secretary of Defense; the Secretaries of the Army, the Air Force, and the Navy; the Commandant of the Marine Corps; and appropriate congressional committees. We will also provide copies to others upon request. In addition, the report is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff members have any questions regarding this report, please contact me at (202) 512-7101 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made major contributions to this report are listed in appendix III. To describe the mental health benefits available to veterans who served in military conflicts in Afghanistan and Iraq—Operation Enduring Freedom (OEF) and Operation Iraqi Freedom (OIF), we reviewed the Department of Defense (DOD) health care benefits and Department of Veterans Affairs (VA) mental health services available for these veterans. We reviewed the policies, procedures, and guidance issued by DOD’s TRICARE and VA’s health care systems and interviewed DOD and VA officials about the benefits and services available for post-traumatic stress disorder (PTSD). 
We defined an OEF/OIF veteran as a servicemember who was deployed in support of OEF or OIF from October 1, 2001, through September 30, 2004, and had since been discharged or released from active duty status. We classified National Guard and Reserve members as veterans if they had been released from active duty status after their deployment in support of OEF/OIF. We interviewed officials in DOD’s Office of Health Affairs about health care benefits, including length of coverage, offered to OEF/OIF veterans who are members of the National Guard and Reserves and have left active duty status. We attended an Air Force Reserve and National Guard training seminar in Atlanta, Georgia, for mental health providers, social workers, and clergy to obtain information on PTSD mental health services offered to National Guard and Reserve members returning from deployment. To obtain information on DOD’s Military OneSource, we interviewed DOD officials and the manager of the Military OneSource contract about the services available and the procedures for referring OEF/OIF veterans for mental health services. We interviewed representatives from the Army, Air Force, Marines, and Navy about their use of Military OneSource. We interviewed VA headquarters officials, including mental health experts, to obtain information about VA’s specialized PTSD services. We reviewed applicable statutes and policies and interviewed officials to identify the services offered by VA’s Vet Centers for OEF/OIF veterans. In addition, to inform our understanding of the issues related to DOD’s post-deployment process, we interviewed veterans’ service organization representatives from The American Legion, Disabled American Veterans, and Vietnam Veterans of America. To determine the number of OEF/OIF servicemembers who may be at risk for developing PTSD and the number of these servicemembers who were referred for further mental health evaluations, we analyzed computerized DOD data. 
We worked with officials at DOD’s Defense Manpower Data Center to identify the population of OEF/OIF servicemembers from the Contingency Tracking System deployment and activation data files. We then worked with officials from DOD’s Army Medical Surveillance Activity (AMSA) to identify which OEF/OIF servicemembers had responded positively to one, two, three, or four of the four PTSD screening questions on the DD 2796 questionnaire. AMSA maintains a database of all servicemembers’ completed DD 2796s. The DD 2796 is a questionnaire that DOD uses to identify servicemembers who may be at risk for developing PTSD after their deployment and contains the four PTSD screening questions that may identify these servicemembers. The four questions are: Have you ever had any experience that was so frightening, horrible, or upsetting that, in the past month, you (1) have had any nightmares about it or thought about it when you did not want to? (2) tried hard not to think about it or went out of your way to avoid situations that remind you of it? (3) were constantly on guard, watchful, or easily startled? (4) felt numb or detached from others, activities, or your surroundings? Because a servicemember may have been deployed more than once, some servicemembers’ records at AMSA included more than one completed DD 2796. We obtained information from the DD 2796 that was completed following each servicemember’s most recent deployment in support of OEF/OIF. We removed from our review servicemembers who either did not have a DD 2796 on file at AMSA or completed a DD 2796 before DOD added the four PTSD screening questions to the questionnaire in April 2003. In all, we reviewed DD 2796s completed by 178,664 OEF/OIF servicemembers. 
To determine the criteria we would use to identify OEF/OIF servicemembers who may have been at risk for developing PTSD, we reviewed the clinical practice guideline for PTSD developed jointly by VA and DOD, which states that three or more positive responses to the four questions indicate a risk for developing PTSD. Further, we reviewed a retrospective study that found that those individuals who provided three or four positive responses to the four PTSD screening questions were highly likely to have been previously given a diagnosis of PTSD prior to the screening. To determine the number of OEF/OIF servicemembers who may be at risk for developing PTSD and were referred for further mental health evaluations, we asked AMSA to identify OEF/OIF servicemembers whose DD 2796 forms indicated that they were referred for further mental health or combat/operational stress reaction evaluations by a DOD health care provider. To examine whether DOD has reasonable assurance that OEF/OIF veterans who needed further mental health evaluations received referrals, we reviewed DOD’s policies and guidance, as well as policies and guidance for each of the military service branches (Army, Navy, Air Force, and Marines). Based on electronic testing of logical elements and our previous work on the completeness and accuracy of AMSA’s centralized database, we concluded that the data were sufficiently reliable for the purposes of this report. NDAA also directed us to determine the number of OEF/OIF veterans who, because of their referrals, accessed DOD or VA health care services to obtain a further mental health or combat/operational stress reaction evaluation. However, as discussed with the committees of jurisdiction, we could not use data from OEF/OIF veterans’ DD 2796 forms to determine if veterans accessed DOD or VA health care services because of their mental health referrals. 
DOD officials explained that the referral checked on the DD 2796 cannot be linked to a subsequent health care visit using DOD computerized data. Therefore, we could not determine how many OEF/OIF veterans accessed DOD or VA health care services for further mental health evaluations because of their referrals. We conducted our work from December 2004 through April 2006 in accordance with generally accepted government auditing standards. In addition to the contact named above, key contributors to this report were Marcia A. Mann, Assistant Director; Mary Ann Curran, Martha A. Fisher, Krister Friday, Lori Fritz, and Martha Kelly. | Many servicemembers supporting Operation Enduring Freedom (OEF) and Operation Iraqi Freedom (OIF) have engaged in intense and prolonged combat, which research has shown to be strongly associated with the risk of developing post-traumatic stress disorder (PTSD). GAO, in response to the Ronald W. Reagan National Defense Authorization Act for Fiscal Year 2005, (1) describes DOD's extended health care benefit and VA's health care services for OEF/OIF veterans; (2) analyzes DOD data to determine the number of OEF/OIF servicemembers who may be at risk for PTSD and the number referred for further mental health evaluations; and (3) examines whether DOD can provide reasonable assurance that OEF/OIF servicemembers who need further mental health evaluations receive referrals. DOD offers an extended health care benefit to some OEF/OIF veterans for a specified time period, and VA offers health care services that include specialized PTSD services. DOD's benefit provides health care services, including mental health services, to some OEF/OIF veterans for 180 days following discharge or release from active duty. Additionally, some veterans may purchase extended benefits for up to 18 months. VA also offers health care services to OEF/OIF veterans following their discharge or release from active duty. 
VA offers health benefits for OEF/OIF veterans at no cost for 2 years following discharge or release from active duty. After their 2-year benefit expires, some OEF/OIF veterans may continue to receive care under VA's eligibility rules. Using data provided by DOD, GAO found that 9,145, or 5 percent, of the 178,664 OEF/OIF servicemembers in its review may have been at risk for developing PTSD. DOD uses a questionnaire to identify those who may be at risk for developing PTSD after deployment. DOD providers interview servicemembers after they complete the questionnaire. A joint VA/DOD guideline states that servicemembers who respond positively to three or four of the questions may be at risk for PTSD. Further, a retrospective study GAO reviewed found that individuals who provided three or four positive responses to the four PTSD screening questions were highly likely to have been given a diagnosis of PTSD prior to the screening. Of the 5 percent who may have been at risk, GAO found that DOD providers referred 22 percent, or 2,029, for further mental health evaluations. DOD cannot provide reasonable assurance that OEF/OIF servicemembers who need referrals receive them. According to DOD officials, not all of the servicemembers with three or four positive responses to the PTSD screening questions will need referrals for further mental health evaluations. DOD relies on providers' clinical judgment to decide who needs a referral. GAO found that DOD health care providers varied in the frequency with which they issued referrals to OEF/OIF servicemembers with three or more positive responses: the Army referred 23 percent, the Marines about 15 percent, the Navy 18 percent, and the Air Force about 23 percent. However, DOD did not identify the factors its providers used in determining which OEF/OIF servicemembers needed referrals. 
Knowing the factors upon which DOD health care providers based their clinical judgments in issuing referrals could help explain variation in the referral rates and allow DOD to provide reasonable assurance that such judgments are being exercised appropriately. |
Today’s Army faces an enormous challenge to balance risks and resources in order to meet its many missions. Since 1990, active Army ranks have been reduced from 770,000 to 495,000 personnel, a reduction of about 36 percent. Simultaneously, world events have dictated that forces be trained and ready to respond to potential high-intensity missions in areas such as Korea and the Persian Gulf while conducting peace enhancement operations around the world. The Army currently has 10 active combat divisions compared to the 18 it had at the start of Operation Desert Storm in 1991. Four of the 10 divisions are considered contingency divisions and would be the first to deploy in the event of a major theater war. These units are the 82nd Airborne, 101st Air Assault, 3rd Infantry, and 1st Cavalry divisions. The 2nd Infantry Division, while not a contingency force division, is already deployed in Korea. The remaining five divisions, which are the focus of my testimony, are expected to deploy in the event of a second simultaneous or nearly simultaneous major theater contingency or as reinforcements for a larger-than-expected first contingency. These units are the 1st Armored, 1st Infantry, 4th Infantry, 10th Infantry, and 25th Infantry divisions. Also, these divisions have been assigned the bulk of the recent peacekeeping missions in Bosnia and Haiti, and the 4th Infantry division over the last 2 years has been conducting the Army’s advanced war-fighting experiment. Appendix I provides a list of the Army’s current active divisions and the locations of each division’s associated brigades. In the aggregate, the Army’s later-deploying divisions were assigned 66,053, or 93 percent, of their 70,665 authorized personnel at the beginning of fiscal year 1998. However, aggregate numbers do not adequately reflect the condition that exists within individual battalions, companies, and platoons of these divisions. 
This is because excess personnel exist in some grades, ranks, and skills, while shortages exist in others. For example, while the 1st Armored Division was staffed at 94 percent in the aggregate, its combat support and service support specialties were filled at below 85 percent, and captains and majors were filled at 73 percent. In addition, a portion of each later-deploying division exists only on paper because all authorized personnel have not been assigned. All these divisions contain some squads, crews, and platoons in which no personnel or a minimum number of personnel are assigned. Assigning a minimum number of personnel to a crew means having fewer personnel than needed to fully accomplish wartime missions; for example, having five soldiers per infantry squad rather than nine, tank crews with three soldiers instead of four, or artillery crews with six soldiers rather than nine. We found significant personnel shortfalls in all the later-deploying divisions. For example: At the 10th Infantry Division, only 138 of 162 infantry squads were fully or minimally filled, and 36 of the filled squads were unqualified. At the 2nd and 3rd brigades of the 25th Infantry Division, 52 of 162 infantry squads were minimally filled or had no personnel assigned. At the 1st Brigade of the 1st Infantry Division, only 56 percent of the authorized infantry soldiers for its Bradley Fighting Vehicles were assigned, and in the 2nd Brigade, 21 of 48 infantry squads had no personnel assigned. At the 3rd Brigade of the 1st Armored Division, only 16 of 116 M1A1 tanks had full crews and were qualified, and in one of the Brigade’s two armor battalions, 14 of 58 tanks had no crewmembers assigned because the personnel were deployed to Bosnia. In addition, at the Division’s engineer brigade in Germany, 11 of 24 bridge teams had no personnel assigned. At the 4th Infantry Division, 13 of 54 squads in the engineer brigade had no personnel assigned or had fewer personnel assigned than required. 
The significance of personnel shortfalls in later-deploying divisions cannot be adequately captured solely in terms of overall numbers. The rank, grade, and experience of the personnel assigned must also be considered. For example, captains and majors are in short supply Army-wide due to drawdown initiatives undertaken in recent years. The five later-deploying divisions had only 91 percent and 78 percent of the captains and majors authorized, respectively, but 138 percent of the lieutenants authorized. The result is that unit commanders must fill leadership positions in many units with less experienced officers than Army doctrine requires. For example, in the 1st Brigade of the 1st Infantry Division, 65 percent of the key staff positions designated to be filled by captains were actually filled by lieutenants or by captains who were not graduates of the Advanced Course. We found that three of the five battalion maintenance officers, four of the six battalion supply officers, and three of the four battalion signal officers were lieutenants rather than captains. While this situation represents an excellent opportunity for the junior officers, it also means that critical support functions are being guided by officers without the required training or experience. There is also a significant shortage of NCOs in the later-deploying divisions. Again, within the 1st Brigade, 226, or 17 percent, of the 1,450 total NCO authorizations were not filled at the time of our visit. As was the case in all the divisions, a significant shortage was at the first-line supervisor (sergeant E-5) level. At the beginning of fiscal year 1998, the 5 later-deploying divisions were short nearly 1,900 of the total 25,357 NCOs authorized, and as of February 15, 1998, this shortage had grown to almost 2,200. 
In recent years, in reports and testimony before the Congress, we discussed the Status of Resources and Training System (SORTS), which is used to measure readiness, and reported on the need for improvements. SORTS data for units in the later-deploying divisions have often reflected a high readiness level for personnel because the system uses aggregate statistics to assess personnel readiness. For example, a unit that is short 20 percent of all authorized personnel in the aggregate could still report the ability to undertake most of its wartime mission, even though up to 25 percent of the key leaders and personnel with critical skills may not be assigned. Using aggregate data to reflect personnel readiness masks the underlying personnel problems I have discussed today, such as shortages by skill level, rank, or grade. Compounding these problems are high levels of personnel turnover, incomplete squads and crews, and frequent deployments, none of which are part of the readiness calculation criteria. Yet, when considered collectively, these factors create situations in which commanders may have difficulty developing unit cohesion, accomplishing training objectives, and maintaining readiness. Judging by our analysis of selected commanders’ comments submitted with their SORTS reports and other available data, the problems I have just noted are real. However, some commanders apparently do not consider them serious enough to warrant a downgrade in the reported readiness rating. For example, at one engineer battalion, the commander told us his unit had lost the ability to provide sustained engineer support to the division. His assessment appeared reasonable, since company- and battalion-level training for the past 4 months had been canceled due to the deployment of battalion leaders and personnel to operations in Bosnia. As a result of this deployment, elements of the battalion left behind had only 33 to 55 percent of their positions filled. 
The commander of this battalion, however, reported an overall readiness assessment of C-2, which was based in part on a personnel level that was over 80 percent in the aggregate. The commander also reported that he would be able to achieve a C-1 status in only 20 training days. This does not seem realistic, given the shortages we noted. We found similar disconnects between readiness conditions as reported in SORTS and actual unit conditions at other armor, infantry, and support units. Many factors have contributed to shortfalls of personnel in the Army’s later-deploying divisions, including (1) the Army’s priority for assigning personnel to units, commands, and agencies; (2) Army-wide shortages of some types of personnel; (3) peacekeeping operations; and (4) the assignment of soldiers to joint and other Army command, recruiting, and base management functions. The Army uses a tiered system to allocate personnel and other resources to its units. The Army gives top priority to staffing DOD agencies; major commands such as the Central Command, the European Command, and the Pacific Command; the National Training Center; and the Army Rangers and Special Forces Groups. These entities receive 98 to 100 percent of the personnel authorized for each grade and each military occupational specialty. The 2nd Infantry Division, which is deployed in Korea, and the four contingency divisions are second in priority. Although each receives 98 to 100 percent of its aggregate authorized personnel, the total personnel assigned are not required to be evenly distributed among grades or military specialties. The remaining five later-deploying divisions receive a proportionate share of the remaining forces. Unlike priority one and two forces, the later-deploying units have no minimum personnel level. Army-wide shortages of personnel add to the shortfalls of later-deploying divisions. For example, in fiscal year 1997, the Army’s enlistment goal for infantrymen was 16,142. 
However, only about 11,300 were enlisted, which increased the existing shortage of infantry soldiers by an additional 4,800. As of February 15, 1998, Army-wide shortages existed for 28 Army specialties. Many positions in squads and crews are left unfilled or minimally filled because personnel are diverted to work in key positions where they are needed more. Also, because of shortages of experienced and branch-qualified officers, the Army has instituted an Officer Distribution Plan, which distributes a “fair share” of officers by grade and specialty among the combat divisions. While this plan has helped spread the shortages across all the divisions, we noted significant shortages of officers in certain specialties at the later-deploying divisions. Since 1995, when peacekeeping operations began in Bosnia-Herzegovina, there has been a sustained increase in operations for three of the later-deploying divisions: the 1st Armored Division, the 1st Infantry Division, and the 10th Infantry Division. For example, in fiscal year 1997, the 1st Armored Division was directed 89 times to provide personnel for operations other than war, contingency operations, training exercises, and other assignments from higher commands. More than 3,200 personnel were deployed a total of nearly 195,000 days for these assignments, 89 percent of which were for operations in Bosnia. Similarly, the average soldier in the 1st Infantry Division was deployed 254 days in fiscal year 1997, primarily in support of peacekeeping operations. Even though the 1st Armored and 1st Infantry Divisions have had 90 percent or more of their total authorized personnel assigned since they began operations in Bosnia, many combat support and service support specialties were substantially understrength, and only three-fourths of field grade officers were in place. 
As a result, the divisions took personnel from nondeploying units to fill the deploying units with the needed number and type of personnel. As a further result, the commanders of nondeploying units have squads and crews with no personnel, or a minimal number of personnel, assigned. Unit commanders have had to shuffle personnel among positions to compensate for shortages. For example, they assign soldiers who exist in the largest numbers—infantry, armor, and artillery—to work in maintenance, supply, and personnel administration due to personnel shortages in these technical specialties; assign soldiers to fill personnel shortages at a higher headquarters or to accomplish a mission for higher headquarters; and assign soldiers to temporary work such as driving buses, serving as lifeguards, and managing training ranges—vacancies that, in some cases, have resulted from civilian reductions on base. At the time of our visit, the 1st Brigade of the 1st Infantry Division had 372, or 87 percent, of its 428 authorized dismount infantry. However, 51 of these 372 soldiers were assigned to duties outside their specialties to fill critical technical shortages, command-directed positions, and administrative and base management activities. These reassignments lowered the number of soldiers actually available for training on a given day to 75 percent of those authorized. In Germany, at the 2nd Brigade of the 1st Infantry Division, 21 of 48 infantry squads had no personnel assigned due to shortages. Of the remaining 27 squads, which were minimally filled, the equivalent of another 5 squads of the Brigade’s soldiers were working in maintenance, supply, and administrative specialties to compensate for personnel shortages in those specialties. The end result is that the brigade had only 22 infantry squads with 7 soldiers each rather than 48 squads with 9 soldiers each. 
According to Army officials, the reduction of essential training, along with the cumulative impact of the shortages I just outlined, has resulted in an erosion of readiness. Readiness in the divisions responsible for peacekeeping operations in Bosnia has been especially affected because the challenges imposed by personnel shortages are compounded by frequent deployments. Universally, division officials told us that the shortage of NCOs in the later-deploying divisions is the biggest detriment to overall readiness because crews, squads, and sections are led by lower-level personnel rather than by trained and experienced sergeants. Such a situation impedes effective training because these replacement personnel become responsible for training soldiers in critical skills in which they themselves may not have been trained. At one division, concern was expressed about the potential for a serious training accident because tanks, artillery, and fighting vehicles were being commanded by soldiers without the experience needed to safely coordinate these weapon systems. According to Army officials, the rotation of units to Bosnia has also degraded the training and readiness of the divisions providing the personnel. For example, to deploy an 800-soldier task force last year, the Commander of the 3rd Brigade Combat Team had to reassign 63 soldiers within the brigade to serve in infantry squads of the deploying unit, strip nondeploying infantry and armor units of maintenance personnel, and reassign NCOs and support personnel to the task force from throughout the brigade. These actions were detrimental to the readiness of the nondeploying units. For example, gunnery exercises for two armor battalions had to be canceled, causing 43 of 116 tank crews to become unqualified on the weapon system; the number of combat systems out of commission increased; and contractors were hired to perform maintenance. 
According to 1st Armored and 1st Infantry division officials, this situation has reduced their divisions’ readiness to the point that they are not prepared to execute wartime missions without extensive training and additional personnel. If the later-deploying divisions are required to deploy to a second major theater contingency, the Army plans to fill personnel shortfalls with retired servicemembers, members of the Individual Ready Reserve, and newly trained recruits. The number of personnel needed to fill the later-deploying divisions could be substantial, since (1) personnel from later-deploying divisions would be transferred to fill any shortages in the contingency units that are first to deploy and (2) these divisions are already short of required personnel. The Army’s plan for providing personnel under a scenario involving two major theater contingencies includes unvalidated assumptions. For example, the plan assumes that the Army’s training base will be able to quadruple its output on short notice and that all reserve component units will deploy as scheduled. Army officials told us that, based on past deployments, not all the assumptions in their plans will be realized, and there may not be sufficient trained personnel to fully man later-deploying divisions within their scheduled deployment times. Finally, if retired personnel or Individual Ready Reserve members are assigned to a unit, training and crew cohesion may not occur prior to deployment because Army officials expect some units to receive personnel just before deployment. Finding solutions to the personnel problems I have discussed today will not be easy, given the Army’s many missions and reduced personnel. While I have described serious shortfalls of personnel in each of the later-deploying divisions, this condition is not necessarily new. What is new is the increased operating tempo, largely brought about by peacekeeping operations, which has exacerbated the personnel shortfalls in these divisions. 
However, before any solutions can be discussed, the Army should determine whether it wants to continue to accept the current condition of its active force, that is, five fully combat-ready divisions and five less than fully combat-capable divisions. The Army has started a number of initiatives that ultimately may help alleviate some of the personnel shortfalls I have described. These initiatives include targeted recruiting goals for infantry and maintenance positions; the advanced war-fighting experiment, which may reduce the number of personnel required for a division through the use of technology; and better integration of active and reserve forces. Efforts to streamline institutional forces may also yield personnel that could be used to fill vacancies such as those noted in my testimony. If such efforts do not yield sufficient personnel or solutions to deal with the shortages we have noted in this testimony, we believe it is important that the Army, at a minimum, review its current plans for rectifying these shortfalls in the event of a second major theater war. In particular, if the Army expects to deploy fully combat-capable divisions for such a war, it should review the viability of alleviating shortfalls predominantly with reservists from the Individual Ready Reserve. This concludes my testimony. I will be happy to answer any questions you may have at this time. 1st Cavalry Division - headquarters and three brigades at Fort Hood, Tex. 3rd Infantry Division - headquarters and two brigades at Fort Stewart, Ga., and one brigade at Fort Benning, Ga. 82nd Airborne Division - headquarters and three brigades at Fort Bragg, N.C. 101st Airborne Division - headquarters and three brigades at Fort Campbell, Ky. 2nd Infantry Division - headquarters and two brigades in Korea, and one brigade at Fort Lewis, Wash. 1st Infantry Division - headquarters and two brigades in Germany, and one brigade at Fort Riley, Kans. 
1st Armored Division - headquarters and two brigades in Germany, and one brigade at Fort Riley, Kans. 4th Infantry Division - headquarters and two brigades at Fort Hood, Tex., and one brigade at Fort Carson, Colo. 10th Mountain Division - headquarters and two brigades at Fort Drum, N.Y. 25th Infantry Division - headquarters and two brigades at Schofield Barracks, Hawaii, and one brigade at Fort Lewis, Wash. | GAO discussed its preliminary findings from its ongoing evaluation of personnel readiness in the Army's five later-deploying divisions, focusing on the: (1) extent of personnel shortages in the divisions and the extent to which these shortages are reflected in readiness reports; (2) key factors contributing to personnel shortages and the impact such shortages have on readiness; (3) Army's plans for correcting such shortages should these divisions be called upon to deploy; and (4) issues to be considered in dealing with personnel shortages. 
GAO noted that: (1) in the aggregate, the Army's five later-deploying divisions had an average of 93 percent of their personnel on board at the time of GAO's visits; (2) however, aggregate data do not fully reflect the extent of shortages of combat troops, technical specialists, experienced officers, and noncommissioned officers (NCO) that exist in those divisions; (3) the readiness reporting system that contains the aggregate data on these divisions does not fully disclose the impact of personnel shortages on the ability of the divisions' units to accomplish critical wartime tasks; (4) as a result, there is a disconnect between the reported readiness of these forces in formal readiness reports and the actual readiness that GAO observed on its visits; (5) these disconnects exist because the unit readiness reporting system does not consider some information that has a significant impact on a unit's readiness, such as operating tempo, personnel shortfalls in key positions, and crew and squad staffing; (6) the Army's priority in assigning personnel to these divisions, Army-wide shortages of personnel, frequent deployments to peacekeeping missions, and the assignment of soldiers to tasks outside of their specialty are the primary reasons for personnel shortfalls; (7) the impact of personnel shortages on training and readiness is exacerbated by the extent to which personnel are being used for work outside their specialties or units; (8) according to commanders in all the divisions, the collective impact of understaffing squads and crews, transferring NCOs away from the crews and squads they are responsible for training, and assigning personnel to other units as fillers for exercises and operations has degraded their capability and readiness; (9) if the Army had to deploy these divisions for a high-intensity conflict, these divisions would fill their units with Individual Ready Reserve soldiers, retired servicemembers, and newly recruited
soldiers; (10) however, the Army's plan for providing these personnel includes assumptions that have not been validated, and there may not be enough trained personnel to fully staff or fill later-deploying divisions within their scheduled deployment times; and (11) solutions, if any, will depend upon how the Army plans to use these divisions in the future.
According to IRS’s Statistics of Income (SOI) research, of about 51 million taxpayers who owned IRAs as of 2004, nearly 41 million owned traditional IRAs compared to more than 13 million taxpayers who owned Roth IRAs, as shown in figure 1. Of the $3.3 trillion held in IRAs as of 2004, traditional IRAs accounted for almost $3 trillion. Traditional IRAs have grown not only from contributions but also from rollovers from employer pension plans. First introduced in 1998, Roth IRAs totaled $140 billion as of 2004. While Roth IRAs and employer-based SIMPLE and SEP IRAs constitute a small share of IRA assets, the number of taxpayers who own Roth IRAs surpasses the number of those owning employer-based IRAs, as shown in figure 1. IRA assets surpass assets held in either employer-sponsored defined benefit or defined contribution plans. According to IRS’s SOI analysis, about 5.3 million taxpayers contributed about $12.6 billion to traditional IRAs for tax year 2004, and deductible contributions accounted for over three-quarters of that amount. Also, about 6.7 million taxpayers contributed more than $14.7 billion to Roth IRAs for tax year 2004. Traditional IRA contributions averaged $2,381 and Roth IRA contributions averaged $2,211 for tax year 2004. In addition to contributions, more than 3.6 million taxpayers rolled over about $215 billion into traditional IRAs from employer plans in 2004, according to IRS’s SOI analysis. Taxpayers contributing to Roth IRAs are younger on average than traditional IRA owners, as illustrated in figure 2. On the basis of IRS’s SOI analysis for tax year 2004, more taxpayers under age 55 contributed to a Roth IRA than to a traditional IRA, while the reverse is true for taxpayers ages 55 and over. Taxpayers over age 70 may contribute to Roth IRAs, but contributions to traditional IRAs are not allowed after age 70½.
Despite the large number of taxpayers owning IRAs, most taxpayers eligible for IRAs do not take advantage of the opportunity to save for retirement on a tax-preferred basis. According to IRS SOI estimates, only 10 percent of those eligible contributed in 2004. According to IRS’s SOI analysis, about 12.3 million taxpayers withdrew $140 billion—traditional IRA withdrawals accounted for more than 95 percent—during tax year 2004. Based on SOI estimates for tax year 2004, about 54 percent of taxpayers with IRA withdrawals were age 70 and older, withdrawing about $55 billion during 2004. Individuals face a myriad of tax rules in using tax-advantaged traditional and Roth IRAs, and noncompliance can trigger taxes and penalties that may reduce their retirement savings. As outlined in table 1, rules governing IRAs and associated penalties generally can be categorized into contribution rules, including contribution limits and eligibility; distribution rules; and rollover rules. Taxpayers and IRA custodians must also follow rules for reporting IRA transactions. Publication 590 explains the IRA rules that taxpayers are to follow, and IRS offers additional assistance through its Web site and toll-free phone lines. Both traditional and Roth IRAs have rules governing eligibility to contribute, and all IRA contributions are subject to an annual limit. For both IRA types, eligibility is limited to taxpayers with taxable compensation. For a traditional IRA in tax year 2008, eligibility for a full or partial deduction from taxable income depends on whether a taxpayer or spouse is covered by an employer-sponsored retirement plan as well as limits on modified adjusted gross income (AGI) and filing status, as shown in table 2.
For example, a single taxpayer not covered by an employer plan can take a full deduction regardless of income, and a married taxpayer filing jointly whose spouse is covered by an employer plan can take the full deduction if the couple’s modified AGI was $159,000 or less for 2008. A taxpayer ineligible for any deduction based on pension coverage or modified AGI can still make nondeductible contributions to a traditional IRA. A taxpayer in the phaseout range who is married and filing jointly or a qualifying widow(er) has to figure out the deductible contribution portion and decide whether to contribute the rest on a nondeductible basis. For a Roth IRA, eligibility also depends on modified AGI and filing status, as shown in table 3. For example, a single taxpayer with modified AGI of less than $101,000 is eligible for a full contribution in 2008, and married taxpayers filing jointly with modified AGI of less than $159,000 also are eligible for a full contribution. A single taxpayer with AGI of at least $116,000 or married taxpayers filing jointly with AGI of $169,000 or more cannot contribute to a Roth IRA for 2008. Taxpayers in the income phaseout range have reduced contribution limits. Coverage by an employer-sponsored retirement plan does not affect eligibility for a Roth IRA. Both traditional and Roth IRAs are subject to annual contribution limits. Table 4 shows how the annual limit has changed over time and varies depending on a taxpayer’s age. For 2008, taxpayers under age 50 can contribute up to $5,000 and those age 50 or older and under age 70½ can contribute up to $6,000 to traditional and Roth IRAs. Taxpayers age 70½ and older can contribute to a Roth IRA only. Contributions can be made any time during a year or by the due date for filing a tax return, not including extensions. For example, contributions for 2008 must be made by April 15, 2009. A taxpayer who contributes less than the annual limit cannot make up the difference after that date.
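The partial-deduction figuring that taxpayers in a phaseout range must do can be sketched in Python. This is a hypothetical simplification of the Publication 590 deduction worksheet: the linear proration, the round-up to the nearest $10, and the $200 floor are assumptions modeled on that worksheet, and the $159,000 to $169,000 phaseout range used in the example is illustrative.

```python
import math

def deductible_limit(magi, full_limit, phaseout_start, phaseout_end):
    """Simplified sketch of the traditional IRA deduction phaseout.

    Hypothetical approximation of the Publication 590 worksheet: the
    reduced limit is prorated linearly across the phaseout range,
    rounded up to the nearest $10, with an assumed $200 floor when any
    deduction remains.
    """
    if magi <= phaseout_start:
        return full_limit  # full deduction
    if magi >= phaseout_end:
        return 0  # no deduction
    fraction = (phaseout_end - magi) / (phaseout_end - phaseout_start)
    reduced = math.ceil(full_limit * fraction / 10) * 10  # round up to $10
    return max(reduced, 200)  # assumed worksheet floor

# 2008 figures from the report: $5,000 under-50 limit; married filing
# jointly with a covered spouse gets a full deduction up to $159,000 of
# modified AGI. The $169,000 upper bound is assumed for illustration.
print(deductible_limit(150_000, 5_000, 159_000, 169_000))  # 5000
print(deductible_limit(164_000, 5_000, 159_000, 169_000))  # 2500
print(deductible_limit(170_000, 5_000, 159_000, 169_000))  # 0
```

A taxpayer halfway through the assumed range can deduct half the limit; the remainder could still be contributed on a nondeductible basis.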
Total contributions to Roth and traditional IRAs in any year cannot exceed the combined contribution limit. Any contribution in excess of the limit or made by an ineligible taxpayer is subject to a 6 percent penalty annually if the excess amount—and any earnings—is not withdrawn by the date the return for the year is due, including extensions. A taxpayer can either withdraw an excess contribution to avoid the penalty or carry the excess forward as a contribution for a later year, but in the latter case the taxpayer remains liable for the excess contribution penalty. For example, if a taxpayer contributes more than allowed in 2008, that taxpayer has until April 15, 2009, to remove the excess, or the taxpayer can pay the penalty for 2008 and apply the excess as contributions for 2009. A taxpayer also can treat a contribution made to one IRA as having been made to a different IRA type; this is known as a recharacterization. For example, a taxpayer ineligible for a Roth IRA could recharacterize a contribution and transfer the amount to a traditional IRA. Whereas both IRA types share a combined contribution limit and have rules limiting eligibility, tax rules for distributions diverge for traditional and Roth IRAs. Traditional IRA distributions are taxable in the year received. Distributions are fully taxable for a taxpayer who made only deductible contributions and partially taxable for a taxpayer who also made nondeductible contributions. In contrast, Roth IRA distributions are tax-free after age 59½ as long as the taxpayer has held the Roth IRA for 5 years. For both traditional and Roth IRAs, early distributions before age 59½ for reasons other than specific exceptions result in 10 percent additional tax. Another key difference between traditional and Roth IRAs is the required minimum distribution rule to limit the tax benefit from earnings accumulating on a tax-preferred basis in an IRA.
Taxpayers age 70½ and older are required to take minimum distributions from tax-deferred traditional IRAs, whereas Roth IRA owners are not required to take distributions during their lifetime. Under 2002 regulations, which simplified the calculation, required minimum distributions for a traditional IRA are calculated using the previous year’s fair market value divided by life expectancy based on a uniform table. A taxpayer must begin required minimum distributions by April 1 of the year after turning age 70½. The second distribution must be made by December 31 of the year containing this April 1, and subsequent required minimum distributions must be made by December 31. Failure to take the required minimum is subject to a 50 percent penalty on the required amount not distributed. A taxpayer may request that IRS excuse the penalty if the excess accumulation is due to reasonable error and the taxpayer is taking steps to draw down the excess. Beneficiaries inheriting either IRA type are required to take minimum distributions, and the rules depend on whether the beneficiary is a surviving spouse, other individual, or an entity, such as a trust. Rollovers, where a taxpayer moves money from a pension plan or an IRA into another IRA, are also subject to rules specifying that a tax-free rollover must be completed within 60 days of the taxpayer receiving the money and that only one IRA-to-IRA rollover is allowed per traditional IRA during a 1-year period. If a taxpayer does not roll over money within the allowed time or rolls over an IRA too frequently, the transaction does not qualify for tax-free treatment. For a failed rollover, the distribution from the first IRA is taxable and subject to the 10 percent early distribution penalty if the taxpayer is younger than age 59½. Further, amounts contributed in excess of the annual limit to the second IRA are subject to a 6 percent additional tax to penalize excess contributions.
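The two rollover timing rules just described, the 60-day completion window and the one-rollover-per-1-year limit, amount to a simple check. This is a minimal sketch under stated assumptions: the 1-year period is approximated as 365 days, and IRS waivers, direct trustee-to-trustee transfers, and recharacterizations are not modeled.

```python
from datetime import date, timedelta

def rollover_qualifies(received, redeposited, last_rollover=None):
    """Sketch of the rollover timing rules described in the report:
    the redeposit must occur within 60 days of receipt, and only one
    IRA-to-IRA rollover is allowed during a 1-year period (approximated
    here as 365 days). Waivers are not modeled."""
    within_60_days = (redeposited - received) <= timedelta(days=60)
    past_waiting_period = (
        last_rollover is None
        or (received - last_rollover) > timedelta(days=365)
    )
    return within_60_days and past_waiting_period

print(rollover_qualifies(date(2008, 1, 2), date(2008, 2, 15)))  # True
print(rollover_qualifies(date(2008, 1, 2), date(2008, 4, 1)))   # False: past 60 days
print(rollover_qualifies(date(2008, 1, 2), date(2008, 2, 15),
                         last_rollover=date(2007, 10, 1)))      # False: within 1 year
```

A failed check in this sketch corresponds to the consequences described above: the distribution becomes taxable, plus the early distribution and excess contribution penalties where they apply.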
IRS has authority to waive the 60-day requirement and extend the rollover period where failing to waive it would be against equity or good conscience, such as in the event of a disaster or an event beyond the taxpayer’s reasonable control. The 60-day requirement is automatically waived in the cases of financial institution errors in transferring and depositing the rollover funds. Otherwise, a taxpayer can pay a user fee and apply for a private letter ruling requesting a waiver. A direct transfer between IRA trustees is not considered a rollover and is not subject to the 1-year waiting period. Similarly, a recharacterization of contributions from one type of IRA to another IRA type is not considered a rollover subject to the 1-year waiting period. Conversions, where a taxpayer pays taxes deferred in a traditional IRA to convert those amounts to a Roth IRA, are subject to income eligibility rules. For tax year 2007, a taxpayer with modified AGI of $100,000 or less and not married filing separately is eligible to make a Roth conversion. If a taxpayer is ineligible and the conversion fails, unless recharacterized, amounts distributed from the traditional IRA before age 59½ are subject to a 10 percent penalty, and the Roth IRA contribution in excess of the annual limit is subject to a 6 percent additional tax. Taxpayer reporting rules differ for traditional IRAs and Roth IRAs. For a traditional IRA, a taxpayer reports deductible contributions on line 32 of the individual tax return (Form 1040), and the deduction reduces a taxpayer’s current taxable income. A taxpayer is to report taxable traditional IRA distributions—including amounts converted to a Roth IRA—as income on line 15 of the individual tax return. Any taxpayer who makes a nondeductible traditional IRA contribution or receives a distribution from a traditional IRA and ever made nondeductible traditional IRA contributions must also file Form 8606.
In contrast to nondeductible traditional IRA contributions, taxpayers do not report Roth IRA contributions to IRS, and Roth IRA distributions are tax-free and generally are not reported on the 1040 tax return. Taxpayers who contributed more than allowed to a traditional or Roth IRA, withdrew money before age 59½, or failed to take required minimum distributions must file Form 5329 to report the associated penalties due. Custodians, including banks or mutual funds holding account owners’ IRA assets, follow the same basic procedures for traditional and Roth IRA contributions and distributions in terms of reporting to IRS. For both traditional and Roth IRAs, the custodian is required to submit a Form 5498 detailing the total contributions, rollovers, recharacterizations, and fair market value for every IRA. For example, for a taxpayer holding one traditional IRA and two Roth IRAs, the custodian should send three Form 5498s to the taxpayer and IRS. Given that taxpayers have until the return filing date to contribute to an IRA, the due date for filing Form 5498 is May 31. Custodians are also to report on Form 5498 whether a taxpayer is subject to required minimum distributions for the coming year but are not required to report to IRS the minimum amount calculated for each account. Instead, the custodians must report the minimum required distribution amount to the taxpayer or at least offer to calculate the amount; the statement or offer must include the date by which the amount must be distributed. For both traditional and Roth IRAs, the custodian is also required to submit a Form 1099-R each year that a withdrawal takes place detailing the total distributions taken from the account during the calendar year and providing some information about the distribution, such as whether the distributions were taken before age 59½ and whether a known penalty exception applies. The due date for filing Form 1099-R with IRS is February 28. 
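The minimum distribution amount that custodians must report, or at least offer to calculate, follows the division described earlier: the prior year's fair market value over a uniform-table life expectancy factor, with a 50 percent excise tax on any shortfall. The sketch below uses 27.4 as the factor, an illustrative value often cited for the Uniform Lifetime Table at age 70, not a figure from this report.

```python
def required_minimum_distribution(prior_year_fmv, life_expectancy_factor):
    """RMD under the simplified 2002 rules described in the report:
    prior-year fair market value divided by the uniform-table factor."""
    return prior_year_fmv / life_expectancy_factor

def shortfall_penalty(required, distributed):
    """50 percent penalty on the required amount not distributed."""
    return 0.50 * max(required - distributed, 0)

# Illustrative: $100,000 prior-year balance, assumed factor of 27.4.
rmd = required_minimum_distribution(100_000, 27.4)
print(round(rmd, 2))                # 3649.64
print(shortfall_penalty(rmd, 0.0))  # half the missed amount
```

The penalty function shows why the 50 percent rate is consequential: skipping the entire distribution in this example forfeits roughly $1,800 unless IRS excuses the penalty for reasonable error.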
Another key information report is the Form W-2 from employers showing compensation and employer pension plan coverage used in determining eligibility for traditional IRA deductions. Third-party reporting by IRA custodians provides information that taxpayers can use in preparing their tax returns and that IRS can use to identify noncompliant taxpayers. Figure 3 illustrates what taxpayers report for a traditional IRA contribution deduction and what custodians report on Form 5498 about the contribution. Mismatches between these two sources of information can trigger an enforcement response by IRS. Likewise, mismatches between distributions reported by custodians on a 1099-R and taxpayers’ individual tax returns can trigger enforcement by IRS. IRS service activities aim to increase taxpayer understanding of and improve taxpayer compliance with IRA rules for traditional and Roth IRAs. IRS’s Media and Publications office offers publications, forms, and forms instructions to help taxpayers complete their tax returns accurately. IRS updated the 2006 Publication 590 to reflect new legislation, permanently raising the IRA contribution limits and indexing them for inflation. IRS also developed procedures and guidance for the new provision effective for tax year 2006, allowing those ages 70½ and older to exclude from gross income an amount that does not exceed $100,000 if it is distributed for charitable purposes directly from their traditional IRAs. For 2007, IRS updated Publication 590 to reflect new provisions for onetime IRA transfers to fund a qualified health savings account and catch-up contributions for pension plan participants whose employers went bankrupt. Financial industry organizations and advisor representatives we interviewed complimented Publication 590 for translating the myriad of complicated IRA contribution and distributions rules into “plain English” to help taxpayers comply. 
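The matching idea illustrated in figure 3 can be sketched as a simple comparison between what the taxpayer claimed and what custodians reported. This is a conceptual sketch only; the report notes that actual AUR screening applies eligibility rules and tolerance thresholds that are not modeled here.

```python
def deduction_mismatch(claimed_deduction, form_5498_contributions):
    """Sketch of the figure 3 matching idea: flag a return whose claimed
    traditional IRA deduction exceeds the total contributions that
    custodians reported on Forms 5498. Real AUR screening applies
    eligibility rules and thresholds not modeled here."""
    return claimed_deduction > sum(form_5498_contributions)

print(deduction_mismatch(4_000, [2_000, 2_000]))  # False: deduction supported
print(deduction_mismatch(5_000, [2_000, 2_000]))  # True: potential follow-up
```

The same comparison runs in the other direction for distributions: amounts on Forms 1099-R that never appear as income on the return produce a mismatch.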
IRS also provides special publications that include IRA information for taxpayers seeking assistance at IRS walk-in centers or for those consulting the IRS Web site. Further, taxpayers seeking assistance with IRA questions can call the IRS toll-free lines. IRS employees are trained with a probe and response guide to accurately answer questions about IRA rules. IRS research and enforcement data show that some taxpayers misreported in aggregate millions of dollars in traditional IRA contributions and distributions on their tax returns. The NRP examination study of tax year 2001 returns, the most recent research results available, showed that nearly 15 percent of those who made traditional IRA contribution deductions misreported their deductions on their tax returns, and nearly 15 percent of taxpayers who took taxable distributions from traditional IRAs misreported this information. IRS relies primarily on automated enforcement to detect misreported IRA contribution deductions and taxable distributions. IRS first checks tax returns for obvious IRA contribution errors and then matches tax returns to custodian-reported information to ensure, among other things, that taxpayers reported taxable distributions. For example, the AUR program, in tax year 2004, assessed taxes and penalties totaling about $61 million from almost 38,000 taxpayers who misreported early distributions from traditional IRAs. The NRP study of tax year 2001 returns, the most recent research results available, reported numbers of taxpayers who either misreported deductions for their traditional IRA contributions or misreported taxes when taking withdrawals. NRP results for 2001 yield a measure of noncompliance across taxpayers.
Of taxpayers who made deductible traditional IRA contributions, an estimated 14.8 percent (554,657 taxpayers) did not accurately report the IRA deduction on their individual tax returns—10.4 percent overstated their deductible contributions (that is, exceeded the applicable limit) and 4.4 percent underreported their deductible contributions (that is, reported less on their returns than they actually could deduct). The net understated income due to these misreported traditional IRA contribution deductions was $392 million, reflecting both taxpayers who overstated and taxpayers who understated their contribution deductions. For example, a taxpayer is considered to have overstated a deduction if the deduction reported exceeds the taxpayer’s actual contribution or if the deduction is higher than the taxpayer’s eligibility allows. Of taxpayers who had taxable traditional IRA distributions, an estimated 14.6 percent (1.5 million taxpayers) misreported their traditional IRA distributions—13.7 percent understated (that is, reported an amount less than what the taxpayer withdrew) and 0.9 percent overstated IRA distributions (that is, reported an amount greater than what the taxpayer withdrew). The underreported net income due to misreported IRA distributions was $6.3 billion, including taxpayers who failed to report early distributions and the associated tax. The 2001 NRP data did not provide a measure of noncompliance for some IRA transactions not reported directly on the Form 1040 tax return. For example, because Roth IRA contributions are not reported, the 2001 NRP study did not capture information on taxpayer errors under Roth IRA contribution rules. While NRP does cover misreporting of distributions taken, the 2001 NRP study did not capture estimates of noncompliance for older taxpayers who failed to take required minimum distributions from their traditional IRAs.
IRS officials told us that for the upcoming NRP for tax year 2007, they are planning to gather additional information about taxpayer as well as custodian misreporting of IRA transactions. Whereas NRP yields a measure of taxpayer misreporting of traditional IRA transactions, IRS enforcement data reflect cases where IRS pursued taxpayers who appeared to not comply in reporting their IRA deductions or traditional IRA distributions on their tax returns. Through automated checks and document matching, IRS detects and corrects millions of dollars in taxpayer misreporting of IRA transactions. The Math Error program checks for obvious math errors as returns are processed, and the AUR program matches returns with custodian-reported information. Larger early withdrawal matching cases, including failed rollovers, are subject to correspondence examination. Figure 4 provides an overview of IRS’s automated enforcement activities for IRA transactions reported on tax returns. As tax returns are processed, the Math Error program reviews traditional IRA deductions claimed by taxpayers for amounts higher than allowable limits. For example, the Math Error program tests whether a taxpayer claimed a deduction greater than the maximum annual contribution limit. The Math Error program adjusts every taxpayer return for which an error is found to reflect any change in the deduction and sends a math error notice to the taxpayer. From tax years 2003 through 2006, IRS issued thousands of math error notices annually to taxpayers misreporting deductions on their traditional IRAs. IRS continues to use the Math Error program because IRA math errors must be resolved to process tax returns and adjust the tax liability so that taxpayers are in compliance. In May 2007, IRS officials told us that the Math Error program would discontinue age-based tests for traditional IRA contribution deductions because IRS does not have authority to use the Math Error program for IRA age rules. 
Starting in 2003, IRS used Social Security Administration age data during initial return processing to test taxpayers claiming traditional IRA contribution deductions. This up-front check allowed IRS to implement the higher contribution limit for taxpayers age 50 and over and also to check the lower limit allowed for taxpayers below age 50. For example, for tax year 2004, a taxpayer below age 50 could contribute up to $3,000 and a taxpayer age 50 or over could contribute up to $3,500. For 2007, the Math Error program will test that no taxpayer exceeds the highest limit allowed. Whereas the Math Error program checks for conspicuous errors in the taxpayer’s return, the AUR program compares information reported on the individual tax return with third-party information reported by financial institutions for individual taxpayers. The AUR program creates an inventory of potential cases by matching taxpayer return data with the information return file to verify that all income and deductions are reported accurately. An underreporter case results when computer analysis detects a discrepancy between the tax return and the information returns. Because of resource constraints, IRS officials said that they do not contact taxpayers in all cases where the AUR program finds a mismatch between what was reported on an information return and what was reported on a tax return. If a mismatch exceeds a certain tax threshold, IRS decides whether it warrants a notice asking the taxpayer either to explain the discrepancy (for example, when a taxpayer inadvertently fills in the wrong line on the tax return) or to pay any taxes assessed. AUR reviewers are directed to consider the reasonableness of the taxpayers’ responses to notices but generally do not examine the accuracy of the information in the responses because they do not have examination authority.
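The age-based deduction-limit test that the Math Error program applied amounts to a one-line comparison. The sketch below uses the 2004 figures given in the report; the function shape is an assumption for illustration, not IRS's actual implementation.

```python
def within_math_error_limit(claimed_deduction, age,
                            under_50_limit, age_50_plus_limit):
    """Sketch of the Math Error program's age-based deduction test.
    Tax year 2004 figures from the report: $3,000 under age 50,
    $3,500 at age 50 and over."""
    limit = age_50_plus_limit if age >= 50 else under_50_limit
    return claimed_deduction <= limit

print(within_math_error_limit(3_500, 52, 3_000, 3_500))  # True
print(within_math_error_limit(3_500, 45, 3_000, 3_500))  # False: over the under-50 limit
```

Dropping the age-based check, as IRS planned for 2007, collapses this to a single comparison against the highest limit allowed.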
For tax year 2001, AUR contacts represented about 2 percent of the noncompliant taxpayers estimated by NRP to have either taken an ineligible deduction or overdeducted contributions for traditional IRAs, with about 9,000 taxpayers assessed by the AUR program compared with nearly 555,000 taxpayers estimated by NRP. For tax year 2004, the last full year for which data are available, of 25,000 mismatches for taxpayers potentially ineligible for the contribution deduction, IRS assessed additional taxes of $7 million for nearly 9,000 taxpayers. Of about 85,000 mismatches for taxpayers who potentially overdeducted their traditional IRA contributions, the AUR program assessed additional taxes of $16.2 million for about 15,000 taxpayers for tax year 2004. Table 5 shows the numbers of taxpayers and total additional taxes assessed for misreported traditional IRA deductions for tax years 2001 to 2004. The AUR program does not necessarily work these IRA cases on a stand-alone basis and may pursue potential IRA deduction misreporting along with other discrepancies for taxpayers in the AUR inventory. We could not isolate the AUR data on additional taxes assessed on taxpayers who misreported distributions from traditional IRAs because those data are combined with misreporting of taxable distributions from other retirement plans. The AUR program does, however, capture separate data for taxpayers who misreported early distributions from their traditional IRAs. In tax year 2004, of about 420,000 mismatches, the AUR program assessed taxes against about 38,000 taxpayers. As shown in table 5, the AUR program assessed total taxes and penalties of $61.0 million on taxpayers who misreported early distributions from traditional IRAs in that year. In addition to those cases with assessments, the AUR program follows up with taxpayers on some cases where the Form 1099-Rs filed by custodians reported that no known penalty exception applies for an early distribution.
According to a financial industry organization representative we interviewed, custodians play a limited role in reporting whether an exception applies because a custodian may not know why a taxpayer took a distribution and is not in a position to validate exceptions reported by the taxpayer. IRS is considering a new compliance initiative that could alert more taxpayers about their misreported IRA transactions. According to IRS compliance officials, when fully implemented a new AUR “soft notice” program would send letters to many taxpayers in the AUR inventory asking them to voluntarily fix their noncompliance by filing amended returns, or to not repeat the action the following year. A soft notice requires a taxpayer to take minimal actions and is intended to educate and stimulate compliance without IRS having to invest substantial resources. With phased rollout proposed to begin in fiscal year 2009, many taxpayers detected by the AUR program as misreporting traditional IRA deductions or distributions could ultimately receive soft notices under this proposal. For taxpayers who withdraw large amounts from their traditional IRAs before retirement age, a division under Correspondence Examinations handles larger AUR matching cases involving the additional 10 percent penalty and taxes due on early distributions. Through Correspondence Examinations, IRS can determine if taxpayers qualified for a penalty exception using some automated filters. For example, IRS employees can filter out early distribution exceptions for disability using information reported on Form 1099-R or the issuance of Form 1099-SSA. Correspondence Examinations may also ask the taxpayer for further documentation of an exception claimed. In fiscal year 2004, 20,771 taxpayers agreed with the proposed assessments for an average tax change of $1,313. One aspect of IRA noncompliance detected through the early distribution rule check by Correspondence Examinations is failed rollovers. 
Taxpayers have the option to withdraw funds from one traditional IRA and roll them over to another traditional IRA. If the taxpayer fails to complete a rollover at all and is under age 59½, Correspondence Examinations will treat the withdrawn funds as an early distribution and the taxpayer is subject to the 10 percent penalty. An examiner may detect a late rollover in a case where a taxpayer provides additional documentation showing the dates of the distribution and the subsequent deposit. Beyond systematic checks through its automated programs, IRS can also address IRA noncompliance through its field examination program. According to IRS examination officials, issues related to IRA transactions may surface when an examiner is working an examination case. For example, an examiner could uncover underreported income from an IRA distribution during a probe of a taxpayer’s reported income or determine that a taxpayer failed to complete a rollover within the 60-day limit. According to examination officials, an examiner may also revisit traditional IRA eligibility if an examination results in other adjustments to the taxpayer’s income. In using traditional and Roth IRAs to save for their own retirement, taxpayers face challenges in figuring how much they can contribute, navigating the various distribution rules, and moving their IRAs between custodians. IRS officials, IRA custodians, and financial planners we interviewed cited the complexity of IRA rules as the overarching contributor to the challenges taxpayers and IRS face in ensuring compliance. Table 6 highlights some challenges that taxpayers face with specific IRA rules. Representatives of financial firms and advisors we interviewed identified some options for IRS to clarify IRA guidance or offer additional IRA service activities. Some options to reduce IRA rule complexity would require changing the laws governing IRA contributions and distributions.
Even as IRS works to inform taxpayers about IRA contribution rules, some taxpayers remain confused about whether and how much they can contribute, according to financial industry organization and advisor representatives we interviewed. Some contribution rule challenges span both traditional and Roth IRAs, while others relate to whether a traditional IRA contribution is deductible and how to keep records for nondeductible contributions. Some taxpayers may not understand the annual limits on IRA contributions in part because the limits have changed over recent years and vary by taxpayer age. For example, the 2007 limit for both kinds of IRAs was $4,000 for taxpayers under age 50 and $5,000 for taxpayers age 50 and older, and the 2008 limits are $5,000 and $6,000, respectively. As IRS’s math error data show for tax years 2003 to 2006, some taxpayers try to deduct more than their legal limit for traditional IRA contributions. According to the Treasury Inspector General for Tax Administration (TIGTA), taxpayers over age 70½ continue to improperly claim traditional IRA deductions. Financial industry organization and advisor representatives we interviewed agreed that the annual contribution limit rule, with the amount indexed for inflation beginning in 2009, could confuse some taxpayers, but they did not see complying with the contribution limit as a major challenge. According to interviewees, IRS publishes the limits well ahead of when they become effective, and IRA custodians and financial advisors reach out to advise taxpayers on any changes. While custodians typically would not accept a contribution exceeding the annual limit, a custodian would not know if a taxpayer contributed to other IRAs for the same tax year. According to some interviewees, taxpayers may be confused by the combined limits and some may not understand that the total limit applies across traditional and Roth IRAs and is not a limit for each type.
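The combined-limit rule works by subtraction: contributions to non-Roth IRAs reduce the room left for Roth contributions. A minimal sketch, with the function name and shape assumed for illustration:

```python
def roth_limit_remaining(annual_limit, non_roth_contributions):
    """Combined contribution limit as described in Publication 590: the
    Roth limit is the annual limit reduced by all contributions for the
    year to IRAs other than Roth IRAs (per the publication, SEP and
    SIMPLE employer contributions do not count against it)."""
    return max(annual_limit - sum(non_roth_contributions), 0)

# 2008 under-50 limit from the report: $5,000.
print(roth_limit_remaining(5_000, [3_000]))  # 2000 left for Roth contributions
print(roth_limit_remaining(5_000, [5_000]))  # 0: the limit is already used up
```

The point of confusion the interviewees describe corresponds to treating the two calls above as independent $5,000 limits rather than one shared limit.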
Currently, the 2007 IRS Publication 590 discusses the total contribution limit in the traditional IRA chapter on page 11: “If you have more than one IRA, the limit applies to the total contributions made on your behalf to all your traditional IRAs for the year.” Page 11 also has a caution: “Contributions on your behalf to a traditional IRA reduce your limit for contributions to a Roth IRA. See chapter 2 for information about Roth IRAs.” On page 60 in the Roth IRA chapter, traditional IRA and Roth IRA contribution limits are discussed, as follows: “If contributions are made to both Roth IRAs and traditional IRAs established for your benefit, your contribution limit for Roth IRAs generally is the same as your limit would be if contributions were made only to Roth IRAs, but then reduced by all contributions for the year to all IRAs other than Roth IRAs. Employer contributions under a SEP or SIMPLE IRA plan do not affect this limit.” Even though the publication explains the rule separately in each chapter, we believe the various statements could be confusing to some taxpayers, particularly those who may not read both chapters. According to IRS officials we interviewed, other options to help clarify the guidance about the rule could include repeating the total contribution limit in the general reminder section up front in Publication 590 as well as on the IRS Web site and in other IRS outreach materials. For example, IRS included a reminder about the total contribution limit in its summer 2008 employer plans newsletter for tax practitioners, and IRS could include a similar reminder in the annual press release announcing the new contribution limits for the upcoming year. Income eligibility rules are a challenge for both traditional and Roth IRAs. Taxpayers over certain income limits cannot contribute to a Roth IRA and cannot deduct traditional IRA contributions. 
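The rule quoted above, that one limit applies across all of a taxpayer's IRAs, can be sketched as follows. This is a simplified illustration that assumes the 2008 limit of $5,000 for a taxpayer under age 50 and ignores the income-based eligibility rules:

```python
COMBINED_LIMIT = 5000  # assumed: 2008 limit, taxpayer under age 50

def roth_limit_remaining(non_roth_contributions):
    """Per Publication 590, the Roth limit is the combined limit
    reduced by the year's contributions to all non-Roth IRAs."""
    return max(0, COMBINED_LIMIT - non_roth_contributions)

def is_excess(traditional, roth):
    """True if total contributions exceed the combined annual limit."""
    return traditional + roth > COMBINED_LIMIT

# A taxpayer who put $3,000 into a traditional IRA may add at most
# $2,000 to a Roth IRA for the same year, not another $5,000.
print(roth_limit_remaining(3000))  # 2000
print(is_excess(3000, 5000))       # True
```

The second function captures the error the text describes: treating the limit as applying per IRA type rather than in total.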
Interviewees also said that one reason taxpayers may be ineligibly contributing is that their year-end modified AGI exceeds the eligibility limit after they have already made the contribution. In addition, taxpayers must determine their partial deduction amounts if their modified AGI falls within certain phaseout ranges near the income limit for eligibility. The phaseouts thus introduce opportunities for some taxpayers to err by overdeducting their traditional IRA contributions or overcontributing to a Roth IRA. IRA contribution errors can contribute to the gross tax gap. For example, for tax year 2004, IRS’s AUR program assessed $23.2 million in additional tax on nearly 24,000 taxpayers who overdeducted or ineligibly deducted traditional IRA contributions. Interviewees generally agreed that the pension participation rule is a major challenge for taxpayers trying to determine their eligibility for a traditional IRA contribution deduction. Taxpayers who are single, heads of households, or qualifying widows/widowers and married couples not covered by any employer retirement plan are eligible for the full deduction regardless of income. Interviewees said that taxpayers might be ineligibly contributing because they are unaware that their employers made contributions to their employer-sponsored retirement plans until they receive their W-2s in January, after having already made their IRA contributions. According to some interviewees, some taxpayers may not understand the definition of “active participant.” One representative suggested that the definition for active participant in an employer-sponsored plan could be clarified to reduce confusion among employers and taxpayers and to ensure that employers mark the W-2 correctly. However, an IRS official knowledgeable about employer plans said that the W-2 guidance details how employers are to handle the W-2 checkbox. In turn, individuals need to know their participation status but not the full rule.
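The partial-deduction computation within a phaseout range mentioned above is essentially a linear proration. The sketch below uses a hypothetical $50,000 to $60,000 phaseout range and a $4,000 full limit; actual ranges vary by year and filing status, and the IRS worksheet applies rounding rules not reproduced here:

```python
def prorated_deduction_limit(magi, phaseout_start, phaseout_end, full_limit):
    """Phase the deduction limit linearly down to zero across the range
    (simplified; the Publication 590 worksheet also rounds the result)."""
    if magi <= phaseout_start:
        return full_limit
    if magi >= phaseout_end:
        return 0
    fraction = (phaseout_end - magi) / (phaseout_end - phaseout_start)
    return full_limit * fraction

# Halfway through a hypothetical $50,000-$60,000 range, a $4,000
# full limit is reduced to $2,000.
print(prorated_deduction_limit(55000, 50000, 60000, 4000))  # 2000.0
```

A taxpayer who deducts the full limit while sitting inside the range commits exactly the overdeduction error described above.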
According to the official, individuals receive a breadth of benefit information when hired, and they simply may not remember that they are enrolled in their new employer’s retirement plan until they receive their W-2s. When a taxpayer who is ineligible because of filing status or the modified AGI limits contributes to a Roth IRA, one way to correct the ineligible contribution and avoid the 6 percent penalty is to recharacterize. However, the recharacterization process, which involves treating a contribution made to an IRA as having been made to a different IRA type and transferring the amount between IRAs, can be confusing for taxpayers, according to representatives we interviewed. To avoid possible errors and the burden of recharacterization, taxpayers could, for example, wait until they receive their 2008 W-2s to check their 2008 income eligibility and retirement plan coverage and then make their eligible 2008 IRA contributions by April 15, 2009. However, some taxpayers may not want to wait until the end of the year to make contributions, since waiting forgoes the investment earnings that would accrue on that year’s contribution. Some interviewees suggested basing eligibility on the previous year’s modified AGI, which taxpayers would already know and could then use to better plan their contributions over the upcoming year and avoid contribution errors. Nondeductible contributions to traditional IRAs pose their own challenges for taxpayers because of recordkeeping needs. Interviewees said that taxpayers may find it difficult or forget to track the basis of nondeductible contributions over time in part because these contributions do not appear on the Form 1040. They added that taxpayers who did not complete the supplemental Form 8606 to track their nondeductible contributions may find it challenging to determine the taxable amount of their distribution.
Some may pay tax on the full distribution amount rather than on only the taxable portion above their basis, while others face the burden of trying to locate the information needed to determine the taxable amount and filing an amended return. Representatives suggested that taxpayers may need more IRS help to understand how to report and track nondeductible contributions to traditional IRAs. Suggested options include clarifying the tax return and Form 8606 and related guidance on tracking the basis for nondeductible contributions, conducting research to determine where taxpayer errors are occurring and developing corrective actions, and implementing a minimum threshold for requiring basis calculation to reduce taxpayer burden in making complicated calculations for small distribution amounts. On the distribution end, some taxpayers may be confused about which IRA distributions are taxable or subject to penalty, and older taxpayers in particular may not understand when they must begin required minimum distributions from traditional IRAs. Financial industry organization and advisor representatives we interviewed agreed that IRA distribution rules pose challenges for taxpayers trying to navigate on their own without the help of a tax advisor. As more people contribute and roll over pension amounts to IRAs and the population ages, more taxpayers will have to figure out how to navigate IRA distribution rules. Whereas taxpayers can undo various contribution errors, distribution errors cannot be undone and can trigger taxes and penalties. IRS’s NRP estimates show that about 15 percent of those with traditional IRA distributions misreport their distributions. According to one representative we interviewed, some taxpayers initially forget to report traditional IRA distributions to IRS in part because retirement income is not taxable in some states.
Other taxpayers make mistakes in determining the taxable portion of their distributions because of their original failure to track the basis for nondeductible contributions. Interviewees viewed Roth IRAs as less challenging for taxpayers because these distributions are generally tax-free in retirement. Even though many owners plan to hold their IRAs until retirement age, those who take withdrawals before age 59½ face an additional 10 percent penalty unless they qualify for an exception. This can be a costly mistake, and IRS’s AUR program assessed 38,000 taxpayers a total of $61.1 million in taxes and penalties on early withdrawals in 2004. As more exceptions have been added, giving individuals more latitude to tap their IRAs before retirement, taxpayers may be challenged to understand what the penalty exceptions are and that IRAs have no general hardship exception. According to IRS officials, rule exceptions created through late or retroactive legislation, although meant to benefit taxpayers, can create challenges for IRS to timely prepare guidance and update IRA-related forms and publications. Another interviewee said that taxpayers could be easily confused about how allowed exceptions to the early distribution rules differ between IRAs and employer-sponsored retirement plans. Interviewees disagreed on whether IRA early distribution rules should be conformed across different types of retirement plans. Some representatives said that differences among plans made sense, while other interviewees said that the complexity introduced by different rule exceptions for different types of retirement plans confuses taxpayers and conforming the early distribution rules would be beneficial. One interviewee said that some taxpayers make mistakes when taking early distributions from Roth IRAs because qualified distributions within the 5-year holding period for a Roth IRA, while not subject to the 10 percent penalty, are still subject to taxation. 
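To see why early withdrawals are costly, note that the 10 percent penalty is added on top of the ordinary income tax due on a traditional IRA distribution. A hypothetical illustration: the 25 percent marginal rate below is an assumption, and the sketch ignores any nondeductible basis:

```python
def early_withdrawal_cost(amount, marginal_rate, has_exception=False):
    """Ordinary tax plus the 10 percent early distribution penalty
    on a traditional IRA withdrawal before age 59 1/2 (no basis assumed)."""
    tax = amount * marginal_rate
    penalty = 0 if has_exception else amount * 0.10
    return tax + penalty

# A $10,000 early withdrawal at an assumed 25 percent marginal rate
# costs $2,500 in tax plus a $1,000 penalty.
print(early_withdrawal_cost(10000, 0.25))  # 3500.0
```

A qualifying exception removes only the penalty term; the ordinary tax on a traditional IRA distribution remains.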
Moreover, a taxpayer taking an early distribution from a Roth IRA has to calculate the taxable portion of the distribution, which requires recordkeeping of all contributions and earnings for the account. One interviewee said that taxpayers were sometimes confused about how custodians report reasons for early distributions to IRS on the Form 1099-R. One representative suggested that Publication 590 could better explain to taxpayers that IRA custodians are only obligated to report to IRS whether an early withdrawal was taken, not whether the early withdrawal qualified as an exception to the 10 percent penalty rule. In addition, some IRA custodians suggested that to reduce taxpayer and custodian confusion about temporary or newly enacted qualified exceptions, IRS could designate a special code, such as for hurricane relief or charitable giving, that custodians could use to complete the Form 1099-R until further guidance could be developed. Financial industry organization and advisor representatives generally agreed that the required minimum distribution rule for traditional IRAs was particularly challenging for older taxpayers in terms of both determining the timing of the first distribution and calculating the correct amount. For example, the carryover date for the first distribution, April 1, does not coincide with the filing deadline of April 15, and some taxpayers may not realize that subsequent distributions must be done by December 31. To help taxpayers comply with the rule, in 2002 IRS issued uniform tables to simplify the calculation and, effective for 2003, required that custodians notify taxpayers, through a check box on the Form 5498, that they are required to take a distribution in the following year. Even with the added service, the complexity of the required minimum distribution rule is challenging for taxpayers to navigate. Representatives said that the age of 70½ was a confusing concept for taxpayers.
Although IRS’s tables for calculating each year’s minimum distribution were updated in 2002 to reflect current life expectancy, the required beginning age of 70½ has been in place since IRAs were created in 1974. Although some suggested changing the required minimum distribution rule, one representative noted that changing the age, which would likely benefit taxpayers through simplification, could create some burden for financial institutions, which would have to adjust their information technology systems. Interviewees also suggested that IRS could help taxpayers comply with required minimum distribution rules by developing an online tool on IRS’s Web site to help taxpayers calculate the correct minimum distribution amounts or by directly notifying taxpayers approaching age 70½ that they will be subject to the required minimum distribution rule. Some representatives we interviewed expressed concern that the 50 percent penalty on minimum distributions not taken was harsh and that taxpayers may not understand how to request that IRS waive the penalty for reasonable errors. Through tax year 2004, IRS directed taxpayers to pay the penalty first and then request a waiver to get the penalty refunded. In tax year 2005, IRS dropped the requirement for a taxpayer to pay first to request a waiver. For tax year 2007, IRS revised the instructions for Form 5329—the form for reporting additional taxes due on IRAs and other tax-favored accounts—to clarify that a taxpayer is to provide documentation explaining the issue and does not have to pay the penalty in advance to request a waiver. Whereas IRA custodians provide educational assistance and notice to the original taxpayer that the required minimum distribution rule applies for the coming year, beneficiaries inheriting IRAs are likely to be less prepared to deal with required distributions.
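The arithmetic such an online tool would perform is a simple division: the prior year-end account balance divided by the life-expectancy factor from IRS's uniform tables. A minimal sketch; the divisors shown are illustrative entries from the Uniform Lifetime Table, and the full table appears in Publication 590:

```python
# Illustrative life-expectancy divisors from IRS's Uniform Lifetime
# Table; the complete table appears in Publication 590.
UNIFORM_TABLE = {70: 27.4, 71: 26.5, 72: 25.6}

def required_minimum_distribution(prior_year_end_balance, age):
    """RMD = prior year-end balance / life-expectancy divisor."""
    return prior_year_end_balance / UNIFORM_TABLE[age]

# A 70-year-old with a $100,000 balance must withdraw about $3,650.
print(round(required_minimum_distribution(100000, 70), 2))  # 3649.64
```

Because the divisor shrinks each year, the required fraction of the account grows as the owner ages, which is one reason annual recalculation trips up taxpayers.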
According to interviewees, taxpayers do not always keep beneficiary information up-to-date, and both taxpayers and their advisors are learning the different distribution rules for IRAs inherited by spouses versus other beneficiaries. Interviewees said that some taxpayers may face challenges when they periodically roll over their IRAs or transfer money from a pension plan or an IRA into another IRA. Transfers—where IRA funds are shifted directly from one IRA custodian to another—are not always available, and taxpayer mistakes in completing rollovers—where the taxpayer receives a distribution from the first IRA and must contribute the amount to a new IRA—can trigger taxes and penalties. Failed rollovers are subject to the 10 percent early withdrawal penalty if the taxpayer is under age 59½, and the taxpayer’s retirement savings would no longer be eligible for tax-preferred treatment. As increasing amounts of pension assets are rolled over into IRAs, more taxpayers may experience challenges when moving money between accounts. For transfers between financial institutions, some taxpayers may face a challenge in that not all financial institutions offer this option or are able to systematically handle trustee-to-trustee transfers. The automated system that mutual fund companies use to transfer assets differs from the system brokers use to transfer securities, according to these representatives. They added that system enhancements are being explored, but currently, transfer options may be limited for some taxpayers who want to directly transfer IRA assets. Interviewees also pointed out that some taxpayers prefer not to directly transfer their IRAs in part because some may want to use the funds as a temporary loan during the 60-day window in which they are to complete a rollover. This situation can be problematic, according to some representatives, because taxpayers might cash out their funds or miss the deadline to complete the rollover.
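The 60-day rollover window itself is a simple date check, which is why the consequences of missing it can surprise taxpayers; a minimal sketch:

```python
from datetime import date, timedelta

ROLLOVER_WINDOW = timedelta(days=60)

def rollover_is_timely(distribution_date, deposit_date):
    """True if the funds were redeposited within the 60-day window."""
    return deposit_date - distribution_date <= ROLLOVER_WINDOW

print(rollover_is_timely(date(2008, 1, 10), date(2008, 3, 5)))   # True: 55 days
print(rollover_is_timely(date(2008, 1, 10), date(2008, 3, 15)))  # False: 65 days
```

As the surrounding discussion notes, a missed deadline generally cannot be undone without an IRS waiver.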
Other interviewees said that the 60-day rollover window benefits taxpayers and may increase savings because it makes assets seem more accessible, which may alleviate some apprehension about setting assets aside in the IRA. Taxpayers who miss the 60-day window may qualify for an automatic waiver for custodian mistakes in depositing funds or providing erroneous advice. Others can pay to request a private letter ruling from IRS to waive the deadline, but IRS has typically not granted waivers for taxpayers who knew or should have known of the 60-day deadline but had no intention of rolling over the funds by the deadline. To address the various challenges facing taxpayers in complying with IRA rules, options suggested by interviewees and others ranged from improvements in IRS service to help taxpayers better understand the current rules to broader regulatory and legislative changes to simplify the rules governing IRAs. Enhancing IRS’s taxpayer service efforts, such as clarifying and revising IRS Publication 590, could help strengthen compliance with IRA rules by helping taxpayers better understand the rules and avoid unintentional errors. However, these efforts are not easily assessed in terms of their effect on improving compliance. IRS officials said that even though they are able to receive some feedback from customers about how to improve IRA-related forms and publications, they added that it was hard to gather data on their effectiveness, especially when changes are constantly being made. Financial representatives and others we interviewed suggested additional opportunities to clarify guidance and offer new tools to help taxpayers with challenging IRA rules. As recommended by TIGTA, IRS plans to clarify the IRA deduction worksheet to instruct taxpayers over age 70½ that they cannot claim the deduction. 
As discussed above, IRS could do more to help taxpayers better understand that the total contribution limit applies across traditional and Roth IRAs and is not a limit for each IRA type. IRS also could explore developing new tools to aid taxpayers in complying with complex IRA rules. For the required minimum distribution rule, interviewees suggested that IRS offer an online calculator. IRS officials cautioned that offering an online calculator is not cost free, estimating that developing the tool could cost about $250,000. Another option could be to expand custodian reporting—beyond simply requiring custodians to notify taxpayers that they are subject to the distribution rule—to requiring that custodians calculate and report the minimum distribution amount per account. Options requiring additional reporting by IRA custodians could improve information available to help taxpayers comply with IRA rules and to help IRS detect noncompliance, but such options pose trade-offs in terms of the added reporting burden for those parties and costs for IRS to use the information. Beyond improved IRS service to help educate taxpayers about current IRA rules, financial representatives and advisors we interviewed frequently mentioned simplifying IRA rules through legislative changes as an option to strengthen taxpayer compliance. While not an easy task, simplification could help prevent unintentional taxpayer errors and allow fewer opportunities to hide intentional noncompliance. Modifying the IRA rules, while intended to benefit taxpayers, could also create unintended confusion. 
Interviewees said that the Pension Protection Act of 2006 provision allowing charitable contributions of up to $100,000 to be made directly from IRAs to charities as tax-free distributions raised many questions from IRA owners and custodians, as well as charities, about how to implement the transaction, since the provision had become effective before IRS could develop guidance and was due to expire in 2007. Another option—basing IRA eligibility on the previous year’s modified AGI—may place a burden on taxpayers to track their modified AGI from the previous year, pose a challenge to IRS to match taxpayer information across years through its automated systems, and introduce additional confusion by deviating from other retirement plan rules, which use current year data. While some interviewees favored this option, one interviewee expressed concern that this change could introduce situations where some taxpayers may be unable to contribute to any retirement plan for that year. Broad IRA legislative options, such as eliminating limits on income eligibility for traditional IRA contribution deductions or Roth IRA contributions or eliminating the required minimum distribution rule, could greatly simplify the rules for some taxpayers, such as older taxpayers who do not need to draw down their IRAs to pay for retirement needs. Such options, however, would certainly reduce federal revenue, with no certainty that more people would take advantage of IRAs to save for retirement. A full evaluation of options to simplify the taxation of retirement savings is beyond the scope of this report, which focuses on the challenges taxpayers face with the current key rules for traditional and Roth IRAs. The National Taxpayer Advocate has suggested simplifying the rules across IRAs and employer pension plans to reduce complexity and encourage participation. In 2004, the National Taxpayer Advocate’s Annual Report to Congress recommended that retirement savings provisions be simplified.
The report cited that the various types of retirement savings vehicles, while intended to help taxpayers save for retirement, also created complexity and redundancy in the tax law because of their different rules regulating eligibility, contribution limits, withdrawals, and other transactions. Another approach is to reexamine traditional and Roth IRA rules in the context of broader tax reform when making fundamental decisions about how to tax investment and saving. The volume and complexity of IRA rules create a maze where taxpayers may intentionally or unintentionally wander out of compliance with the rules, triggering taxes and penalties. As more taxpayers take advantage of IRAs to contribute funds for retirement or preserve pension rollovers, and with an aging population beginning to tap their retirement savings, taxpayers will encounter growing challenges in complying with the myriad of IRA rules. Sustained attention to taxpayer service and education will be key to helping taxpayers comply with IRA rules and avoid unnecessary penalties on distributions before retirement age or late distributions for older taxpayers owning traditional IRAs. IRS’s service efforts, such as Publication 590, have been a positive step toward strengthening taxpayer compliance. Nevertheless, those using IRAs make basic mistakes in figuring out how much to contribute and how much in taxes they may owe on distributions. To address the challenges taxpayers face with do-it-yourself retirement saving using IRAs, IRS has some opportunities to clarify its IRA guidance and possibly offer new tools to help taxpayers stay on a compliant path. Even with added IRS service for taxpayers, some IRA rules, notably the required minimum distribution rule, may need legislative or regulatory simplification to best help taxpayers navigate their way.
Broader options to simplify IRA contribution and distribution rules have implications for both federal revenue and taxpayer choice for tax-preferred retirement saving, and policymakers could consider IRA rule changes in the context of broader tax reform. To help address the challenges facing taxpayers in complying with IRA rules, we recommend that the Commissioner of Internal Revenue take the following two actions:

Clarify guidance and outreach materials to help taxpayers better understand that the combined IRA contribution limit applies across all traditional and Roth IRAs.

Identify administrative options to improve compliance with the required minimum distribution rule, including additional taxpayer guidance or information reporting, and work in consultation with Treasury on regulatory or legislative strategies to strengthen compliance with the rule.

In comments on a draft of this report (which are reprinted in app. III), IRS said that our report fairly represents the rules and challenges that apply to IRAs and that IRS is committed to providing clear information to taxpayers about IRA rules. IRS agreed to take actions consistent with both of our recommendations. Specifically, IRS will continue to find opportunities to provide taxpayers with a better understanding of the combined limit rule and explore ways to improve compliance with the required minimum distribution rule, including working with Treasury or seeking legislative options. Treasury provided technical comments on a draft of this report, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its issue date. At that time, we will send copies to the Chairman and Ranking Member, House Committee on Ways and Means; the Secretary of the Treasury; the Commissioner of Internal Revenue; and other interested parties. We will also make copies available to others upon request.
In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9110 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. The objectives of this report are to (1) provide an overview of key individual retirement account (IRA) contribution, distribution, and other rules and describe how the Internal Revenue Service (IRS) educates taxpayers on these rules; (2) describe what IRS knows about the extent of noncompliance for IRA transactions reported on tax returns; and (3) describe the challenges taxpayers face with key IRA rules and some options to strengthen taxpayer compliance. This report focuses on traditional and Roth IRAs set up by individuals to save on their own. Specifically, we focused on key rules governing contributions and distributions for tax-deferred traditional IRAs and after-tax Roth IRAs. Accordingly, we did not review every IRA rule in its entirety. Throughout this engagement, we relied upon IRS Publication 590 as our primary source for understanding traditional IRA and Roth IRA rules. To provide an overview of key IRA rules, we relied primarily on IRS Publication 590, which explains the rules that taxpayers are to follow in contributing to, and receiving distributions from, an IRA. Specific rules depend on account type. We also spoke with IRS and Department of the Treasury (Treasury) officials and reviewed reports from the Congressional Research Service, the Congressional Budget Office, the Investment Company Institute, and others. We also reviewed laws and regulations related to IRAs.
For background about IRAs, we spoke with officials in Statistics of Income (SOI), part of the Research, Analysis and Statistics unit; reviewed SOI bulletin articles; and compiled statistics from SOI data. To describe how IRS educates taxpayers in complying with IRA rules, we reviewed relevant documents and interviewed relevant agency officials who were identified for us by IRS, including officials from the following divisions: Wage and Investment (W&I); Tax Exempt and Government Entities (TE/GE); and Research, Analysis and Statistics. In addition, we spoke with the Taxpayer Advocate Service, an independent office within IRS. Within W&I, we spoke with officials from Media and Publications, Stakeholder Partnerships Education and Communication, Accounts Management, and Field Assistance. Within TE/GE, we spoke with officials from Employee Plans. We also reviewed other guidance, such as the IRS Web site and various IRS publications. To describe what IRS knows about the extent and types of taxpayer misreporting of IRA transactions on tax returns, we reviewed publications and documents, interviewed IRS officials, and used the 2001 National Research Program (NRP)—IRS’s most recent study of individual taxpayer compliance—to estimate the extent of taxpayer misreporting with traditional IRA deductible contribution rules and traditional IRA taxable distribution rules. Specifically, taxpayers report these transactions on the Form 1040 individual income tax return. We provide the margin of error based on 95 percent confidence for estimates from the NRP samples of individual tax returns. We also analyzed data on taxpayer misreporting supplied by the Math Error program and the Automated Underreporter (AUR) program. The Math Error program electronically checks for obvious math errors as tax returns are processed, and the AUR program matches taxpayer returns with IRA custodian-reported information.
We followed our guidance for assessing the reliability of computerized databases in using these data sources, and determined that the data are reliable for the purposes of this engagement. To describe the challenges taxpayers face with key IRA rules and some options for strengthening taxpayer compliance with these rules, we reviewed prior GAO reports, IRS documents from its enforcement and service programs that address IRA rules, and documents from other organizations, such as IRS Publication 590 and reports issued by financial industry organizations and research agencies. We also spoke with IRS officials and financial industry organization and advisor representatives knowledgeable about IRAs who provide tax planning advice or serve as IRA custodians to get their perspectives on challenges related to IRA compliance and how those challenges could be addressed. Specifically, we interviewed representatives from financial industry organizations, including the Securities Industry and Financial Markets Association; Investment Company Institute, which represents the mutual fund industry; Financial Planning Association; American Bankers Association; American Institute of Certified Public Accountants; and AARP. We conducted two rounds of interviews with financial industry organizations and advisor representatives. In our first round of interviews, we asked open-ended questions to obtain information about the range of challenges and options for strengthening compliance. Because we asked open-ended questions, the frequency of our interviewees’ responses is not comparable. Therefore, we report responses without reporting the total number of officials or representatives associated with each response. 
After analyzing the information gathered to identify some common challenges raised by the interviewees, we used a standard set of questions in a second round of interviews with the representatives to try to verify responses and obtain additional context for the challenges and options mentioned, such as whether the challenges were major or frequent or whether an option might impose significant burden on taxpayers or IRA custodians. We also shared the list of challenges with IRS and Treasury to obtain concurrence about taxpayer challenges. Finally, we limited the discussion of challenges and options identified to only those related to the key IRA rules reviewed in this report. These challenges and options are not exhaustive, nor are the trade-offs associated with each option. Many of the options are concepts, rather than fully developed proposals with details on how they would be implemented. We conducted this performance audit from March 2008 through August 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Form 1040/Form 1040A: To report one or more of the following: deductible contributions to a traditional IRA from Form 5498 (1040 line 32, 1040A line 17); summary of distribution information from Form 8606 or Form 1099-R (1040 line 15, 1040A line 11); additional tax on an IRA (1040 line 60; from Form 5329); retirement savings contribution credit from Form 8880 (1040 line 51, 1040A line 32); or taxable compensation from Form W-2 (1040 and 1040A line 7).

Form 8606: To report one of four actions taken during the year: Nondeductible contribution to a traditional IRA (Part I).
Distribution from a Simplified Employee Pension (SEP) or Savings Incentive Match Plan for Employees (SIMPLE) IRA or from a traditional IRA with basis greater than zero (Part I). Conversions from a traditional, SEP, or SIMPLE IRA to a Roth IRA (Part II). A distribution from a Roth IRA (Part III). Summary information is reported on 1040 line 15 or 1040A line 11. The penalty for not filing an 8606 when required is $50. The penalty for overstating a contribution is $100.

Form 5329: To report additional tax related to an IRA being paid because of an early distribution from an IRA without a qualified exception (Part I); an excess contribution made to a traditional or Roth IRA (Parts III and IV); or an excess accumulation in an IRA for account holders subject to required minimum distributions (Part VIII). Summary information is reported on 1040 line 60.

Form 8880: For lower-income taxpayers to claim a tax credit of up to 50 percent of their contribution to a traditional or Roth IRA (8880 line 1). Summary information is reported on 1040 line 51 or 1040A line 32.

Form W-4P: To request no withholding or additional withholding on distributions from a traditional IRA (other than eligible rollover distributions).

Form W-2: To document the following: earned income (Box 1), since money to be contributed to an IRA must come out of taxable compensation; and whether the taxpayer is covered by an employer-sponsored retirement plan (Box 13), which affects deductibility for a traditional IRA. Box 1 taxable compensation is reported on 1040 and 1040A line 7.

Form 1099-R: To report a distribution from an IRA to an account owner. Information reported includes the gross value of the distribution, its taxable amount, and whether any tax was withheld. The form also includes a distribution code that reports the type of distribution (normal, early with qualified exception, early without qualified exception, withdrawn contributions, etc.). If a distribution is taxable, it is reported on 1040 line 15 or 1040A line 11.
Form 5498: To report the fair market value of and contribution amounts to an IRA by an account owner. Information reported includes the value of the contribution and its type: normal, rollover, Roth conversion, recharacterization, or Roth contribution. The form also includes a check box that indicates whether a required minimum distribution is due that year. Summary information is reported on 1040 line 32 or 1040A line 17.

In addition to the contact named above, MaryLynn Sergent, Assistant Director; Elizabeth Fan; Evan Gilman; Rob Malone; Donna Miller; Karen O’Conor; Cheryl Peterson; Matthew Reilly; Sam Scrutchins; Walter Vance; and Jennifer Li Wong made key contributions to this report.

Active participant: Status of an individual’s participation in an employer-sponsored retirement plan. Typically, the employer will indicate on the individual’s Form W-2 if the individual is an active participant by checking the Retirement Plan box.

Basis: The accumulated nondeductible contributions in a traditional IRA.

Compensation: For IRA purposes, includes wages, salaries, tips, commissions, self-employment income, alimony, separate maintenance, and nontaxable combat pay. IRA contributions must come from taxable compensation received during the year.

Contribution: An amount, subject to limits, deposited into an IRA.

Deductible contribution: A contribution that allows tax deferral on investment earnings until retirement distribution, with an up-front tax deduction for contributions by eligible taxpayers depending on pension coverage and income. Annual contributions are subject to a total limit.

Nondeductible contribution: An after-tax contribution to a traditional IRA that is not eligible for an up-front tax deduction. Nondeductible contributions are not subject to an income limit but are subject to the total limit.

Roth contribution: An after-tax contribution to a Roth IRA by eligible taxpayers. Roth IRA contributions are subject to a limit based on a taxpayer’s income and filing status.

Excess contribution: A contribution made in excess of the eligible amount for the year, which is subject to a 6 percent additional tax.

Conversion: A specific type of rollover in which funds on deposit in a traditional IRA are transferred to a Roth IRA.

Distribution: An amount paid out from an IRA. Distributions from traditional IRAs are taxable, subject to a recovery of basis for a nondeductible IRA; distributions from Roth IRAs are generally tax-free.

Early distribution: An amount issued to an account owner from an IRA during a year in which the account owner has not reached age 59-½; some early distributions qualify for penalty exceptions, such as for the cost of medical insurance for the unemployed or the purchase or rebuilding of a first home.

Eligible rollover distribution: A distribution made to an account owner that is eligible to be contributed via rollover to another IRA or qualified plan.

Required minimum distribution: The minimum amount that must be distributed to an account owner each year beginning in the year when the account owner reaches age 70-½. The required minimum distribution is calculated uniquely for each account based on account balance and life expectancy.

Excess accumulation: A condition that occurs when an account owner is age 70-½ or older and does not receive distributions during the year that equal at least the required minimum distribution. Excess accumulations are subject to a 50 percent penalty.

Individual retirement account (IRA): An account for tax-deferred retirement savings that is controlled by individuals, not employers. The term IRA, also known as individual retirement arrangement, also includes individual retirement annuities.

Roth IRA: An IRA that accepts after-tax contributions, subject to limits, and allows tax-free withdrawals.

SEP IRA: An arrangement for an employer to make deductible contributions to a traditional IRA (a Simplified Employee Pension, or SEP, IRA) set up for the employee to receive contributions. Generally, distributions from SEP IRAs are subject to the withdrawal and tax rules that apply to traditional IRAs.
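To make the required minimum distribution and excess accumulation definitions above concrete, the sketch below divides a prior year-end balance by a life-expectancy factor and applies the 50 percent penalty to any shortfall. This is an illustrative sketch only: the dollar amounts, the factor of 25.0, and the function names are hypothetical, and in practice the life-expectancy factor comes from IRS tables.

```python
def required_minimum_distribution(balance, life_expectancy_factor):
    """RMD = prior year-end account balance divided by the life-expectancy factor."""
    return balance / life_expectancy_factor

def excess_accumulation_penalty(rmd, amount_distributed):
    """50 percent penalty applies to any shortfall below the required minimum."""
    shortfall = max(0.0, rmd - amount_distributed)
    return 0.5 * shortfall

# Hypothetical account: $100,000 balance, factor of 25.0
rmd = required_minimum_distribution(100_000, 25.0)        # 4000.0
penalty = excess_accumulation_penalty(rmd, 1_000)         # 0.5 * 3000 = 1500.0
```

An account owner who distributed at least the required minimum would owe no penalty, since the shortfall is zero.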
SIMPLE IRA: An IRA that is set up to receive contributions for an employee pursuant to a Savings Incentive Match Plan for Employees (SIMPLE), which is available to qualifying small businesses.

Traditional IRA: An IRA that accepts both deductible and nondeductible contributions, subject to limits, and makes taxable distributions.

Modified adjusted gross income (AGI): The income level referenced to establish eligibility for making contributions to a traditional IRA or a Roth IRA. This amount is calculated via a worksheet that is specific to the plan type to which the taxpayer wants to contribute.

Phase-out range: The range of modified AGI over which the eligible contribution amount phases down from the maximum amount to zero.

Recharacterization: A process by which an ineligible contribution is moved, before the tax-filing deadline, to an account that is eligible to receive it.

Rollover: The process by which funds on deposit in a qualified employer plan or another IRA are transferred to an IRA.

Trustee-to-trustee transfer: A rollover that is transferred, tax-free, directly from one trustee to another with no distribution being made to the account owner.

Trustee/custodian: A financial institution that maintains an account owner’s IRA.

Withholding: The amount held back by a trustee when a distribution is made to an account owner; the amount of withholding depends on the type of distribution.

Individual retirement accounts (IRA) allow individuals to save for retirement in a tax-preferred way. Traditional IRA contributions, subject to certain limitations, can be deducted from taxable earnings, and taxes on earnings are deferred until distribution. In contrast, Roth IRA contributions are made after tax, and distributions are tax-free. Faced with a myriad of rules covering IRA contributions and distributions, taxpayers may fail to comply with the rules.
GAO was asked to (1) provide an overview of key rules and describe how the Internal Revenue Service (IRS) educates taxpayers about these rules, (2) describe what IRS knows about the extent of noncompliance with IRA transactions reported on taxpayer returns, and (3) describe challenges taxpayers face with key rules and some options for strengthening compliance. GAO reviewed IRS documents and compliance data. To identify challenges, GAO interviewed officials from the financial industry and advisor representatives. Taxpayers face a myriad of tax rules governing contributions to, distributions from, and rollovers between accounts for traditional and Roth IRAs. Both types of IRAs have rules governing eligibility to contribute, and all IRA contributions are subject to an annual limit. For example, eligibility to deduct (from taxable income) contributions to a traditional IRA and to contribute to a Roth IRA depends on taxpayer income and filing status, while coverage by an employer-sponsored retirement plan only affects eligibility for deductible contributions to a traditional IRA. Tax rules for distributions diverge for traditional and Roth IRAs, but both types are generally subject to a 10 percent early withdrawal penalty, with some exceptions. Further, traditional IRA owners over age 70-½ must take minimum distributions or face a 50 percent penalty on the required distribution amount. Rollovers, where a taxpayer moves money from one account into an IRA account, must be completed within 60 days, or the amounts are taxable and subject to penalty. To assist taxpayers in voluntarily complying with IRA rules, IRS offers special publications and telephone assistance for taxpayers with IRA questions. Even with IRS's service efforts, IRS data show that some taxpayers fail to comply with rules for reporting contribution deductions and taxable distributions from traditional IRAs.
IRS's National Research Program showed that nearly 15 percent of taxpayers who took traditional IRA contribution deductions, as well as 15 percent of those who took taxable distributions, misreported them on their tax returns in 2001 (the most recent data available). IRS has automated enforcement programs--matching tax returns with information reported by IRA custodians--to detect and correct these types of IRA misreporting. For tax year 2004, IRS assessed additional taxes of $23.2 million for ineligible traditional IRA contribution deductions or exceeding the deduction limits and $61.1 million in taxes and penalties for early withdrawals from traditional IRAs. As partly shown by taxpayer misreporting to IRS, taxpayers face challenges in figuring how much they can contribute, navigating the various distribution rules, and rolling over their IRAs between custodians. For example, according to representatives of financial firms and advisors GAO interviewed, taxpayers may not understand that the annual contribution limit applies across traditional IRAs and Roth IRAs in combination. On the distribution side, interviewees said that older taxpayers make mistakes in determining when they must start distributions and in calculating the correct amount. Interviewees identified some options for IRS to clarify guidance, such as for the combined contribution limit rule, or develop tools to help taxpayers, such as a Web-based calculator for required minimum distributions. IRS could explore actions such as requiring additional reporting by custodians or simplifying the required minimum distribution rule to strengthen compliance with this complicated rule. Other options to reduce the complexity of IRA rules, such as eliminating income limits on eligibility, pose trade-offs and could be considered in the context of broader tax reform.
VHA’s outpatient consult process is governed by a national policy that outlines the use of an electronic system for requesting and managing consults and delineates oversight responsibilities at the national, VISN, and VAMC levels. Outpatient consults include requests by physicians or other providers for both clinical consultations and procedures. A clinical consultation is a request seeking an opinion, advice, or expertise regarding evaluation or management of a patient’s specific clinical concern, whereas a procedure request is for a specialty care procedure, such as a colonoscopy. The consult process—displayed in figure 1—is governed by VHA’s national consult policy, which requires VAMCs to manage consults using a national electronic consult system, and to provide timely and appropriate care to veterans. Outpatient consults typically are requested by a veteran’s primary care provider using VHA’s electronic consult system. To send a consult request, providers log on to the system and complete an electronic consult request template that may be customized by the VAMC’s applicable specialty care clinic. The template requires the requesting provider to provide specific information, such as a diagnosis and a reason why the specialty care is needed, and may require additional information as determined by the specialty care clinic. For example, a gastroenterology template for abdominal pain used at one VAMC asked the requesting provider whether the treatment should be provided in person, reminded the provider about specific lab tests to be completed, and asked the provider to provide a brief history of the patient’s symptoms. (See fig. 2.) This specialty care clinic had specific templates depending on the patient’s symptoms. (See appendix I for examples of other templates used by the gastroenterology clinic at this VAMC.) After completing the template, the requesting provider electronically submits the consult for the specialty care provider to review.
According to VHA’s guideline, the specialty care provider is to review and determine whether to accept a consult within 7 days of the request. Typically, the provider’s review involves determining whether to accept the consult—that the consult is needed and appropriate—and if the consult is accepted, determining its relative urgency—a process known as triaging. When reviewing a consult request, a specialty care provider may decide not to accept it, and will send the consult back to the requesting provider. This is referred to as discontinuing the consult, which a specialty care provider may decide to do for several reasons, including that the care is not needed, the patient refuses care, or the patient is deceased. In other cases the specialty care provider may determine that additional information is needed before accepting the consult; in such cases, the specialty care provider will send the consult back to the requesting provider, who can resubmit it with the needed information. If the provider accepts the consult, an attempt is made to contact the patient and schedule an appointment. Appointments resulting from outpatient consults, like other outpatient medical appointments, are subject to VHA’s scheduling policy. This policy is designed to help VAMCs meet their commitment of scheduling medical appointments with no undue waits or delays for patients. According to VHA officials, the scheduler is to take into account the relative urgency of the consult, that is, the result of the reviewing specialty provider’s triage decision, when attempting to schedule the appointment. If an appointment resulting from a consult is scheduled and held, VHA’s policy requires the specialty care provider to appropriately document the results in the consult system, which would then close out the consult as completed. To do so, the provider updates the consult with the results of the appointment by entering a clinical progress note in the consult system. 
If the provider does not perform this step, or does not perform it appropriately, the consult remains open in the consult system. If an appointment is not held, specialty care clinic staff members are to document why they were unable to complete the consult. According to VHA’s national consult policy, VHA central office officials have overall oversight responsibility for the consult process, including the measurement and monitoring of ongoing performance. The policy also requires VISN leadership to oversee the consult processes for VAMCs in their networks, and requires each VAMC to manage individual consults consistent with VHA’s timeliness guidelines. To evaluate the timeliness of resolving consults across VAMCs, in September 2012, VHA created a national consult database from the information contained in its electronic consult system. After reviewing these data, VHA determined that they were inadequate for monitoring consults, because they had not been entered in the consult system in a consistent, standard manner, among other issues. For example, in addition to requesting consults for clinical concerns, VHA found that VAMCs also were using the consult system to request and manage a variety of administrative tasks, such as arranging patient travel to appointments. Additionally, VHA could not accurately determine whether patients actually received the care they needed, or if they received the care in a timely fashion. VHA found that this was due, in part, to the fact that data in the consult system included consults for both care that was clinically appropriate to be open for more than 90 days—known as future care consults—as well as those for care that was needed within 90 days. At the time of the database’s creation, according to VHA officials, approximately 2 million consults (both clinical and administrative) were unresolved for more than 90 days. 
Subsequently, in October 2012, a task force convened by VA’s Under Secretary for Health began addressing several issues, including those regarding VHA’s consult system. In response to the task force recommendations, in May 2013, VHA launched the consult business rules initiative to standardize aspects of the consult process and develop consistent and reliable information on consults across all VAMCs. For example, the consult business rules initiative required that VAMCs limit their use of the consult system to requesting consults for care expected within 90 days, and distinguish between administrative and clinical consults in the consult system. As part of this initiative, VAMCs were required to complete four tasks between July 1, 2013, and May 1, 2014: Review and properly assign codes to consistently record consult requests in the consult system. Assign distinct identifiers in the electronic consult system to differentiate between clinical and administrative consults. Develop and implement strategies for managing requests for future care consults that are not needed within 90 days. Conduct a clinical review, as warranted, to determine if care has been provided or is still needed for unresolved consults—those open more than 90 days. After the initial implementation of these tasks, VHA required VAMCs to maintain adherence to the consult business rules initiative when processing consults. VHA was updating its national consult policy to incorporate aspects of the consult business rules initiative and expected to have a draft policy by September 2014. Our review of a sample of consults at five VAMCs found that veterans did not always receive outpatient specialty care in a timely manner, if at all. We found consults that were not processed in accordance with VHA timeliness guidelines—for example, consults that were not reviewed within 7 days or not completed within 90 days. 
We also found consults for which veterans did not receive the outpatient specialty care requested—64 of the 150 consults in our sample (43 percent)—and those for which the requested specialty care was provided, but the consults were not properly closed in the consult system. We found that specialty care providers at the five VAMCs we examined were not always able to make their initial consult reviews within VHA’s 7-day guideline. Specifically, we found that for 31 of the 150 consults in our sample (21 percent), specialty care providers did not meet the 7-day guideline, but they were able to meet the guideline for 119 of the consults (79 percent). (See table 1.) For one VAMC, nearly half the consults were not reviewed and triaged within 7 days, and for some consults, we found it took several weeks before the specialty care providers took action. Officials at this VAMC cited a shortage of providers needed to review and triage the consults in a timely manner. We also found that for the majority of the 150 consults in our sample, veterans did not receive care within 90 days of the date the consult was requested, as VHA’s guideline calls for. Specifically, veterans did not receive care within 90 days for 122 of the 150 consults we examined (81 percent). (See table 2.) We also found that for the 28 consults in our sample for which VAMCs provided care to veterans within 90 days, an extended amount of time elapsed before specialty care providers completed all but 1 of them in the consult system. As a result, the consults remained open in the system, making them appear as though the requested care was not provided within 90 days. Although 1 consult remained open for only 8 days from when the care was provided, for the remaining 27 consults, it took between 29 and 149 days from the time care was provided until the consults were completed in the system.
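The 7-day review and 90-day completion guidelines discussed above reduce to simple date arithmetic. The sketch below, with hypothetical dates and function names of our own, flags one consult against both guidelines:

```python
from datetime import date

REVIEW_GUIDELINE_DAYS = 7       # specialty provider review and triage
COMPLETION_GUIDELINE_DAYS = 90  # care provided and consult completed

def check_consult(requested, reviewed, completed):
    """Return (met_review_guideline, met_completion_guideline) for one consult."""
    met_review = (reviewed - requested).days <= REVIEW_GUIDELINE_DAYS
    met_completion = (completed - requested).days <= COMPLETION_GUIDELINE_DAYS
    return met_review, met_completion

# Hypothetical consult: requested Jan 2, reviewed Jan 20, completed Jul 31
print(check_consult(date(2013, 1, 2), date(2013, 1, 20), date(2013, 7, 31)))
# (False, False): reviewed after 18 days, completed after 210 days
```

A consult like the 210-day colonoscopy example described below would fail the completion check even if its initial review had been timely.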
In addition, of the 28 consults, we found that specialty care providers at one VAMC did not properly document the results of all 10 cardiology consults we reviewed, in order to close them in the system. Officials from four of the five VAMCs told us that specialty care providers often do not properly document that consults are complete, which requires the selection of the correct clinical progress note that corresponds to the patient’s consult. Officials attributed this ongoing issue in part to the use of medical residents who rotate in and out of specialty care clinics after a few months, and lack experience with completing consults. Officials from one VAMC told us such rotations require VAMC leadership to ensure new residents are continually trained on how to properly complete consults. To help ensure that specialty care providers consistently choose the correct clinical progress note, this VAMC activated a technical solution consisting of a prompt in its consult system that instructs providers to choose the correct clinical progress note needed to complete consults. Officials stated that this has resulted in providers more frequently choosing the correct notes needed to complete consults. Examples of consults that were not completed in 90 days, or were closed without the veterans being seen, included: For 3 of 10 gastroenterology consults we examined for one VAMC, we found that between 140 and 210 days elapsed from the dates the consults were requested to when the veterans received care. For the consult that took 210 days, an appointment was not available and the veteran was placed on a waiting list before having a screening colonoscopy. For 4 of the 10 physical therapy consults we examined for one VAMC, we found that between 108 and 152 days elapsed, with no apparent actions taken to schedule appointments for the veterans for whom consults were requested. 
The veterans’ medical records indicated that due to resource constraints, the clinic was not accepting consults for non-service-connected physical therapy evaluations. For 1 of these consults, several months passed before the veteran was referred to non-VA care, and the veteran was seen 252 days after the initial consult request. The other 3 consults were sent back to the requesting providers without the veterans receiving care. For all 10 of the cardiology consults we examined for one VAMC, we found that staff initially scheduled veterans for appointments between 33 and 90 days after the request, but medical records for those patients indicated that the veterans either cancelled or did not show for their initial appointments. In several instances, medical records indicated the veterans cancelled multiple times. For 4 of the consults, VAMC staff closed the consults without the veterans being seen; for the other 6 consults, VAMC staff rescheduled the appointments for times that exceeded VHA’s 90-day guideline. VAMC officials cited increased demand for services, patient no-shows, and cancelled appointments among the factors that hinder specialty care providers’ ability to meet VHA’s guideline for completing consults within 90 days. Several VAMC officials also noted a growing demand for both gastroenterology procedures, such as colonoscopies, and consultations for physical therapy evaluations, combined with difficulty in hiring and retaining specialty care providers for these two clinical areas, as causes of periodic backlogs in providing these services. Officials at these VAMCs indicated that they try to mitigate backlogs by referring veterans to non-VA providers for care.
Although officials indicated that use of non-VA care can help mitigate backlogs, several officials also indicated that this requires more coordination between the VAMC, the patient, and the non-VA provider; can require additional approvals for the care; and also may increase the amount of time it takes a VAMC specialty care provider to obtain the results (such as diagnoses, clinical findings, and treatment plans) of medical appointments or procedures. Officials acknowledged that using non-VA care does not always prevent delays in veterans receiving timely care or in specialty care providers completing consults. Additionally, we identified one consult for which the patient experienced delays in obtaining non-VA care and died prior to obtaining needed care. In this case, the patient needed endovascular surgery to repair two aneurysms—an abdominal aortic aneurysm and an iliac aneurysm. According to the patient’s medical record, the timeline of events surrounding this consult was:

September 2013 – Patient was diagnosed with two aneurysms.

October 2013 – VAMC scheduled the patient for surgery in November, but subsequently cancelled the scheduled surgery due to staffing issues.

December 2013 – VAMC approved non-VA care and referred the patient to a local hospital for surgery.

Late December 2013 – After the patient followed up with the specialty care clinic, it was discovered that the non-VA provider lost the patient’s information. The specialty care clinic staff resubmitted the patient’s information to the non-VA provider.

February 2014 – The consult was closed because the patient died prior to the surgery scheduled by the non-VA provider.

According to VAMC officials, they conducted an investigation of this case. They found that the non-VA provider planned to perform the surgery on February 14, 2014, but the patient died the previous day.
Additionally, they stated that according to the coroner, the patient died of cardiac disease and hypertension, and that the aneurysms remained intact. Since launching the consult business rules initiative in May 2013, VHA officials reported overseeing the consult process system-wide primarily by reviewing consult reports created from its national database to monitor VAMCs’ progress in meeting VHA’s timeliness guidelines. However, we found limitations in VHA’s system-wide oversight, as well as in the oversight provided by the five VISNs included in our review. These limitations have affected the reliability of VHA’s consult data and consequently VHA’s ability to effectively assess VAMC performance in managing consults. VHA and VISNs do not routinely assess VAMCs’ management of consults. Although VHA officials reported using system-wide consult data to help ensure that VAMCs are meeting VHA timeliness guidelines, and the five VISNs included in our review reported using consult data to monitor the VAMCs they oversee, neither routinely assesses how VAMCs are actually managing consults. According to federal internal control standards, managers should perform ongoing monitoring, including independent assessments of performance. Such assessments are important to help VHA identify the underlying causes of delays and to help ensure that its consult data reliably reflect the number of veterans waiting for care and how long they have been waiting. VHA and VISN officials reported that they do not routinely audit consults to assess whether VAMC providers have been appropriately requesting, reviewing, and resolving consults in accordance with VHA’s consult policy. Instead, VHA and VISN officials reported their oversight primarily relies on monitoring reports that track VAMCs’ progress in reducing the number of consults unresolved for more than 90 days. VHA officials stated that they delegate oversight of unresolved consults to VAMCs and as such, do not conduct assessments of individual consults.
Further, several VISN officials stated that they did not see the need for such assessments and that ongoing monitoring of consult data has been sufficient. Although VHA and the five VISNs included in our review do not routinely conduct such assessments, our work at five VAMCs found such reviews may help provide insights into the underlying causes of delays. Our examination of a sample of consults revealed several issues with VAMCs’ specialty care clinics’ management of consults, including delays in reviewing and scheduling consults, incorrectly discontinuing consults, and in some cases incorrectly closing a consult as complete even though care had not been provided. We discussed these issues with officials at the five VAMCs included in our review. Officials from two VAMCs stated that in responding to our questions, they researched the actions taken on each consult and learned about some of the root causes contributing to consult delays. For example, one VAMC found that its process for managing consults requested from other VAMCs was not clear to providers and needed to be improved to mitigate delays in processing such consults. Additionally, for a few of the consults for which we identified that care had not been provided, VAMC officials stated that, as a result of our findings, they contacted the veterans to schedule appointments when care was still needed. In addition, VHA officials stated that independent assessments of consults may be helpful and that they would consider conducting them in the future. By primarily relying on reviewing data and not routinely conducting an assessment of VAMCs’ management of consults, VHA and VISN officials may be limited in identifying systemic issues affecting VAMCs’ ability to provide veterans with timely access to care. VHA lacks documentation of how VAMCs addressed unresolved consults. One task under the consult business rules initiative required VAMCs to resolve consults that had been open for more than 90 days. 
VHA provided system-wide guidance outlining how to appropriately complete this task. VAMCs were to conduct clinical reviews of all non- administrative consults and determine whether the consult should be completed or discontinued—thus closing them in the consult system. However, VHA did not require VAMCs to document these decisions or the processes by which they were made, only to self-certify the task had been completed. Further, VHA did not require VISNs to independently verify that the task was completed appropriately. VAMC officials told us their reviews indicated that for many of the consults, care had been provided, but an incorrect clinical progress note was used. Therefore, officials had to select the correct note that corresponded to each consult, which completed the consult in the system. In addition, officials also told us that they discontinued many other consults because they found that patients were deceased or that patients had repeatedly cancelled appointments and thus, they determined that care was no longer needed. However, none of the five VAMCs in our review were able to provide us with specific documentation of these decisions and rationales. At one VAMC, for example, we found that a specialty care clinic discontinued 18 consults the same day that a task for addressing unresolved consults was due. Three of these 18 consults were part of our random sample, and we found no indication that a clinical review was conducted prior to the consults being discontinued. The lack of documentation is not consistent with federal internal control standards, which indicate that all transactions and other significant events need to be clearly documented and stress the importance of the creation and maintenance of related records, which provide evidence of execution of these activities. 
In addition to monitoring VAMC performance in completing the consult business rules initiative tasks, VHA officials told us they are continuing to monitor VAMCs’ performance in addressing unresolved consults. In 2012, VHA estimated that approximately 2 million consults in its system were unresolved for more than 90 days. According to a VHA June 2014 consult tracking report, 285,877 consults were unresolved; VHA officials told us that this number changes daily, and they expect it to continue to decline as VAMCs continue to resolve consults open more than 90 days. VHA officials attributed this reduction in the number of unresolved consults to implementation of the consult business rules initiative and their continued monitoring of VAMC performance in meeting VHA’s consult timeliness guideline. Given the thousands of consults that have been closed by VAMCs, the lack of documentation and independent verification of how VAMCs addressed these unresolved consults raises questions about the reliability of VHA’s consult data and whether the data accurately reflect that patients received needed care in a timely manner, if at all. VAMCs were instructed to track future care consults either by developing markers so such consults could be identified in the consult system, or by using existing mechanisms outside of the consult system, such as an electronic wait list. The electronic wait list is a component of the VistA scheduling system designed for recording, tracking, and reporting veterans waiting for medical appointments. Although VAMCs self-certified completing this task, we found that each of the five VAMCs initially implemented strategies for managing future care consults that were, wholly or in part, non-approved VHA options. For example, one VAMC reported to us that initially its staff entered consult requests for future care into the consult system without the use of a future care flag, and subsequently discontinued these consults if they reached the 90-day threshold.
Discontinuing future care consults closed them in the consult system, and thus prevented the consults from being monitored, which may have increased the risk of the VAMC losing track of these requests for specialty care. Further, during the course of our work, officials from three VAMCs reported revising their initial strategies for managing future care consults. (See table 3.) Some of these VAMCs continued to implement strategies that were non-approved VHA options and could have resulted in consult data that failed to distinguish future care consults from those that were truly delayed. According to federal internal control standards, managers should perform ongoing monitoring, including independent assessments of performance. However, because VHA officials relied on self-certifications submitted by VAMCs, they were not aware of the extent to which VAMCs implemented strategies that were not among VHA’s approved options, nor would they be aware of the extent to which VAMCs have since changed their strategies. As of June 2014, VHA officials told us they did not have detailed information on the various strategies VAMCs have implemented to manage future care consults, and they acknowledged that they had not conducted a system-wide review of VAMCs’ strategies. Furthermore, VHA does not have a formal process by which VAMCs could share best practices system-wide. According to federal internal control standards, identifying and sharing information is an essential part of ensuring effective and efficient use of resources. We found that VAMCs may not be benefiting from the challenges and solutions other VAMCs discovered when implementing strategies for managing future care consults. For example, during our review, we found that one VAMC revised its initial strategy in a way that another VAMC had already found ineffective. Officials at that VAMC stated that they were implementing a new strategy to manage future care consults in a separate electronic system.
However, another VAMC decided not to use a similar electronic system it piloted after finding that it confused providers and required extensive training; that VAMC opted instead to use future care markers in its consult system. A more systematic identification and sharing of best practices for managing future care consults would enable VAMCs to more efficiently implement effective strategies for managing specialty care consults. Officials from VAMCs in our review described sharing best practices with colleagues at other VAMCs in their VISN on an ad hoc basis. Additionally, according to federal internal control standards, management is responsible for developing the detailed policies, procedures, and practices to fit their agency's operations and to ensure that they are built into, and an integral part of, operations. However, we found that VHA has not developed a detailed, system-wide policy on how to address patient no-shows and cancelled appointments, two frequently noted causes of delays in providing care. Instead, VHA policies provide general guidance stating that after a patient does not show for or cancels an appointment, the specialty care clinic staff should review the consult and determine whether or not to reschedule the appointment. VHA officials told us that they allow each VAMC to determine its own approach to managing these occurrences. However, such variations in no-show and cancellation policies are reflected in the consult data and, as a result, may make it difficult to assess and compare VAMCs' performance. For example, if a specialty care clinic allows a patient to cancel multiple specialty care appointments, the consult would remain open and could inaccurately suggest delays in care where none might exist. 
In contrast, if the specialty care clinic limited the number of patient cancellations, the consult would be closed after the allowed number and would not appear as a delay in care, even if a delay had occurred. (See VHA Directive 2010-027, VHA Outpatient Scheduling Processes and Procedures (June 9, 2010), and VHA Directive 2008-056, VHA Consult Policy (Sept. 16, 2008).) Each VAMC's no-show and cancellation policy varied in its requirements. Among the consults in our sample, we found that specialty care providers had scheduled appointments for 127 of the consults, and that patient no-shows and cancelled appointments were among the factors contributing to delays in providing timely care for 66 of these consults (52 percent).

Providing our nation's veterans with timely access to medical care, including outpatient specialty care, is a crucial responsibility of VHA. We and others have identified problems with VHA's consult process used to manage the outpatient specialty care needs of veterans. Our review of a sample of consults found that VAMCs did not always provide veterans with requested specialty care in a timely manner, if at all. In other cases, VAMCs were able to provide the needed care on a timely basis, but specialty care providers failed to properly complete or document the consults, making it appear as though care for veterans was delayed, even when it was not. Limitations in VHA's oversight of the consult process have affected the reliability of VHA's consult data and its usefulness for oversight. Although VHA officials cited VAMCs' progress in reducing the backlog of consults unresolved for more than 90 days, they have not independently verified that VAMCs appropriately closed these consults, calling into question the accuracy of these data. Because of this lack of oversight, VHA officials are not aware of the various strategies VAMCs implemented to manage future care consults, and thus, when monitoring consult data, cannot adequately determine if future care consults are distinguishable from those that are truly delayed. 
Additionally, VHA has not developed a system-wide process for identifying and sharing VAMCs' best practices for managing future care and other types of consults; thus, VAMCs may be implementing strategies that others already have found ineffective or may be unaware of strategies that others have successfully implemented. Further, VHA's decentralized approach for handling patient no-shows and cancelled appointments, as well as other issues, makes it difficult to compare timeliness of providing outpatient specialty care system-wide. Ultimately, this decentralized approach may further limit the usefulness of the data and VHA's and VISNs' ability to assess VAMCs' performance in managing consults and providing timely care to our nation's veterans. To improve VHA's ability to effectively oversee the consult process, and help ensure VAMCs are providing veterans with timely access to outpatient specialty care, we recommend that the Secretary of Veterans Affairs direct the Interim Under Secretary for Health to take the following six actions:

- Assess the extent to which specialty care providers across all VAMCs, including residents who may be serving on a temporary basis, are using the correct clinical progress notes to complete consults in a timely manner, and, as warranted, develop and implement system-wide solutions, such as technical enhancements, to ensure this is done appropriately.

- Enhance oversight of VAMCs by routinely conducting independent assessments of how VAMCs are managing the consult process, including whether they are appropriately resolving consults. This oversight could be accomplished, for example, by VISN officials periodically reviewing a random sample of consults, as we did in our review.

- Require specialty care providers to clearly document in the electronic consult system their rationale for resolving a consult when care has not been provided. 
- Identify and assess the various strategies that all VAMCs have implemented for managing future care consults, including determining the potential effects these strategies may have on the reliability of consult data, and identify and implement measures for managing future care consults that will ensure the consistency of consult data.

- Establish a system-wide process for identifying and sharing VAMCs' best practices for managing consults that may have broader applicability throughout VHA, including future care consults.

- Develop a national policy for VAMCs to manage patient no-shows and cancelled appointments that will ensure standardized data needed for effective oversight of consults.

We provided VA with a draft of this report for its review and comment. VA provided written comments, which are reprinted in appendix II. In its written comments, VA concurred with all six of the report's recommendations. To implement five of the recommendations, VA indicated that the VHA Deputy Under Secretary for Health for Operations and Management will take a number of actions, such as chartering a workgroup to develop clear standard operating procedures for completing and managing consults. VA indicated that target completion dates for implementing these recommendations range from December 2014 through December 2015. For the sixth recommendation, VA indicated that, by December 2014, VHA will establish a system-wide process that facilitates identifying and disseminating VAMC best practices for managing consults. VA also provided technical comments, which we have incorporated as appropriate. As arranged with your office, unless you publicly disclose the contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies of this report to the Secretary of Veterans Affairs and interested congressional committees. 
In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. To send a consult request, providers log on to the consult system and complete an electronic consult request template developed by the VA medical center's specialty care clinic. As shown in figures 3 and 4 below, the information requested in these templates may vary depending on the patient's symptoms. After completing the template, the requesting provider electronically submits the consult for the specialty care provider to review. In addition to the contact named above, Janina Austin, Assistant Director; Jennie F. Apter; Jacquelyn Hamilton; David Lichtenfeld; Brienne Tierney; and Ann Tynan made key contributions to this report.

There have been numerous reports of VAMCs failing to provide timely care to veterans, including specialty care. In some cases, delays have reportedly resulted in harm to patients. In 2012, VHA found that its consult data were not adequate to determine the extent to which veterans received timely outpatient specialty care. In May 2013, VHA launched an initiative to standardize aspects of the consult process at its 151 VAMCs and improve its ability to oversee consults. GAO was asked to evaluate VHA's management of the consult process. This report evaluates (1) the extent to which VHA's consult process has ensured veterans' timely access to outpatient specialty care, and (2) how VHA oversees the consult process to ensure veterans are receiving outpatient specialty care in accordance with its timeliness guidelines. GAO reviewed documents and interviewed officials from VHA and from five VAMCs that varied based on size and location. 
GAO also reviewed a non-generalizable sample of 150 consults requested across the five VAMCs. Based on its review of a non-generalizable sample of 150 consults requested from April 2013 through September 2013, GAO found that the Department of Veterans Affairs' (VA) Veterans Health Administration's (VHA) management of the consult process has not ensured that veterans always receive outpatient specialty care in a timely manner, if at all. Specifically, GAO found that for 122 of the 150 consults reviewed—requests for evaluation or management of a patient for a specific clinical concern—specialty care providers did not provide veterans with the requested care in accordance with VHA's 90-day timeliness guideline. For example, for 4 of the 10 physical therapy consults GAO reviewed for one VA medical center (VAMC), between 108 and 152 days elapsed with no apparent actions taken to schedule an appointment for the veteran. VAMC officials cited increased demand for services and patient no-shows and cancelled appointments as among the factors that lead to delays and hinder their ability to meet VHA's timeliness guideline. Further, for all but 1 of the 28 consults for which VAMCs provided care within 90 days, an extended amount of time elapsed before specialty care providers properly documented in the consult system that the care was provided. As a result, the consults remained open in the system, making them appear as though the requested care was not provided within 90 days. VHA's limited oversight of consults impedes its ability to ensure VAMCs provide timely access to specialty care. VHA officials reported overseeing the consult process primarily by reviewing data on the timeliness of consults; however, GAO found limitations in VHA's oversight, including oversight of its initiative designed to standardize aspects of the consult process. 
Specifically:

- VHA does not routinely assess how VAMCs are managing their local consult processes, and thus is limited in its ability to identify systemic underlying causes of delays.

- As part of its consult initiative, VHA required VAMCs to review a backlog of thousands of unresolved consults—those open more than 90 days—and, if warranted, to close them. However, VHA did not require VAMCs to document their rationales for closing them. As a result, questions remain about whether VAMCs appropriately closed these consults and whether VHA's consult data accurately reflect whether veterans received the care needed in a timely manner, if at all.

- VHA does not have a formal process by which VAMCs can share best practices for managing consults. As a result, VAMCs may not be benefiting from the challenges and solutions other VAMCs have discovered regarding managing the consult process.

- VHA lacks a detailed system-wide policy for how VAMCs should manage patient no-shows and cancelled appointments for outpatient specialty care, making it difficult to compare timeliness in providing this care system-wide.

Consequently, concerns remain about the reliability of VHA's consult data, as well as VHA's oversight of the consult process. GAO recommends that VHA take actions to improve its oversight of consults, including (1) routinely assessing VAMCs' local consult processes, (2) requiring VAMCs to document rationales for closing unresolved consults, (3) developing a formal process for VAMCs to share consult management best practices, and (4) developing a policy for managing patient no-shows and cancelled appointments. VA concurred with all of GAO's recommendations and identified actions it is taking to implement them.
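The consult-status reasoning that runs through this report (open versus closed consults, future care flags, and the 90-day unresolved threshold) can be sketched in code. The record fields, status values, and flag below are illustrative assumptions, not VHA's actual VistA consult schema; the sketch simply shows why discontinuing a consult removes it from monitoring while a future care flag keeps it from counting as a delay.

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

# Illustrative consult record; fields and status values are hypothetical,
# not the actual VistA/CPRS consult schema.
@dataclass
class Consult:
    requested: date
    status: str                      # "open", "completed", "discontinued"
    resolved: Optional[date] = None
    future_care_flag: bool = False   # marker for intentional future care

GUIDELINE = timedelta(days=90)       # VHA's 90-day timeliness guideline

def unresolved_over_90_days(consult: Consult, today: date) -> bool:
    """Flag consults still open past the guideline, excluding consults
    explicitly marked as requests for future care."""
    if consult.status != "open" or consult.future_care_flag:
        return False
    return today - consult.requested > GUIDELINE

today = date(2014, 6, 1)
backlog = [
    Consult(date(2014, 1, 2), "open"),                         # overdue
    Consult(date(2014, 1, 2), "open", future_care_flag=True),  # future care
    Consult(date(2014, 1, 2), "discontinued"),                 # closed, so unmonitored
    Consult(date(2014, 5, 1), "open"),                         # within guideline
]
print(sum(unresolved_over_90_days(c, today) for c in backlog))  # 1
```

The third record illustrates the risk the report describes: once a consult is discontinued, this kind of check no longer sees it, regardless of whether the requested care was ever provided.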
Our survey of the largest sponsors of DB pension plans reveals that they have made a number of revisions to their benefit offerings over roughly the last 10 years. Generally, respondents reported that they revised benefit formulas, converted some plans to hybrid plans (such as cash balance plans), or froze some of their plans. For example, 81 percent of responding sponsors reported that they modified the formulas of one or more of their DB plans. Respondents were asked to report changes for plans or benefits that covered only nonbargaining employees, as well as to report on plans or benefits that covered bargaining unit employees. Fifty-eight percent of respondents who reported on plans for collective-bargaining employees indicated they had generally increased the generosity of their DB plan formulas between January 1997 and the time of their response (see app. I, slide 12). In contrast, 48 percent of respondents reporting on plans for their nonbargaining employees had generally decreased the generosity of their DB plan formulas since 1997. "Unpredictability or volatility of DB plan funding requirements" was the key reason cited for having changed the benefit formulas of plans covering nonbargaining employees (see app. I, slide 14). "Global or domestic competitive pressures" in their industry was the key reason cited for the changes to the plans covering collectively bargained employees (see app. I, slide 13). However, a number of the sponsors who offered reasons for changes to bargaining unit plans also volunteered an additional reason for having modified those plans: these sponsors wrote that inflation or a cost-of-living adjustment was a key reason for their increase to the formula. This suggests that such plans were flat-benefit plans that may have a benefit structure that was increased annually as part of a bargaining agreement. 
Meanwhile, sponsors were far more likely to report that they had converted a DB plan covering nonbargaining unit employees to a hybrid plan design than to have converted DB plans covering collectively bargained employees. For example, 52 percent of respondents who reported on plans for nonbargaining unit employees had converted one or more of their traditional plans to a cash balance or other hybrid arrangement (see app. I, slide 15). Many cited “trends in employee demographics” as the top reason for doing so (see app. I, slide 16). Among respondents who answered the cash balance conversion question for their collectively bargained plans, 21 percent reported converting one or more of their traditional plans to a cash balance plan. Regarding plan freezes, 62 percent of the responding firms reported a freeze, or a plan amendment to limit some or all future pension accruals for some or all plan participants, for one or more of their plans (see app. I, slide 18). Looking at the respondent’s plans in total, 8 percent of the plans were described as hard frozen, meaning that all current employees who participate in the plan receive no additional benefit accruals after the effective date of the freeze, and that employees hired after the freeze are ineligible to participate in the plan. Twenty percent of respondents’ plans were described as being under a soft freeze, partial freeze, or “other” freeze. Although not statistically generalizable, the prevalence of freezes among the large sponsor plans in this survey is generally consistent with the prevalence of plan freezes found among large sponsors through a previous GAO survey that was statistically representative. The vast majority of respondents (90 percent) to our most recent survey also reported on their 401(k)-type DC plans. At the time of this survey, very few respondents reported having reduced employer or employee contribution rates for these plans. 
The vast majority reported either an increase or no change to the employer or employee contribution rates, with generally as many reporting increases to contributions as reporting no change (see app. I, slide 21). The differences reported in contributions by bargaining status of the covered employees were not pronounced. Many (67 percent) of responding firms plan to implement or have already implemented an automatic enrollment feature in one or more of their DC plans. According to an analysis by the Congressional Research Service, many DC plans require that workers voluntarily enroll and elect contribution levels, but a growing number of DC plans automatically enroll workers. Additionally, certain DC plans with an automatic enrollment feature may gradually escalate the amount of the workers' contributions on a recurring basis. The Pension Protection Act of 2006 (PPA) provided incentives to initiate automatic enrollment for those plan sponsors that may not have already adopted an automatic enrollment feature. Seventy-two percent of respondents reported that they were using or planning to use automatic enrollment for their 401(k) plans covering nonbargaining employees, while 46 percent indicated that they were currently doing so or planning to do so for their plans covering collective-bargaining employees (see app. I, slide 22). The difference in automatic enrollment adoption by bargaining status may be due to the fact that nonbargaining employees may have greater dependence on DC benefits. That is, a few sponsors noted they currently automatically enroll employees who may no longer receive a DB plan. Alternatively, automatic enrollment policies for plans covering collective-bargaining employees may not yet have been adopted, as that plan feature may be subject to later bargaining. Health benefits are a large component of employer-offered benefits. 
As changes to the employee benefits package may not be limited to pensions, we examined the provision of health benefits to active workers, as well as to current and future retirees. We asked firms to report selected nonwage compensation costs or postemployment benefit expenses for the year 2006 as a percentage of base pay. Averaging these costs among all those respondents reporting such costs, we found that health care comprised the single largest benefit cost. Active employee health plans and retiree health plans combined to represent 15 percent of base pay (see app. I, slide 24). DB and DC pension costs were also significant, representing about 14 percent of base pay. All of the respondents reporting on health benefits offered a health care plan to active employees and contributed to at least a portion of the cost. Additionally, all of these respondents provided health benefits to some current retirees, and nearly all were providing health benefits to retirees under the age of 65 and to retirees aged 65 and older. Eighty percent of respondents offered retiree health benefits to at least some future retirees (current employees who could eventually become eligible for retiree benefits), although 20 percent of respondents offered retiree health benefits that were fully paid by the retiree. Further, it appears that, for new employees among the firms in our survey, a retiree health benefit may be an increasingly unlikely offering in the future, as 46 percent of responding firms reported that retiree health care was no longer to be offered to employees hired after a certain date (see app. I, slide 25). We asked respondents to report on how an employer’s share of providing retiree health benefits had changed over the last 10 years or so for current retirees. Results among respondents generally did not vary by the bargaining status of the covered employees (app. I, slide 27). 
However, 27 percent of respondents reporting on retiree health benefits for plans covering nonbargaining retirees reported increasing the employer's share of costs, while only 13 percent of respondents reporting on retiree health benefits for retirees from collective-bargaining units indicated such an increase. Not surprisingly, respondents with health benefits covering nonbargained retirees listed "large increases in the cost of health insurance coverage for retirees" as a major reason for increasing the employer's share. This top reason was the same both for all of these respondents and for just those respondents reporting a decrease in the employer's share of costs. Additionally, a number of respondents who mentioned "other" reasons for the decrease in employer costs cited the implementation of predefined cost caps. Our survey also asked respondents to report on their changes to retiree health offerings for future retirees or current workers who may eventually qualify for postretirement health benefits. As noted earlier, 46 percent of respondents reported they currently offered no retiree health benefits to active employees (i.e., current workers) hired after a certain date. Reporting on changes for the last decade, 54 percent of respondents describing their health plans for nonbargaining future retirees indicated that they had decreased or eliminated the firm's share of the cost of providing health benefits (see app. I, slide 30). A smaller percentage (41 percent) of respondents reporting on their health benefits for collectively bargained future retirees indicated a decrease or elimination of benefits. The need to "match or maintain parity with competitor's benefits package" was the key reason for making the retiree health benefit change for future retirees among respondents reporting on their collective-bargaining employees (app. I, slide 32). 
We asked respondents to report their total future liability (i.e., present value in dollars) for retiree health as of 2004. As of the end of the 2004 plan year, 29 respondents reported a total retiree health liability of $68 billion. The retiree health liability reported by our survey respondents represents 40 percent of the $174 billion in DB liabilities that we estimate for these respondents' DB plans as of 2004. According to our estimates, the DB liabilities for respondents reporting a retiree health liability were supported with $180 billion in assets as of 2004. We did not ask respondents about the assets underlying the reported $68 billion in retiree health liabilities. Nevertheless, these liabilities are unlikely to have much in the way of prefunding or supporting assets, due in large part to certain tax consequences. Although we did not ask sponsors about the relative sustainability of retiree health plans given the possible difference in the funding of these plans relative to DB plans, we did ask respondents to report the importance of offering a retiree health plan for purposes of firm recruitment and retention. Specifically, we asked about the importance of making a retiree health plan available relative to making a DB or DC pension plan available. Only a few respondents reported that offering DB or DC plans was less (or much less) important than offering a retiree health plan.

Responding before October 2008—before the increasingly severe downturns in the national economy—most survey respondents reported that they had no plans to revise benefit formulas or to freeze or terminate plans, and no intention of converting to hybrid plans before 2012. Survey respondents were asked to consider how their firms might change specific employee benefit actions between 2007 and 2012 for all employees. 
The specific benefit actions they were asked about were a change in the formula for calculating the rates of benefit accrual provided by their DB plan, a freeze of at least one DB plan, the conversion of traditional DB plans to cash balance or other hybrid designs, and the termination of at least one DB plan. For each possibility, between 60 percent and 80 percent of respondents said their firm was not planning to make the prospective change (see app. I, slide 34). When asked how much they had been or were likely to be influenced by recent legislation or accounting rule changes, such as PPA or the adoption of Financial Accounting Standards Board (FASB) requirements to fully recognize obligations for postretirement plans in financial statements, responding firms generally indicated these were not significant factors in their decisions on benefit offerings. Despite these legislative and regulatory changes to the pension environment, most survey respondents indicated that it was unlikely or very unlikely that their firms would use assets from DB plans to fund qualified health plans; increase their employer match for DC plans; terminate at least one DB plan; amend at least one DB plan to change (either increase or decrease) rates of future benefit accruals; convert a DB plan to a cash balance or hybrid design plan; or replace a DB plan with a 401(k)-style DC plan. Additionally, most respondents indicated "no role" when asked whether PPA, FASB, or pension law and regulation prior to PPA had been a factor in their decision (see app. I, slide 35). Though the majority of these responses indicated a trend of limited action related to PPA and FASB, it is interesting to note that, among the minority of firms that reported they were likely to freeze at least one DB plan for new participants only, most indicated that PPA played a role in this decision. 
Similarly, while only a few firms indicated that it was likely they would replace a DB plan with a 401(k)-style DC plan, most of these firms also indicated that both PPA and FASB played a role in that decision. There were two prospective changes that a significant number of respondents believed would be likely or very likely implemented in the future. Fifty percent of respondents indicated that adding or expanding automatic enrollment features to 401(k)-type DC plans was likely or very likely, and 43 percent indicated that PPA played a major role in this decision. This is not surprising, as PPA includes provisions aimed at encouraging automatic enrollment and was expected to increase the use of this feature. Forty-five percent of respondents indicated that changing the investment policy for at least one DB plan to increase the portion of the plan's portfolio invested in fixed income assets was likely or very likely—with 21 percent indicating that PPA and 29 percent indicating that FASB played a major or moderate role in this decision (see app. I, slide 36). Our survey did not ask about the timing of this portfolio change, so we cannot determine the extent of any reallocation that may have occurred prior to the decline in the financial markets in the last quarter of 2008. Finally, responding sponsors did not appear to be optimistic about the future of the DB system, as the majority stated there were no conditions under which they would consider forming a new DB plan. For the 26 percent of respondents that said they would consider forming a new DB plan, some indicated they could be induced by such changes as greater scope in accounting for DB plans on corporate balance sheets and reduced unpredictability or volatility of plan funding requirements (see app. I, slide 38). Conditions less likely to cause respondents to consider a new DB plan included increased regulatory requirements for DC plans and reduced PBGC premiums (see app. I, slide 39). 
Until recently, DB pension plans administered by large sponsors appeared to have largely avoided the general decline evident elsewhere in the system since the 1980s. Their relative stability has been important, as these plans represent retirement income for more than three-quarters of all participants in single-employer plans. Today, these large plans no longer appear immune to the broader trends that are eroding retirement security. While few plans have been terminated, survey results suggest that modifications in benefit formulas and plan freezes are now common among these large sponsors. This trend is most pronounced among nonbargained plans but is also apparent among bargained plans. Yet, this survey was conducted before the current economic downturn, with its accompanying market turmoil. The fall in asset values and the ensuing challenge to fund these plans place even greater stress on them today. Meanwhile, the survey findings, while predating the latest economic news, add to the mounting evidence of increasing weaknesses throughout the existing private pension system, including low contribution rates for DC plans, high account fees that eat into returns, and market losses that significantly erode the account balances of those workers near retirement. Moreover, the entire pension system still covers only about 50 percent of the workforce, and coverage rates are very modest for low-wage workers. Given these serious weaknesses in the current tax-qualified system, it may be time for policymakers to consider alternative models for retirement security. We provided a draft of this report to the Department of Labor, the Department of the Treasury, and PBGC. The Department of the Treasury and PBGC provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the Secretary of Labor, the Secretary of the Treasury, and the Director of the PBGC, appropriate congressional committees, and other interested parties. 
In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions are listed in appendix III.

Our survey addressed two questions about the nation's largest private sector DB plans: 1) What recent changes have employers made to their pension and benefit offerings? 2) What changes might employers make with respect to their pensions in the future, and how might these changes be influenced by changes in pension law and other factors? The results are not generalizable to all DB plan sponsors. However, the sample can serve as an important indicator of the health of the private DB system and of the sample's possible importance to the Pension Benefit Guaranty Corporation (PBGC). The 44 sponsoring firms that responded represent an estimated 25 percent (or $370 billion) of total DB system liabilities and 19 percent (or 6 million) of the system's DB participants (active, separated-vested, and retired) as of 2004. The most common business line was manufacturing, with other key areas being finance and information. (Figure 1) These firms reported employing on average 75,000 employees in their U.S. operations in 2006.

Most respondents either increased or did not change employer contributions to 401(k) plans for their nonbargaining (NB) employees. (Figure 8) Main reasons for change included a redesigned matching formula as well as compensation adjustments to attract top employees. The vast majority of respondents reported that plans covering NB employees either increased or did not change employee contributions. Main reasons among respondents reporting increased contributions included the addition of an automatic enrollment feature to one or more plans. 72 percent of large sponsors reported either using or planning to use auto enrollment for plans covering NB employees. (Figure 9) 
Most respondents reported that plans covering their bargaining unit employees either increased or did not change employer contributions to 401(k) plans. (Figure 8) No single reason stood out for this result. For bargaining unit employees of most sponsors, plans did not change employee contributions. (Figure 8) 50 percent of large sponsors with plans covering CB employees reported either not using or not planning to use auto enrollment. (Figure 9) 

With respect to health benefit offerings (Figure 10): 

- All responding DB plan sponsors offered health insurance to active employees and contributed to the cost. 
- All responding DB plan sponsors offered health insurance to at least some current retirees—nearly all to both pre-age 65 and age 65-plus retirees. 
- 80 percent provided health insurance to at least some active employees who become eligible for the benefit upon retirement. 
- 20 percent provided health insurance that was fully paid by the retired employee. (Figure 11) 

Compared to respondents reporting on their benefits covering CB employees, respondents with NB employees more often reported a decrease in the employer’s share of the cost of providing health benefits to current retirees. (Figure 12) Main reasons were increases in the cost of health insurance for retirees and for active employees. (Figure 13) 46 percent of plan sponsors no longer offered retiree health benefits to active employees hired after a certain date. 
54 percent decreased or eliminated the firm’s share of the cost of providing health benefits for future retirees who were non-bargaining employees. (Figure 14) Primary reasons cited were large cost increases in health insurance for both retirees and active employees. (Figure 15) 41 percent of sponsors with bargaining unit employees reported a decrease in or elimination of the firm’s share of health care costs for future retirees (Figure 14), and 26 percent reported no change. The primary reason cited was to match or maintain parity with competitors’ benefits packages. (Figure 16) 

Few sponsors would definitely consider forming a new DB plan, but 26 percent of sponsors reported that there were conditions under which they would have considered offering a new DB plan; the most common conditions selected were: 

- Provide sponsors with greater scope in accounting for DB plans on corporate balance sheets 
- DB plans became more effective as an employee retention tool 
- Reduced unpredictability or volatility in DB plan funding requirements (Figure 17) 

To achieve our objectives, we conducted a survey of sponsors of large defined-benefit (DB) pension plans. For the purposes of our study, we defined “sponsors” as the listed sponsor on the 2004 Form 5500 for the largest sponsored plan (by total participants). To identify all plans for a given sponsor, we matched plans through unique sponsor identifiers. We constructed our population of DB plan sponsors from the 2004 Pension Benefit Guaranty Corporation’s (PBGC) Form 5500 Research Database by identifying unique sponsors listed in this database and aggregating plan-level data (for example, plan participants) for any plans associated with this sponsor. As a result of this process, we identified approximately 23,500 plan sponsors. We further limited these sponsors to the largest sponsors (by total participants in all sponsored plans) that also appeared on the Fortune 500 or Fortune Global 500 lists. 
We initially attempted to administer the survey to the first 100 plans that met these criteria, but ultimately we were only able to administer the survey to the 94 sponsoring firms for which we were able to obtain sufficient information for the firm’s benefits representative. While the 94 firms we identified for the survey are an extremely small subset of the approximately 23,500 total DB plan sponsors in the research database, we estimate that these 94 sponsors represented 50 percent of the total single-employer liabilities insured by PBGC and 39 percent of the total participants (active, retired, and separated-vested) in the single-employer DB system as of 2004. The Web-based questionnaire was sent in December 2007, via e-mail, to the 94 sponsors of the largest DB pension plans (by total plan participants as of 2004) that were also part of the Fortune 500 or Fortune Global 500. This was preceded by an e-mail to notify respondents of the survey and to test our e-mail addresses for these respondents. This Web questionnaire consisted of 105 questions and covered a broad range of areas, including the status of current DB plans; the status of frozen plans (if any) and the status of the largest frozen plan (if applicable); health care for active employees and retirees; pension and other benefit practices or changes over approximately the last 10 years and the reasons for those changes (parallel questions asked for plans covering collectively bargained employees and those covering nonbargaining employees); prospective benefit plan changes; the influence of laws and accounting practices on possible prospective benefit changes; and opinions about the possible formation of a new DB plan. 
The first 17 questions and the last question of the GAO Survey of Sponsors of Large Defined Benefit Pension Plans questionnaire mirrored the questions asked in a shorter mail questionnaire (Survey of DB Pension Plan Sponsors Regarding Frozen Plans) about benefit freezes that was sent to a stratified random sample of pension plan sponsors that had 100 or more participants as of 2004. As in the shorter survey, sponsors in the larger survey were asked to report only on their single-employer DB plans. To help increase our response rate, we sent four follow-up e-mails from January through November 2008. We ultimately received responses from 44 of the 94 plan sponsors, an overall response rate of about 47 percent. To pretest the questionnaires, we conducted cognitive interviews and held debriefing sessions with 11 pension plan sponsors. Three pretests were conducted in person and focused on the Web survey, and eight were conducted by telephone and focused on the mail survey. We selected respondents to represent a variety of sponsor sizes and industry types, including a law firm, an electronics company, a defense contractor, a bank, and a university medical center, among others. We conducted these pretests to determine whether the questions were understandable and unburdensome and whether they measured what we intended. On the basis of the feedback from the pretests, we modified the questions as appropriate. The practical difficulties of conducting any survey may introduce other types of errors, commonly referred to as nonsampling errors. For example, differences in how a particular question is interpreted, the sources of information available to respondents, or the types of people who do not respond can introduce unwanted variability into the survey results. We included steps in both the data collection and data analysis stages to minimize such nonsampling errors. 
We took the following steps to increase the response rate: developing the questionnaire, pretesting the questionnaires with pension plan sponsors, and conducting multiple follow-ups to encourage responses to the survey. We performed computer analyses of the sample data to identify inconsistencies and other indications of error and took steps to correct inconsistencies or errors. A second, independent analyst checked all computer analyses. We initiated our audit work in April 2006. We issued results from our survey regarding frozen plans in July 2008. We completed our audit work for this report in March 2009 in accordance with all sections of GAO’s Quality Assurance Framework that are relevant to our objectives. The framework requires that we plan and perform the engagement to obtain sufficient and appropriate evidence to meet our stated objectives and to discuss any limitations in our work. We believe that the information and data obtained, and the analysis conducted, provide a reasonable basis for any findings and conclusions. 

Barbara D. Bovbjerg, (202) 512-7215 or [email protected]. In addition to the contact above, Joe Applebaum, Sue Bernstein, Beth Bowditch, Charles Ford, Brian Friedman, Charles Jeszeck, Isabella Johnson, Gene Kuehneman, Marietta Mayfield, Luann Moy, Mark Ramage, Ken Stockbridge, Melissa Swearingen, Walter Vance, and Craig Winslow made important contributions to this report. 

The number of private defined benefit (DB) pension plans, an important source of retirement income for millions of Americans, has declined substantially over the past two decades. For example, about 92,000 single-employer DB plans existed in 1990, compared to just under 29,000 single-employer plans today. Although this decline has been concentrated among smaller plans, there is widespread concern that large DB plans covering many participants have modified, reduced, or otherwise frozen plan benefits in recent years. 
GAO was asked to examine (1) what changes employers have made to their pension and benefit offerings, including to their defined contribution (DC) plans and health offerings, over the last 10 years or so, and (2) what changes employers might make with respect to their pensions in the future, and how these changes might be influenced by changes in pension law and other factors. To gather information about overall changes in pension and health benefit offerings, GAO asked 94 of the nation's largest DB plan sponsors to participate in a survey; 44 of these sponsors responded. These respondents represent about one-quarter of the total liabilities in the nation's single-employer insured DB plan system as of 2004. The survey was largely completed prior to the current financial market difficulties of late 2008. GAO's survey of the largest sponsors of DB pension plans revealed that respondents have made a number of revisions to their retirement benefit offerings over the last 10 years or so. Generally speaking, they have changed benefit formulas; converted to hybrid plans (such plans are legally DB plans, but they contain certain features that resemble DC plans); or frozen some of their plans. Eighty-one percent of responding sponsors reported that they modified the formula for computing benefits for one or more of their DB plans. Among all plans reported by respondents, 28 percent (or 47 of 169) were under a plan freeze--an amendment to the plan to limit some or all future pension accruals for some or all plan participants. The vast majority of respondents (90 percent, or 38 of 42 respondents) reported on their 401(k)-type DC plans. Regarding these DC plans, a majority of respondents reported either an increase or no change to the employer or employee contribution rates, with roughly equal responses to both categories. 
About 67 percent (or 28 of 42) of responding firms plan to implement or have already implemented an automatic enrollment feature in one or more of their DC plans. With respect to health care offerings, all of the (42) responding firms offered health care to their current workers. Eighty percent (or 33 of 41 respondents) offered a retiree health care plan to at least some current workers, although 20 percent (or 8 of 41) of respondents reported that retiree health benefits were to be fully paid by retirees. Further, 46 percent (or 19 of 41) of responding firms reported that retiree health care is no longer offered to employees hired after a certain date. At the time of the survey, most sponsors reported no plans to revise plan formulas, freeze or terminate plans, or convert to hybrid plans before 2012. When asked about the influence of recent legislation or changes to the rules for pension accounting and reporting, responding firms generally indicated these were not significant factors in their benefit decisions. Finally, a minority of sponsors said they would consider forming a new DB plan. Those sponsors that would consider forming a new plan might do so if there were reduced unpredictability or volatility in DB plan funding requirements and greater scope in accounting for DB plans on corporate balance sheets. The survey results suggest that the long-time stability of larger DB plans is now vulnerable to the broader trends of eroding retirement security. The current market turmoil appears likely to exacerbate this trend. 
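As a quick consistency check, the rounded percentages cited in the survey results above can be reproduced from the stated respondent counts. This is a minimal sketch; the helper name `pct` is illustrative and not from the report.

```python
# Sketch: reproduce the rounded survey percentages from the counts
# cited in the summary (counts and totals taken from the report text).
def pct(numerator, denominator):
    """Percentage rounded to the nearest whole number."""
    return round(100 * numerator / denominator)

# (count, total, percent as stated in the report)
cited = [
    (47, 169, 28),  # plans under a plan freeze
    (38, 42, 90),   # respondents reporting on 401(k)-type DC plans
    (28, 42, 67),   # firms implementing automatic enrollment
    (33, 41, 80),   # firms offering retiree health care
    (19, 41, 46),   # firms no longer offering it to new hires
]
for count, total, stated in cited:
    assert pct(count, total) == stated
```

Each stated figure matches the underlying count to the nearest whole percent.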
In the banking industry, the specific regulatory configuration for a banking institution generally depends on the type of charter the institution chooses. Depository institution charter types include commercial bank and thrift charters: Commercial banks originally focused on the banking needs of businesses but over time have broadened their services. Thrifts include savings banks, savings associations, and savings and loans and were originally created to serve the needs—particularly the mortgage needs—of those not served by commercial banks. Charters may be obtained at the state or federal level. State regulators charter institutions and participate in the institutions’ oversight, but all institutions that have federal deposit insurance have a federal prudential regulator. The federal prudential regulators—which generally may issue regulations, conduct supervision, and take enforcement actions against industry participants within their jurisdiction—are OCC, Federal Reserve, and FDIC, and their basic functions are summarized in table 1. Additionally, FDIC insures deposits in banks and thrifts. Large banking organizations in the United States generally are organized as bank holding companies (BHC), which are companies that can control, among other entities, one or more banks. Typically, a large U.S. parent (or top tier) BHC owns a number of domestic depository institutions that also engage in lending and other activities. A BHC also may own nonbanking and foreign entities that engage in a broader range of business activities, which may include securities dealing and underwriting, insurance, real estate, leasing and trust services, or asset management. A BHC’s nonbank subsidiaries are affiliates of the BHC’s bank subsidiaries. Some large U.S. BHCs have thousands of subsidiaries. The Bank Holding Company Act of 1956, as amended, contains a comprehensive federal framework for the supervision and regulation of BHCs and their nonbank subsidiaries. 
Generally, any company that seeks to acquire control of an insured bank or BHC shall apply for approval as a BHC with the Federal Reserve. Under the Bank Holding Company Act, BHCs are subject to, among other things, consolidated supervision by the Federal Reserve. Further, the act restricts the activities of the BHC and its affiliates to those that are closely related to banking or, for qualified financial holding companies, activities that are financial in nature. In general, swaps and security-based swaps (collectively referred to as swaps in this report, unless otherwise noted) are types of derivative contracts that involve ongoing exchanges of payments for a specified period. Swaps and other derivatives have one or more “underlyings” (i.e., specified interest rate, security price, commodity price, foreign exchange rate, index of prices or rates, or other variable) and one or more notional amounts (i.e., number of currency units, shares, bushels, pounds, or other units specified in the contract) that help determine the amount of the payments. For example, an end-user seeking to hedge its interest rate risk may enter into an interest rate swap with a dealer to exchange fixed-rate interest payments of 5 percent of $10 million for floating interest payments based on the 3-month London Interbank Offered Rate. Under the terms of the swap, the dealer agrees to make quarterly payments of 5 percent multiplied by $10 million to the end-user, and the end-user agrees to make quarterly payments of the 3-month London Interbank Offered Rate multiplied by $10 million. The notional value of this contract would be $10 million because that is the specified value on which exchanged interest payments are based. Swaps and other derivatives volumes generally are measured by their notional amounts. For example, the notional amount of derivative contracts held by insured U.S. 
commercial banks and savings associations increased from around $17 trillion in 1995 to around $165 trillion in 2016. However, notional amounts generally do not represent amounts at risk. Financial and nonfinancial firms use swaps and other derivatives to hedge risk, to speculate, or for other purposes, such as to reduce uncertainty. For example, an airline may enter into a commodity swap to lock in its fuel price over a certain time horizon, so that it can better manage its costs. Banks and other end-users that are exposed to maturity, currency, or interest rate mismatches between assets and liabilities may enter into swaps to hedge their exposure. Speculators may enter into equity derivatives to speculate on the direction of equity markets in order to make a profit, understanding that the profit or loss from the swap can be large in comparison to the cost of entering the swap. Unlike futures contracts, which are standardized financial contracts that are traded on exchanges, swaps traditionally have been privately negotiated between two counterparties in the OTC market. Types of swaps include the following: 

Interest rate swaps are contracts in which two parties agree to exchange interest cash flows on one or more notional principal amounts at certain times in the future according to an agreed-on formula. Banks, corporations, sovereigns, and other institutions use swaps to manage their interest-rate risks or speculate on interest-rate movements. 

Foreign exchange swaps are simultaneous purchases and sales of a certain amount of foreign currency for two different value dates. Corporations use such swaps to hedge their assets and liabilities incurred as a result of their overseas operations. Investors (e.g., international mutual funds) use such swaps to gain exposure to markets or to hedge currency risk. 

Commodity swaps are agreements between two counterparties to make periodic exchanges of cash based on a notional quantity of a specified commodity or related index. 
The term “commodity” encompasses agricultural products, base metals, and energy products. Market participants include commodity producers and users, hedge funds, and mutual funds. 

Equity swaps are transactions in which payments referenced to the return on a certain equity index (e.g., S&P 500) or an equity and an interest rate are exchanged and are usually based on a fixed notional amount. End-users of equity swaps include money managers, hedge funds, insurance companies, corporations, and finance companies. 

A credit default swap is a contract between a seller and buyer of protection against the risk of default on a debt obligation issued by a reference entity and serves as an insurance policy that protects the buyer against the loss on the debt obligation in case of a default by the debt issuer (i.e., reference entity). The protection buyer makes periodic payments over the contract’s life, and the premium is a percentage of the contract’s notional value. If a credit event occurs (e.g., bankruptcy), the premium payment stops, and the protection seller pays the buyer the notional amount or agreed-to default payment. The debt obligation can include a loan, a bond, an asset-backed security, or a credit index. For example, an insurer that has invested in bonds issued by a company may go to a bank swap dealer to buy protection against the risk of the company defaulting on its bonds. In general, credit default swaps are between institutional investors and dealers. 

For most OTC derivative transactions, a dealer is one of the two counterparties to the contract. The 102 entities provisionally registered with CFTC as swap dealers (as of April 2017) include U.S. and foreign banks, securities broker-dealers, and futures commission merchants. Some BHCs own two or more swap dealers. Dealers often trade with other dealers, such as to hedge, or offset, risk from their OTC derivatives trades with their client firms or other risks. 
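The fixed-for-floating interest rate swap example given earlier (a dealer paying 5 percent fixed against 3-month LIBOR on a $10 million notional) can be sketched in code. This is an illustrative simplification, not terms from the report: it assumes a flat quarterly day-count fraction of 0.25 and nets the two legs, whereas real contracts specify exact day-count and payment conventions.

```python
# Illustrative sketch (simplifying assumptions, not report methodology):
# dealer pays 5 percent fixed, end-user pays 3-month LIBOR, both on a
# $10 million notional, settled quarterly.
NOTIONAL = 10_000_000
FIXED_RATE = 0.05

def quarterly_net_payment(floating_rate, day_count_fraction=0.25):
    """Net amount the dealer owes the end-user for one quarter.

    Positive means the dealer pays the end-user; negative means the
    end-user pays the dealer. In practice only this net difference is
    exchanged; the notional itself never changes hands.
    """
    fixed_leg = FIXED_RATE * NOTIONAL * day_count_fraction
    floating_leg = floating_rate * NOTIONAL * day_count_fraction
    return fixed_leg - floating_leg

# If 3-month LIBOR fixes at 4 percent, the dealer owes the net
# difference: (0.05 - 0.04) * 10,000,000 * 0.25 = 25,000.
print(quarterly_net_payment(0.04))
```

The sketch also illustrates why notional amounts overstate amounts at risk, as the text notes: the cash actually exchanged is a small fraction of the $10 million notional.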
Section 716 prohibits the provision of federal assistance to banks that engage in certain swap activities but allows them to move, or “push out,” such activities to nonbank affiliates of the bank. As such, a BHC can continue to engage in those swaps through its nonbank subsidiaries. Section 716 of the Dodd-Frank Act does not directly prohibit a bank from engaging in swap activities. Rather, it provides that no federal assistance be provided to any “swaps entity” unless the entity restricts its swap activities to those permitted under the provision. The term “federal assistance” is defined as the use of any advances from any Federal Reserve credit facility or discount window that is not part of a program or facility with broad-based eligibility under section 13(3)(A) of the Federal Reserve Act, or FDIC insurance or guarantees, for the purpose of (A) making any loan to, or purchasing any stock, equity interest, or debt obligation of, any swaps entity; (B) purchasing the assets of any swaps entity; (C) guaranteeing any loan or debt issuance of any swaps entity; or (D) entering into any assistance arrangement (including tax breaks), loss sharing, or profit sharing with any swaps entity. Covered depository institutions, including insured depository institutions, are included within the definition of a swaps entity only if they are registered swap dealers or security-based swap dealers. Because banks do not want to jeopardize their access to federal assistance, section 716 effectively prohibits bank swap dealers from engaging in swap activity unless they restrict that activity to swaps permitted under the provision. 
The original section 716 covered several types of swap activities: (1) swaps involving rates or reference assets permissible for investment by a national bank, (2) credit default swaps that are cleared by a derivatives clearing organization or a clearing agency, and (3) swap transactions used for hedging or other similar risk-mitigating activities directly related to the bank’s activities. Consequently, the original section 716 generally prohibited the provision of federal assistance to bank swap dealers that engaged in swap activity involving most equity swaps, commodity swaps referencing physical commodities (except for precious metals), and noncleared credit default swaps, unless the swaps were used for hedging or mitigating bank risk. As shown in figure 1, the original section 716 became effective in July 2013, but the law required the appropriate federal banking agency to permit a transition period of up to 24 months for swap entities that are insured depository institutions to divest or cease certain swap activities. Several banks applied for and were granted 2-year extensions by the Federal Reserve and OCC, and those financial institutions had until July 16, 2015, or later to comply with section 716. Under the statute, these entities had the option of applying for an extension of the transition period for up to 1 additional year. Section 716 was amended in December 2014, before the end of each 2-year transition period that had been granted. The amended section 716 significantly narrowed the scope of the original provision. The amended section 716 prohibits the provision of federal assistance only to bank swap dealers that engage in swap activities involving structured finance swaps (e.g., swaps on asset-backed securities), unless the swaps are used for hedging or unless the asset-backed securities underlying the swaps satisfied credit quality and classification requirements to be set forth by prudential regulators through regulations. 
Bank swap dealers are permitted to engage in swap activities involving all other types of swaps without losing access to federal assistance, including those that would have been covered by the original section 716. Additionally, like the original section 716, the amended section 716 allows a covered bank to retain swaps entered into before the bank’s compliance date (called legacy swaps). Thus, bank swap dealers that were granted 2-year transitions generally cannot enter into new structured finance swaps on or after July 16, 2015, without losing access to federal assistance, unless the new swaps generally are used for hedging or risk-management purposes. The original and amended versions of section 716 allowed covered banks to move their swap activities covered under section 716 to their nonbank affiliates, so long as the bank was part of a BHC or savings and loan holding company. Figure 2 provides a simplified example of a BHC that has both bank and nonbank subsidiaries. In the figure, the U.S. commercial bank is a bank swap dealer that engages in section 716-covered swap activities under either version of the provision. To comply, the BHC could move the bank’s covered swap activities to one or both of its nonbank affiliates, including foreign nonbank affiliates. Currently, banks are permitted to structure, trade, or deal in a broad range of exchange-traded and OTC derivatives. For banks to conduct derivatives activities, federal banking regulators generally require the banks to have adequate risk management and measurement systems and controls to conduct the activities in a safe and sound manner, and they must have sufficient capital to support the risks associated with the activities. For example, before a bank conducts derivatives activities, senior management should ensure that all appropriate regulatory approvals are obtained and that adequate operational procedures and risk control systems are in place. 
After the bank’s initial entry into derivatives activities has been properly approved, any significant changes in such activities or any new derivatives activities should be approved by the board of directors or, as appropriate, senior management. Other specific requirements include the following: 

- Banks should have comprehensive written policies and procedures to govern their use of derivatives. 
- Senior management should establish an independent unit or individual responsible for measuring and reporting derivatives risk exposures. 
- Banks should have comprehensive risk management systems that are commensurate with the scope, size, and complexity of their activities and the risks they assume. 
- Banks should have audit coverage of their derivatives activities adequate to ensure timely identification of internal control weaknesses or system deficiencies. 
- The board of directors should ensure that the bank maintains sufficient capital to support the risk exposures (e.g., market risk, credit risk, liquidity risk, operational risk, legal risk) that may arise from its derivatives activities. 

Bank swap dealers are subject to their federal banking regulator’s prudential requirements, including minimum OTC swap margin (or collateral) requirements. In addition, as discussed in the next section, banks that engage in swaps or security-based swap activities in amounts above a specified threshold must also register as swap or security-based swap dealers with CFTC or the SEC, respectively. Title VII of the Dodd-Frank Act establishes a new regulatory framework for swaps. The act authorizes CFTC to regulate swaps and SEC to regulate security-based swaps with the goals of reducing risk, increasing transparency, and promoting market integrity in the financial system. Title VII includes the following four major swaps reforms: 

Registration, capital, margin, and other requirements. 
Title VII provides for the registration and regulation of swap dealers and major swap participants, including subjecting them to (1) prudential regulatory requirements, such as minimum capital and minimum initial and variation margin requirements, and (2) business conduct requirements to address, among other things, interaction with counterparties, disclosure, and supervision. 

Mandatory clearing. Title VII imposes mandatory clearing requirements on certain swaps, but it exempts, among other things, certain end users that use swaps to hedge or mitigate commercial risk. 

Exchange trading. Title VII requires certain swaps subject to mandatory clearing to be traded and executed on a regulated trading platform, including an organized exchange or swap execution facility, unless no facility offers the swap for trading. 

Mandatory reporting. Title VII requires all swaps to be reported to a registered swap data repository or, if no such repository will accept the swap data, to CFTC or SEC, and requires that transaction and pricing data for newly executed swaps be reported to the public. 

Figure 3 illustrates these reforms and some of the differences between swaps traded on exchanges and cleared through clearinghouses and noncleared swaps. 

Our analysis shows that of the 15 U.S. banks covered by section 716, 4 had to take steps to comply with the amended provision, compared to 11 that would have had to take steps to comply with the original provision. Approximately 1,400 U.S. banks reported holding swaps or other derivatives in the second quarter of 2015, and 15 of them, about 1 percent, had registered with CFTC as swap dealers and were thus covered entities under both versions of section 716. As shown in figure 4, as of September 30, 2016, the 15 covered banks collectively held a total notional amount of around $176 trillion in derivatives, which represented around 99 percent of the derivatives held by all U.S. banks. 
However, this activity was concentrated among four banks, which collectively held a total notional amount of about $159 trillion in derivatives, or around 90 percent of the derivatives held by the 15 U.S. bank swap dealers. The amended section 716 affected four U.S. bank swap dealers that conducted structured finance swap activities, and we estimated that these banks “pushed out” about $265 billion of such swaps in notional value (or less than 1 percent of the banks’ total derivatives). Because originally covered swaps generally included credit, commodity, and equity swaps, the original section 716 would have affected 11 banks that are swap dealers in these markets. We estimated that these banks continue to hold about $10.5 trillion of such swaps in notional value (or around 6 percent of their total derivatives) due to the section 716 amendment. Our analysis shows that of the 15 U.S. banks registered as swap dealers, 4 of the banks were dealers in structured finance swaps and had to stop such swap activity by July 16, 2015, or lose access to federal assistance under the amended section 716. As discussed in more detail later, the four banks moved their structured finance swap activity to their nonbank affiliates. In that regard, the structured finance swaps entered into by these nonbank swap dealers on or after July 16, 2015, represent the amount of swaps that the four banks “pushed out” to the nonbank affiliates. Based on data collected by swap data repositories and simplifying assumptions, we estimated that nonbank affiliates of the four swap dealers collectively entered into around 16,300 structured finance swaps with a total notional amount of around $265 billion between July 16, 2015, and September 30, 2016. This total is the amount that presumably would have been traded by the four banks if they did not have to push them out to nonbank affiliates to remain eligible for federal assistance. 
These swaps include only structured finance swaps on asset-backed securities indexes and exclude structured finance swaps on single-name asset-backed securities. Our estimate assumes that one of the four nonbank swap dealer affiliates was a party to every new swap, none of the new swaps were entered into for hedging or risk management purposes, and there were no new structured finance swaps on single-name asset-backed securities. According to our estimate, the amount of swaps affected by the amended section 716 would represent less than 1 percent of the total notional amount of the derivatives held by the four banks as of September 30, 2016 (or around 4 percent of their credit derivatives), if the banks were allowed to hold such derivatives. Our analysis shows that of the 15 U.S. banks registered as swap dealers, 11 banks (including the 4 that were affected by the amended section 716) would have had to take steps to comply with the original provision. The 11 banks are dealers in originally covered swaps and were able to continue to engage in such swap activities (with the exception of certain structured finance swaps) due to the section 716 amendment. Based on Call Report data, we estimated that the 11 bank swap dealers collectively held a total notional amount of around $10.5 trillion in credit, equity, and commodity and other derivatives as of September 30, 2016. This amount, which is almost 40 times larger than our estimate of affected swaps under the amended section 716, approximates the maximum notional amount of covered swaps that the 11 dealers could have had to move out of the banks under the original section 716, but it likely is an overestimate for the reasons discussed later. As shown in figure 5, the total notional amount of derivatives covered by the original section 716 comprises about 6 percent of the 11 banks’ total derivatives notional value. Moreover, 4 of the 11 banks account for 94 percent of the $10.5 trillion estimated notional value. 
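The key ratios cited in this section can be reproduced from the rounded notional figures quoted in the text. This is a back-of-the-envelope sketch using the report's rounded numbers, not the underlying Call Report or swap data repository data.

```python
# Back-of-the-envelope check using the rounded notional amounts quoted
# in the text (U.S. dollars, notional, as of September 30, 2016).
TRILLION = 1e12
BILLION = 1e9

all_15_dealers = 176 * TRILLION       # 15 covered bank swap dealers
largest_4 = 159 * TRILLION            # 4 largest of the 15
amended_716_swaps = 265 * BILLION     # structured finance swaps pushed out
original_716_swaps = 10.5 * TRILLION  # swaps the original provision covered

# Around 90 percent of the 15 dealers' derivatives sat at 4 banks.
assert round(100 * largest_4 / all_15_dealers) == 90

# The original provision's covered swaps were almost 40 times larger
# in notional value than those affected by the amended provision.
assert round(original_716_swaps / amended_716_swaps) == 40
```

Both cited figures check out against the underlying amounts to the stated precision.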
Although our estimate of the amount of swaps affected by the original section 716 is relatively small, it likely is an overestimate for several reasons. First, the original section 716 would have allowed bank swap dealers to continue to hold covered legacy swaps after the provision took effect. Second, it also would have allowed bank swap dealers to use covered swaps for hedging. Third, it would have covered noncleared credit default swap activities but not cleared credit default swap activities. These factors would affect the total notional amount of swaps that would have been moved out of the banks under the original provision, but publicly available data do not allow us to distinguish between (1) legacy swaps and new swaps entered into on or after July 16, 2015, (2) swaps used and not used for hedging, (3) commodity swaps referencing bullion and other commodity swaps, and (4) cleared and noncleared credit default swaps. According to affected BHCs and end-users we interviewed, the steps required to implement the amended section 716 imposed certain costs on BHCs and swap end-users, although BHCs generally indicated that the costs were easily absorbed. In contrast, BHCs and end-users stated that implementation costs would likely have been significantly greater under the original section 716 due to the larger scope of covered swaps and the much larger volume of affected end-users. In addition, because section 716 could cause affected end-users to enter into swaps with a bank’s affiliated nonbank swap dealer—splitting an end-user’s swaps into at least two separate portfolios—it can reduce the efficiency with which dealers and end-users manage their counterparty credit risk. These efficiency losses can lead to higher counterparty credit risk or higher collateral costs and liquidity risk. 
Because significantly more end-users’ portfolios likely would have been split under the original section 716, the losses in efficiencies likely would have been much greater and would have led to larger increases in risk or related collateral costs. However, end-users could mitigate their efficiency losses by having their bank swap dealers move their legacy swaps to the nonbank swap dealer affiliates. To avoid being subject to the prohibition on federal assistance under the amended section 716, BHCs had to undertake various steps to move the covered swap activity out of the banks and into nonbank subsidiaries or to cease such activity throughout the company. Generally, these steps included (1) identifying swap activity covered by section 716 at the bank swap dealer, (2) moving this swap activity out of the bank into nonbank affiliates or ceasing such activity, and (3) for swaps moved to nonbank affiliates, negotiating new master netting agreements—such as the widely used ISDA Master Agreement published by the International Swaps and Derivatives Association (ISDA)—with affected end-users, as needed. According to stakeholders we interviewed, the actions that BHCs would have been required to take to execute these steps would have been significantly more complicated and costly under the original section 716 for both BHCs and end-users due to the larger scope of covered swaps and the much larger volume of affected end-users relative to the amended provision. As discussed previously, we estimated that the notional value of affected swaps would have been almost 40 times larger under the original versus the amended section 716. 
In addition, regulators, market experts, and market participants we spoke with noted that the structured finance swap market—that is, the swaps affected by the amended section 716—was active before the 2007–2009 crisis but since then has become a relatively small market, with one or two actively traded indices primarily used by some financial end-users, such as hedge funds or investment companies. In contrast, a wide variety and large number of financial and commercial end-users use swaps that were covered by the original section 716—commodity, equity, or noncleared credit default swaps—to manage risks in their businesses. The four banks that took action in response to the amended section 716 told us that they generally have not had major difficulties implementing it. To comply with the amended section 716, the BHCs of the four banks that engaged in structured finance swap activity stopped their banks from engaging in such swap activity and moved the activity to existing nonbank affiliates of the bank that were already registered as swap dealers. The BHCs told us that they primarily incurred legal and operational costs in doing so, but that such costs were generally easily absorbed by the firm and would have been much larger under the original provision.

Amended section 716 operational costs. BHCs stated that after identifying affected structured finance swaps at the bank, each BHC also identified one or two existing nonbank swap dealer affiliates of the bank to which it could readily move its bank’s structured finance swap activity. The BHCs stated that this decision was relatively self-evident because they already had registered nonbank swap dealer affiliates that had the infrastructure and processes in place to trade structured finance swaps. 
Consequently—and also because the volume of swaps affected by the amended section 716 was relatively small—the operational costs of moving the swaps to nonbank subsidiaries were relatively manageable, according to the four BHCs.

Amended section 716 legal costs. The BHCs stated they incurred some legal costs in establishing new ISDA Master Agreements when needed. ISDA Master Agreements typically are entered into between two swap counterparties, such as the bank swap dealer and a swap end-user. To trade structured finance swaps with a nonbank swap dealer as a result of section 716 restrictions, an affected end-user had to enter into another ISDA Master Agreement with the nonbank unless an agreement was already in place. The four affected BHCs stated that affected clients generally entered into new ISDA contracts with the nonbank affiliate as needed. Some banks stated that they moved legacy swaps to nonbank affiliates per client request. Under the original section 716, implementation costs for the BHCs of the 11 bank swap dealers that would have been affected likely would have been much larger because the original provision covered more types of swaps and the number of affected end-users would have been significantly larger.

Original section 716 operational costs. In response to the original section 716, the BHCs that would have been affected stated that they likely would have taken steps similar to those taken by BHCs affected by the amended version. First, BHCs said they would have had to identify affected originally covered swaps at the bank. Then, BHCs generally stated that they were considering whether to move such swap activity to existing nonbank affiliates and/or newly created nonbank affiliates, or whether they should cease dealing originally covered swaps. 
For example, three BHCs told us that they might have had to move originally covered swaps to multiple nonbank affiliates in the United States and globally because no one nonbank affiliate could have served as a dealer for such swaps. Moreover, two of them and two other BHCs stated that they might not have viable nonbank affiliates that could have absorbed all of the affected activity and might have had to create new nonbank affiliates. In both cases, BHCs stated that they likely would have needed to spend time, divert capital, and duplicate bank swap trading systems and processes at the nonbank affiliates to make them viable. Lastly, a smaller BHC told us that the cost of creating new nonbank affiliates would have been significant and that it likely would have stopped its swap activity.

Original section 716 legal costs. BHCs also noted the potential challenges of negotiating a much larger volume of ISDA Master Agreements under the original section 716. For example, a BHC told us that the number of its counterparties affected by the amended section 716 was a few hundred, compared to several thousand that would have been affected under the original section 716. Another BHC stated it had less than 50 swaps in categories covered by the original section 716, but other BHCs stated they had a couple thousand to hundreds of thousands of such swaps. Like the amended section 716, the original section 716 did not require a bank swap dealer to move legacy swaps to its affiliated nonbank swap dealer to remain eligible for federal assistance, but as discussed later, the bank’s clients might have requested their swaps to be moved to the nonbank swap dealer to take advantage of netting efficiencies. 
According to market participants, negotiating an ISDA Master Agreement could take 1 to 12 months, and some BHCs expected that it would take them between 1 and 2 years to redocument the agreements with all of their affected clients under the original section 716, in part depending on the extent to which clients would have sought renegotiation of contract terms with the nonbank affiliates. In addition, all 11 BHCs likely would have had to negotiate these agreements with thousands of the affected end-users at around the same time. Because section 716 directly affects the relationship between bank swap dealers and end-user clients, both the original and amended provisions involve some operational and legal costs for affected end-users as well. According to two market participants and a regulator, end-users affected by the amended section 716 typically included hedge funds, banks, pension funds, and insurance companies. BHCs and end-users we interviewed stated that they incurred costs establishing new swap trading relationships with nonbank affiliates of the bank, if a relationship did not exist already, and maintaining these relationships. They said that operationally, end-users would have had to ensure their information management systems and processes were able to trade structured finance swaps with the nonbank affiliates of the banks instead of bank swap dealers. They also stated that legally, at least some affected end-users had to enter into new ISDA Master Agreements with nonbanks as a result of the amended section 716. For example, two end-users we spoke with stated that, in doing so, they used the same terms of their contracts with the banks, and one end-user said this process took 4 to 8 weeks. Overall costs to end-users under the original section 716 likely would have been greater than under the amended section 716 because the universe of affected clients would have been much larger. 
According to BHCs and end-users we interviewed, both financial end-users (such as hedge funds, other banks, insurance companies, and investment companies) and commercial end-users (such as agricultural businesses, airlines, and oil and natural gas producers) use commodity, equity, or noncleared credit default swaps to manage risks in their businesses or for other purposes. They stated, and regulators agreed, that many more end-users would have had to incur operational costs of maintaining trading accounts with more dealers and spend legal resources and time renegotiating ISDA agreements than they would have under the amended section 716. Some BHCs and a market participant stated that at least some affected end-users likely would have asked for better terms rather than simply replicating the terms of their original contract with the banks, as happened under the amended statute. Lastly, some BHCs and end-users we spoke with stated that the original section 716 could have increased the trading costs of affected BHCs enough to increase the overall cost of trading swaps for end-users in the long run. Specifically, they stated that it typically costs nonbank dealers more to engage in swap activity than bank dealers due, in part, to differences in their capital costs. According to these stakeholders, affected BHCs likely would have passed at least part of these higher costs on to end-users, such as in the form of wider swap bid-ask spreads. According to stakeholders we interviewed, because the restrictions under both versions of section 716 may cause affected bank end-users to enter into swaps with the bank swap dealer and its nonbank swap dealer affiliate, end-users may split their swap portfolios into two portfolios (one with each dealer). 
They stated that this scenario can reduce the efficiency with which bank and nonbank dealers and end-users are able to manage their counterparty credit risk and can lead to higher counterparty credit risk or higher collateral costs and liquidity risk. Because more end-users would have been affected under the original relative to the amended section 716, more swap portfolios could have been split, and the losses in efficiencies likely would have been greater and would have led to larger increases in risk and related collateral costs. However, end-users could mitigate their efficiency losses by having their bank swap dealers move their legacy swaps to the nonbank swap dealer affiliates. Under an ISDA Master Agreement, swap transactions between the two counterparties under the agreement become part of the same contract and thus part of the same netting set, which allows the parties to combine, or “net,” obligations owed to and from each other under their transactions into a single obligation. The ability to net their obligations should one party default enables swap counterparties to reduce their counterparty credit risk. For example, if a bank and an end-user have two swaps and the end-user defaults, the obligations of the parties are terminated and the marked-to-market values of the swaps are netted into a single sum owed by, or owed to, the bank. If the marked-to-market value of one swap is positive $100 and the marked-to-market value of the other swap is negative $80, then the counterparty credit risk exposures are as follows. Under an ISDA Master Agreement, the two values are netted against each other, resulting in a single obligation of $20 that the end-user owes to the bank. As a result, the bank has a $20 credit exposure to the end-user, and the end-user has no credit exposure to the bank. The bank would have a $20 claim against the end-user. 
Without an ISDA Master Agreement, the bank and the end-user are not able to net the marked-to-market values of their swaps. As a result, the bank’s credit exposure to its end-user is $100, and the end-user’s credit exposure to the bank is $80. In the event of an end-user default, the bank would be obligated to pay the end-user the $80 and would have a $100 claim against the end-user. Because of section 716, end-users may split their swap transactions and, in turn, their swap portfolios and netting sets between a BHC’s bank and nonbank swap dealers—reducing the efficiency with which they can manage their counterparty credit risk. Although an ISDA Master Agreement allows a dealer and end-user to bilaterally net their swap obligations between each other, officials from an industry association told us that these agreements generally do not allow a BHC’s bank and nonbank dealers to multilaterally net their obligations with the same end-user. As shown in figure 6, by splitting an end-user’s netting set between a BHC’s two dealers, section 716 can reduce the ability of the counterparties to net their obligations to reduce their counterparty credit risk. Our analysis indicates that the losses in netting efficiencies would likely have been larger under the original section 716, primarily because the original provision would have affected a greater number of end-users and their ISDA agreements. Bank-provided examples indicate that the original section 716 could have had a large effect on counterparty credit risk for end-users that hold both swaps covered and not covered by the provision. For example, one of a bank’s corporate clients would experience a 22 percent increase in its counterparty credit risk exposure if it split its foreign exchange derivatives (not covered by the original provision) and commodity derivatives (covered by the provision) into two netting sets. 
Similarly, the counterparty credit risk exposure of one of a bank’s commercial clients would increase from $0 to $5 million if the client’s interest rate and foreign exchange derivatives were split from its commodity derivatives. Finally, a bank estimated that its counterparty credit risk to a hedge fund would increase by more than 100 percent if the hedge fund split its interest rate and foreign exchange derivatives and equity and credit derivatives into two netting sets. As a market practice, banks and other swap dealers have required certain of their counterparties to post collateral (such as cash or securities) to cover the amount owed on their swap exposures to mitigate counterparty credit risk. Moreover, as discussed in more detail later in this report, pursuant to the Dodd-Frank Act, prudential regulators have imposed margin requirements on noncleared swaps that generally require the counterparty that originates the counterparty credit risk exposure to post collateral to the other party commensurate to the risk. The party that collects the collateral can then use it to absorb losses if the counterparty were to default on the swap. Before section 716 was enacted, if collateral agreements that called for netting of collateral were in place in the example shown in figure 6, discussed previously, then the client would post $20 in collateral with the bank. After section 716, the client would post $100 in collateral with the bank, and the nonbank dealer would post $80 in collateral. Although the additional collateral mitigates the increase in counterparty credit risk for one party, it also increases costs and liquidity risk for the party posting the collateral. The prudential regulators’ OTC swap collateral requirements generally require banks to post and collect collateral to and from other swap dealers and financial end-users, but not commercial end-users. 
Consequently, both banks and financial end-users likely experienced and would have experienced higher collateral costs under the amended and original section 716 to the extent that the provision reduced or would have reduced netting efficiencies. In contrast, commercial end-users—while they may have posed increased credit risks to banks under the original section 716 due to losses in netting efficiencies—would not necessarily have had to post collateral accordingly. Lastly, swap end-users theoretically could preserve netting efficiencies to a greater extent if they moved all of their swaps under the same netting set to the nonbank affiliate. This action likely would involve moving not only section 716 covered swaps but also all other swaps—such as legacy swaps (i.e., section 716 covered swaps entered into before the effective date of the statute) or interest rate and foreign exchange swaps—to the nonbank dealer. Such action would help preserve a larger part or all of an end-user’s netting set and, thus, the ability to net and not incur additional collateral requirements. Of the four BHCs affected by the amended section 716, two told us that none of their clients asked to move any of the legacy structured finance swaps to the nonbank affiliates, and two told us that some of their clients asked to move their legacy swaps to the nonbank affiliates. However, under the original section 716, some clients likely would have requested their banks to transfer their legacy commodity, equity, or noncleared credit default swaps or even some interest rate or foreign exchange swaps to the nonbank affiliates to preserve netting benefits. A number of banks could not determine precisely how many clients would have done this, or to what extent, partly because the decision is client-driven and made on a facts-and-circumstances basis. 
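The netting and collateral arithmetic in the $100/$80 example discussed above (and shown in figure 6) can be sketched in a few lines of Python. The dollar values come from the report's own illustration; the function names are ours, and the sketch assumes each net obligation is fully collateralized by the party that owes it:

```python
def exposures(mtm_values):
    """Counterparty credit exposures for one netting set, given marked-to-market
    swap values from the dealer's perspective. Returns a pair:
    (dealer's exposure to the end-user, end-user's exposure to the dealer)."""
    net = sum(mtm_values)
    return max(net, 0), max(-net, 0)

def gross_exposures(mtm_values):
    """Exposures without a master netting agreement: positive and negative
    marked-to-market values cannot offset each other."""
    return (sum(v for v in mtm_values if v > 0),
            sum(-v for v in mtm_values if v < 0))

def collateral_posted(netting_sets):
    """Total collateral posted across all netting sets, assuming whichever
    party owes the net obligation in each set fully collateralizes it."""
    return sum(abs(sum(s)) for s in netting_sets)

swaps = [100, -80]                        # marked-to-market values from the example
print(exposures(swaps))                   # (20, 0): one $20 obligation owed to the bank
print(gross_exposures(swaps))             # (100, 80): no netting benefit
print(collateral_posted([swaps]))         # 20: one combined netting set
print(collateral_posted([[100], [-80]]))  # 180: netting set split by section 716
```

The last two lines mirror the report's collateral figures: with a single netting set the client posts $20, whereas after the netting set is split between the bank and nonbank dealers, $100 and $80 are posted separately.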
Through its restrictions on banks engaging in certain commodity, equity, or noncleared credit default swap activity, the original section 716 would have required 11 U.S. bank swap dealers to cease such activity and thus would have reduced the possibility for such swaps to contribute to these banks’ potential failure. At the same time, this potential benefit likely would have resulted in costs for their BHCs and swap end-users, as discussed earlier. With the amendment to section 716, the 11 U.S. bank swap dealers have been allowed to continue to engage in swap activity, except for certain structured finance swaps, and take on the related risk exposures. However, the 11 banks are required by the Dodd-Frank Act and other regulations to have certain levels of financial resources to support their swap activity and adequate systems to manage the associated risks. Consistent with such regulatory requirements, our analyses indicate that the 11 U.S. banks that would have been affected by the original section 716 held financial resources needed to support their swap-related credit, liquidity, and market risk exposures as of September 30, 2016. If the banks continue to hold such levels of financial resources and maintain adequate risk management systems, as required by their regulators and certain Dodd-Frank Act reforms and related regulations, we believe that losses stemming solely from swaps activity likely can be absorbed by the banks without causing them serious financial distress. However, it is important to note that, as illustrated by Lehman’s failure, derivatives can exacerbate a firm’s financial distress caused by other losses. Although the swap activity that banks continue to engage in as a result of the amendment of section 716 poses some degree of risk (which we discuss in detail in the next section), other Dodd-Frank Act requirements can help banks mitigate this risk. 
Besides section 716, other Dodd-Frank Act provisions seek to reduce BHCs’ probability of failure by subjecting them, including their banks, to enhanced prudential requirements and to heightened supervision. Since the 1980s, banks have been permitted to engage in various swap and other derivative activities but have been required to have adequate management and measurement systems and controls to conduct the activities in a safe and sound manner, as previously discussed. Banks also have been required to hold certain levels of capital—which acts as a cushion to absorb unexpected losses—to support their derivatives-related risks. More recently, banks have also been subject to the Dodd-Frank Act’s enhanced prudential requirements that are designed, in part, to better ensure that they hold sufficient resources to support their swap activity and maintain risk management and other systems to do so in a safe and sound manner. A number of the Dodd-Frank Act’s prudential and other reforms required the prudential regulators to issue regulations or take steps to help mitigate risks that banks face due to their derivatives activities, such as counterparty credit, liquidity, and market risks, including the following examples (for a more comprehensive discussion of each regulation, see app. III):

Capital and leverage requirements. Prudential regulators revised their capital rules, in part to require banks to hold more capital against their derivative credit exposures and, thus, provide a larger cushion to absorb losses from such instruments, including derivatives trading losses and losses from counterparty defaults. Thus, in our view, these requirements help mitigate counterparty credit and market risks.

Margin rules. Prudential regulators adopted new margin rules to require swap dealers of noncleared swaps to collect or post collateral (e.g., cash or securities) from or to certain counterparties to help protect each other against losses, including from counterparty default. 
The collateral that a bank collects from a swap counterparty provides an additional cushion (before using the bank’s own capital) to absorb derivative losses from swaps with that counterparty. Swap margin requirements are more targeted and dynamic than capital requirements, reflecting changes in the risk of a specific swap counterparty’s portfolio. Thus, in our view, margin rules help banks mitigate swap counterparty credit risk. However, as discussed earlier, margin requirements can increase liquidity risk for swap counterparties.

Single counterparty credit limit for BHCs. The Federal Reserve proposed regulations to limit the aggregate net credit exposure of a BHC with total consolidated assets of $50 billion or more to a single counterparty. These BHCs would be subject to increasingly stringent credit exposure limits. Because the proposal would limit a BHC’s combined exposures to a single counterparty including from swaps and other derivatives, we view the requirement as helping to limit swap counterparty credit risk.

Liquidity requirements. Prudential regulators have adopted or proposed rules to impose minimum liquidity requirements, and the Federal Reserve conducts supervisory liquidity stress tests on BHCs to help ensure that they have or can raise the funds needed to meet their near-term obligations, including from derivatives. Thus, in our view, liquidity requirements help to mitigate liquidity risk faced by banks because of their swap obligations.

Capital planning and stress testing. The Federal Reserve also established supervisory stress test requirements for certain BHCs. These tests generate forward-looking information about a BHC’s capital adequacy under hypothetical scenarios that, among other things, impose market losses, including from derivatives trading. The Federal Reserve uses these stress tests as part of quantitative and qualitative assessments of BHCs’ capital adequacy and capital planning processes. 
Consequently, in our view, capital planning and stress test requirements can help banks mitigate market and counterparty credit risks.

Volcker Rule. The prudential regulators adopted regulations to implement section 619 of the Dodd-Frank Act (also known as the Volcker Rule), which, among other things, allows BHCs and their subsidiaries to engage in swap activity and use swaps to hedge risks, subject to certain restrictions and requirements. Thus, in our view, the Volcker Rule generally seeks to limit the amount of market risk to which swap dealers can be exposed.

Figure 7 highlights these and other Dodd-Frank Act requirements that help mitigate the counterparty credit, liquidity, and market exposures that banks face due to their derivatives activities. Based on our analysis, all 15 section 716 covered bank swap dealers or their BHCs are subject to the requirements identified in figure 7’s lighter boxes. In addition, larger, more complex BHCs are subject to additional capital, leverage, or other requirements that may constrain a BHC’s bank from entering into new swaps, for example, if such activity would cause the BHC’s capital to fall below required levels. Based on our analysis, of the 15 covered banks or BHCs, 11 larger, more complex BHCs and their banks are subject to some or all of the additional requirements identified in figure 7’s darker boxes. These 11 are known as Advanced Approaches BHCs under the risk-based capital rules, and 8 of the 11 are BHCs that the Federal Reserve has identified as global systemically important BHCs (GSIB).

Due to the amendment of section 716, 11 U.S. bank swap dealers that generally would have been prohibited from receiving federal assistance or required to stop engaging in commodity, equity, or noncleared credit default swap activity continued such swap activity, and the related exposures remained within the banks. Our analyses indicate the 11 U.S. 
banks that would have been affected by the original section 716 held financial resources needed to support their swap-related credit, liquidity, and market risk exposures as of September 30, 2016. Our results are consistent with the goals of the Dodd-Frank Act’s prudential and other requirements designed to mitigate the risks banks face from their swap activity and to reduce their probability of failure. If banks continue to hold financial resources and maintain adequate risk management systems, as required by their regulators and certain Dodd-Frank Act reforms and regulations, losses stemming solely from the swap activity likely can be absorbed by the banks without causing them serious financial distress. However, as previously stated, it is important to note that derivatives can exacerbate a firm’s financial distress caused by other losses as illustrated by Lehman’s failure. For the 11 U.S. banks, our analyses indicate that the banks held the capital needed to support counterparty credit exposures (accounting for netting but not collateral) from their equity, commodity, or credit derivatives as of September 30, 2016. Our analyses also show that the fair value of the collateral held by banks in relation to their OTC trading derivative counterparties was, on average, sufficient to cover at least 68 percent of net current credit exposures of their derivatives as of that date. These results indicate that as of September 30, 2016—about a year after most banks would have had to comply with the original section 716—the banks had capital to absorb losses from such derivatives, and that such losses likely would have been mitigated to a significant degree with the collateral received from bank OTC derivative counterparties. We used banks’ estimated or reported net trading derivative assets and liabilities as our measure of the banks’ current counterparty credit risk exposure and compared the values to the banks’ capital. 
For the four largest bank swap dealers, we estimate that their net credit exposures from their equity, commodity, and credit derivatives (not accounting for collateral) constituted from 1 percent to 10 percent of their total capital as of September 30, 2016. As discussed in appendix IV, we estimated the banks’ net credit exposures because the banks do not publicly report such data by type of derivative, and our methodology has important limitations. In addition, we estimate that the four largest bank swap dealers on average collectively held collateral against 99 percent of their collective net current credit OTC derivatives exposures as of September 30, 2016. However, this percentage does not mean that almost all current credit exposure would be mitigated with collateral, as some counterparties over-collateralize and others under-collateralize exposures, and collateral is not fungible across swap counterparties. For the other seven bank swap dealers, we estimate that their actual net current credit exposures (not accounting for collateral) of all their trading derivatives—including swaps not covered under the original section 716—comprised from 4 percent to 16 percent of their total capital as of September 30, 2016. We could not reliably estimate the net trading derivative assets of the seven banks’ equity, commodity, and credit derivatives. As a result, we used the actual total net trading derivative assets, which include interest rate and foreign exchange derivatives that were not covered by the original section 716 and typically comprise the majority of the banks’ trading derivatives. In addition, we estimate that these banks on average collectively held collateral against 68 percent of their collective net current credit OTC derivatives as of September 30, 2016. 
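The caveat noted above—that a high aggregate collateral coverage ratio does not imply the same share of exposure is actually mitigated, because collateral is not fungible across counterparties—can be illustrated with a small sketch. The numbers and function names below are ours, not the banks' actual figures:

```python
def aggregate_coverage(exposures, collateral):
    """Aggregate coverage ratio: total collateral held divided by total
    net current credit exposure, summed across counterparties."""
    return sum(collateral) / sum(exposures)

def mitigated_share(exposures, collateral):
    """Share of exposure effectively mitigated when collateral is NOT
    fungible: excess collateral from one counterparty cannot cover
    another counterparty's shortfall."""
    covered = sum(min(c, e) for c, e in zip(collateral, exposures))
    return covered / sum(exposures)

# Hypothetical two-counterparty book: one over-collateralized, one under-collateralized.
exposures = [10.0, 5.0]    # net current credit exposure per counterparty
collateral = [14.0, 1.0]   # fair value of collateral held per counterparty

print(aggregate_coverage(exposures, collateral))       # 1.0: 100 percent in aggregate
print(round(mitigated_share(exposures, collateral), 2))  # 0.73: only 73 percent mitigated
```

In this illustration the aggregate ratio reports full coverage even though more than a quarter of the exposure would remain unmitigated if the under-collateralized counterparty defaulted.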
Again, this percentage does not mean that 68 percent of their current credit exposure would be mitigated with collateral, as some counterparties over-collateralize and others under-collateralize exposures, and collateral is not fungible across swap counterparties. For the 11 U.S. banks, our analyses indicate the banks held the high-quality liquid assets needed to support their equity, commodity, or credit derivatives’ payment and collateral obligations as of September 30, 2016. Derivative liabilities expose banks to liquidity risk, in part because the derivative contracts typically require the banks to make regular payments as agreed in the contracts and post collateral to counterparties as the fair value of the contracts moves in the counterparties’ favor. To assess liquidity risk, we used estimated or reported net derivative liabilities for banks’ trading derivatives as our measure of the banks’ derivatives liquidity risk, and we compared those values against the banks’ high-quality liquid assets. Because banks, like their counterparties, post collateral for some of their derivative liabilities and because our analyses do not account for such collateral, our results likely overestimate the actual derivatives-related liquidity risk exposures. For the four largest bank swap dealers, we estimate that the net derivative liabilities for their equity, commodity, and credit derivatives (not accounting for posted collateral) constituted from less than 1 percent to about 5 percent of the banks’ high-quality liquid assets as of September 30, 2016. We used the same methodology described previously for net derivative assets to estimate the net derivative liabilities for the banks’ equity, commodity, and credit derivatives. 
For the other seven bank swap dealers, we estimate that the actual total net trading derivative liabilities (including swaps not covered under the original section 716 but not accounting for collateral) constituted from about 1 percent to about 9 percent of their high-quality liquid assets as of September 30, 2016. As discussed earlier, due to data limitations, we could not reliably estimate the net derivative liabilities for the banks’ equity, commodity, and credit derivatives. As a result, we used the actual total net trading derivative liabilities, which included interest rate and foreign exchange derivatives that were not covered by the original section 716 and typically comprise the majority of the banks’ trading derivatives.

Our analyses of the 11 banks’ quarterly mark-to-market losses from trading equity, commodity, and credit derivatives between the first quarter of 2007 and the third quarter of 2016 show that the banks held the capital needed to support related trading losses. These results provide a backward-looking measure of the market risk associated with the trading of such swaps. For the four largest bank swap dealers, we estimate that quarterly net trading losses from their equity, commodity, and credit derivatives ranged from 5 percent to 7.6 percent of their total capital between the first quarter of 2007 and the third quarter of 2016. For six of the other seven bank swap dealers, we estimate that their quarterly net trading losses from their equity, commodity, and credit derivatives ranged from 0 percent to about 2 percent of their total capital over the same period. For the other bank, its largest loss during a quarter was around 14 percent of its capital.

Value-at-risk (VaR), which is a forward-looking measure of market risk generally posed by derivatives and other trading activities, suggests that the 11 banks have the capital needed to support expected losses from derivatives under regular market conditions.
Banks control market risk by establishing limits against potential losses using VaR models. The models use historical data to quantify the potential losses from adverse market moves in normal markets. Based on our analyses, the reported VaR measures for the BHCs of the four largest bank swap dealers indicate that the market risk from each BHC’s trading activities, which include the BHC’s section 716 bank’s derivatives trading activities, is a small percentage of each of the four banks’ capital: for example, ranging from 0.02 percent to 0.22 percent of their capital in the third quarter of 2016.

In addition, based on results from the Federal Reserve’s supervisory stress tests, the BHCs of the 11 banks had sufficient capital to cover trading losses, including from their banks’ trading derivatives, under stressed market conditions. The BHCs of the 11 bank swap dealers are subject to the Federal Reserve’s stress tests, which evaluate the BHCs’ revenues, losses, and ultimately their capital levels under baseline, adverse, and severely adverse scenarios. In its 2015 and 2016 reviews, the Federal Reserve did not object to any of the capital plans, including the supervisory stress test results, of the 11 BHCs. For example, under the stress tests, all 11 BHCs were able to maintain at least minimum regulatory capital requirements under stressed scenarios.

Section 716 seeks to reduce the risk of the federal government having to provide assistance backed by taxpayers to cover losses of a failed bank, but other Dodd-Frank Act provisions also mitigate this risk. While the Dodd-Frank Act’s prudential reforms discussed earlier seek to lower the probability of failure of large BHCs or their banks, the act’s resolution reforms seek to reduce the risk that the failure of a large BHC would adversely affect U.S. financial stability.
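Returning to the VaR measures discussed above, the historical-simulation approach that such models use can be sketched in a few lines of Python. The profit-and-loss series, dollar scale, and confidence level below are illustrative assumptions, not figures from any bank’s model.

```python
import numpy as np

def historical_var(daily_pnl, confidence=0.99):
    """One-day value-at-risk via historical simulation: the loss level
    that historical P&L exceeded only (1 - confidence) of the time.
    Reported as a positive loss amount."""
    return -np.percentile(daily_pnl, 100 * (1 - confidence))

# Illustrative: 500 days of hypothetical trading P&L (in $ millions).
rng = np.random.default_rng(0)
pnl = rng.normal(loc=0.0, scale=5.0, size=500)
var_99 = historical_var(pnl, confidence=0.99)
```

A bank would compare a measure like `var_99` against its trading limits and capital; the regulatory VaR models referenced in the text are substantially more elaborate than this sketch.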
Federal banking regulators and large BHCs are developing resolution strategies that seek to resolve a large BHC, if it were to fail, in an orderly manner and without federal assistance. For example, under the resolution strategies being developed by the BHCs with the four largest bank swap dealers, only the BHC would enter bankruptcy; its bank and other subsidiaries would remain solvent. These strategies, if successful, could help enable the BHC and its bank swap dealer to unwind or sell their swaps in an orderly manner and avoid value destruction.

A bank’s swaps may not always result in losses that reduce its resolution value because swaps and other derivatives can be either assets or liabilities. In resolution, a failed bank’s trading derivatives portfolio could be (1) a net asset that increases the bank’s resolution value or (2) a net liability that decreases the failed bank’s resolution value. Because banks hedge market risk, their trading derivative assets and liabilities typically are close to each other in value. As discussed earlier, the Volcker Rule also serves to help minimize the market risk to which banks can be exposed through their swaps activity, in part by limiting the extent to which the value of their trading derivative assets can differ from the value of their trading derivative liabilities. Consequently, if a bank can wind down its trading derivatives portfolio in an orderly manner, it could avoid any value destruction. However, as illustrated by the failure of Lehman, the legal right of a bank’s swaps counterparties to terminate their swaps early if the bank or its BHC were to fail can result in the disorderly unwinding of the bank’s swaps and cause the bank to suffer avoidable losses on its swaps that decrease the bank’s resolution value.
We found that prudential regulators are implementing the Dodd-Frank Act’s resolution reforms that seek to help ensure that the largest BHCs, if they were to fail, can be resolved in an orderly manner and avoid asset fire sales and value destruction. (See app. V for a more detailed discussion of Dodd-Frank Act resolution reforms in relation to BHCs with a bank swap dealer subsidiary.) Before the act, the government generally had two options to address the potential failure of a systemically important BHC or other nonbank financial firm: (1) allow it to enter bankruptcy or (2) provide it with aid. The act preserved bankruptcy as the preferred option and required the large BHCs to develop resolution plans describing how they can be resolved under the U.S. Bankruptcy Code in a rapid and orderly manner. In the public sections of their resolution plans, the BHCs of the four largest bank swap dealers generally have stated they have adopted the Single Point of Entry (SPOE) strategy as their preferred resolution strategy under the U.S. Bankruptcy Code. Under the SPOE strategy, only the top-tier BHC would enter bankruptcy. The BHC would use its financial resources, as needed, to support and recapitalize its operating subsidiaries to keep them solvent and preserve their going-concern value. For example, a loss that caused a BHC to fail would be passed up from the subsidiary that incurred the loss and absorbed by the BHC’s equity holders and unsecured creditors, which would have the effect of recapitalizing the BHC’s subsidiary. By keeping their bank subsidiaries solvent in the event of their failure, the BHCs could enable their banks to wind down or sell their swaps in an orderly manner and preserve their value. 
If one of the BHCs were not able to keep its bank solvent under its resolution strategy, FDIC would resolve the bank separately under the Federal Deposit Insurance Act (outside of the BHC’s resolution strategy) and could transfer the bank’s swaps to a solvent company to preserve their value. We found that the four U.S. BHCs, along with other resolution plan filers, have faced a number of challenges and obstacles in developing their resolution plans. The four BHCs are continuing to revise their plans to address such challenges and obstacles, and regulators have proposed or finalized regulations to help improve the ability of the BHCs to execute their plans. According to the Federal Reserve and FDIC, resolution planning cannot guarantee that a BHC’s resolution would be executed smoothly, but the preparations can help ensure that the BHC could be resolved under bankruptcy without government support or imperiling the broader financial system. In 2016, we concluded that whether the plans of the largest BHCs actually would facilitate their rapid and orderly resolution under the U.S. Bankruptcy Code is uncertain, in part because none has yet used its plan to go through bankruptcy.

In cases where resolution of a large BHC under the U.S. Bankruptcy Code may result in serious adverse effects on U.S. financial stability, the Dodd-Frank Act’s Orderly Liquidation Authority serves as the backstop alternative. Orderly Liquidation Authority gives FDIC the authority, subject to certain constraints, to resolve large financial companies outside of the bankruptcy process. Since 2013, FDIC has been developing a SPOE strategy to implement this authority. FDIC would be appointed receiver of the top-tier holding company and establish a bridge financial company into which it would transfer the holding company’s assets. The bridge company would continue to provide the holding company’s functions, and the company’s subsidiaries would remain operational.
As its SPOE strategy has evolved, FDIC has focused on developing multiple options for liquidating the subsidiaries, such as by winding down or selling subsidiaries or selling a subsidiary’s assets. Importantly, FDIC is authorized to transfer swaps and other qualified financial contracts to the bridge company or another solvent financial company. According to FDIC, the agency intends to maximize the use of private funding in a systemic resolution, and the law expressly prohibits taxpayer losses from the use of Orderly Liquidation Authority.

We provided a draft of this report to CFTC, the Federal Reserve, FDIC, OCC, and SEC for review and comment. CFTC, the Federal Reserve, FDIC, OCC, and SEC provided technical comments that we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees and members, CFTC, the Federal Reserve, FDIC, OCC, and SEC. This report will also be available at no charge on our website at http://www.gao.gov. Should you or your staff have questions concerning this report, please contact me at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VII.

We examined the effects of the amended and original versions of section 716 of the Dodd-Frank Wall Street Reform and Consumer Protection Act (Dodd-Frank Act). Specifically, we examined (1) the number of U.S. banks and the value of their swaps affected under the amended section 716 and that would have been affected under the original section 716; (2) the actual and potential costs or negative effects of the amended and original section 716 for U.S. banks and swap end-users; (3) U.S.
banks’ risks associated with swap activities that continue to be carried on by the banks due to the section 716 amendment and mitigating factors; and (4) the effects of section 716 and other Dodd-Frank Act requirements on risk to taxpayers in the event of a bank failure.

To examine the number of U.S. banks and the value of their swaps affected under the amended section 716 and that would have been affected under the original section 716, we reviewed both versions of the provision; analyses of section 716 prepared by the federal bank regulators (the Board of Governors of the Federal Reserve System (Federal Reserve), Federal Deposit Insurance Corporation (FDIC), and Office of the Comptroller of the Currency (OCC)) and four large banks; regulations issued by the Commodity Futures Trading Commission (CFTC) and Securities and Exchange Commission (SEC) on the registration of swap and security-based swap dealers and major swap and security-based swap participants; the list of entities provisionally registered as swap dealers with CFTC; and reports, studies, and other materials on section 716, swaps, or asset-backed securities issued by GAO, law firms, market participants, and others. We also interviewed federal regulators, including the Federal Reserve, FDIC, OCC, CFTC, and SEC; an industry association; and the 15 U.S. banks that were provisionally registered as swap dealers with CFTC and thus were covered entities under both versions of section 716. According to 4 of the 15 U.S. banks registered as swap dealers, they were engaged in structured finance swaps activity and thus affected by the amended section 716. In comparison, according to 11 of the 15 U.S. banks registered as swap dealers (including the 4 banks that were affected by the amended section 716), they were engaged in equity, commodity, or noncleared credit default swaps activities and thus would have been affected by the original section 716 had it not been amended.
To estimate the notional amount of swaps affected by the amended section 716—that is, the swap activity in which the four affected U.S. banks stopped engaging due to the amended section 716—we used data from SwapsInfo.com, a website managed by the International Swaps and Derivatives Association, Inc. (ISDA). The site uses publicly disseminated data from swap data repositories to which registered swaps dealers in the United States are required to provide such information. ISDA’s SwapsInfo.com captures data on credit default swap transactions, including some of those covered under the amended section 716—those structured finance swaps based on groups or indexes composed primarily of asset-backed securities. However, the data do not include structured finance swaps on single-name asset-backed securities. Based on the data provided by ISDA’s SwapsInfo.com, we calculated the total notional value of new structured finance swap transactions that were executed between July 16, 2015, and September 30, 2016, and reported to U.S. swap data repositories. We used the total notional amount as our estimate of the volume of structured finance swaps affected by the amended section 716 based on the assumption that the nonbank affiliates of the four U.S. banks affected by the amended section 716 were on one side of every new transaction and that no U.S. bank swap dealer entered into a new structured finance swap for hedging or risk management purposes. On one hand, our estimate could overestimate the amount of swaps affected by the amended section 716, in part because some of the transactions may not have involved one of the nonbank affiliates. On the other hand, our estimate could underestimate the amount, in part because it does not include all structured finance swaps entered into during our time period.

To estimate the notional amount of swaps affected by the original section 716—that is, the swap activity in which the 11 affected U.S.
banks would have stopped engaging due to the original section 716 if it had gone into effect—we used data from the Consolidated Reports of Condition and Income (Call Reports) as of September 30, 2016. Specifically, banks report the notional amount of their interest rate, foreign exchange, equity, commodity and other, and credit derivatives in their Call Reports, and the equity, commodity, and credit default swaps covered under the original section 716 are subsets of the derivatives reported in the Call Reports. In that regard, we used the notional amounts reported by the banks for their equity, commodity, and credit derivatives as of September 30, 2016, to estimate the total notional amount of swaps that would have been affected by the original section 716. Our estimate likely overestimates the total notional amount of swaps that would have been affected by the original section 716, because the estimate includes swaps that might not have been required to be pushed out to retain eligibility for federal assistance, such as (1) swaps used for hedging, (2) swaps entered into before affected banks were required to comply with section 716 (i.e., legacy swaps), or (3) cleared credit default swaps.

To examine the costs or negative effects of the amended and original versions of section 716 for U.S. banks and swap end-users, we reviewed and analyzed the 2-year transition applications submitted by banks to the Federal Reserve or OCC; OCC examinations of and guidance provided to banks covering section 716; documents on the ISDA Master Agreement and credit support annex; regulations issued by the Federal Reserve, FDIC, OCC, CFTC, and SEC, including on margin or capital requirements for swap and security-based swap dealers; and reports or other materials addressing the implementation of section 716 or related issues published by consulting firms, credit rating agencies, and law firms.
In addition, we interviewed the 15 section 716 covered banks; 7 swap end-users, judgmentally selected (a nongeneralizable sample) based on their use of swaps covered under the original or amended section 716; 3 credit rating agencies that issued analyses on section 716 or structured finance swaps; and 3 academics whose research focused on the derivatives markets or section 716.

To examine the banks’ risks associated with swap activities that continue to be carried on by the banks due to the section 716 amendment and the effects of section 716 and other Dodd-Frank Act requirements on risk to taxpayers in the event of a bank failure, we reviewed the Dodd-Frank Act’s prudential and resolution reforms and related regulations, including on risk-based and leverage capital requirements, liquidity requirements, total loss-absorbing capacity, global systemically important bank holding companies, the Volcker rule, orderly liquidation authority, and resolution plan requirements; joint feedback and guidance provided by the Federal Reserve and FDIC to bank holding companies on their resolution plans; the Federal Reserve’s and OCC’s bank examination manuals and related derivatives guidance; publicly available regulatory filings submitted by U.S. banks registered as swap dealers or their parent holding companies, including SEC annual or quarterly filings and resolution plans; and industry, academic, and other studies or reports examining the role of derivatives in the recent financial crisis and ways to mitigate risks posed by derivatives under the U.S. Bankruptcy Code.

To analyze credit, liquidity, and market risks associated with swaps covered under the original section 716 for the 11 affected U.S. banks, we primarily used Call Report data, including the net positive and negative fair values of their trading derivatives, the fair value of collateral collected for their trading derivatives, quarterly net gains or losses from their trading derivatives, and total risk-based capital.
For more information on our methodology, our results, and the limitations of our analysis, see appendix IV. In addition, we interviewed federal banking regulators, banks registered as swap dealers, and others mentioned above about the risks related to the amended and original section 716. As discussed earlier, we used data from the Call Reports, SEC annual and quarterly filings, and SwapsInfo.com to estimate the total notional value of swaps affected by the amended and original section 716 and to measure and assess the credit, liquidity, and market risks raised by swaps covered under the original section 716. We assessed the reliability of the data by interviewing knowledgeable officials, reviewing relevant documentation, or testing the data for missing or incorrect values. We determined the data were sufficiently reliable for our reporting objectives.

We conducted this performance audit from March 2016 to August 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

In their Consolidated Reports of Condition and Income (Call Reports), banks report information about their derivatives, including their notional amounts, gross and net derivative assets and liabilities (also called derivative receivables and payables, or positive and negative fair values of derivatives), and amounts of associated collateral. Such publicly available information can be used to assess how a bank’s derivatives can affect its risk exposures. (See app. IV for estimates of certain derivatives risks using swap dealer banks’ public financial statements.)
In this appendix we analyze the relationship between these reported derivatives measures and derivatives risks. We explain why derivative notional amounts do not generally represent derivatives risks, how the gross and net values of derivative assets and liabilities can help approximate certain risks associated with derivatives, and how collateral received or paid can further reduce such risks. Swaps and other derivatives can be assets or liabilities. As explained in the sections that follow, a bank’s counterparty credit risk associated with its derivatives can be estimated with varying levels of precision by calculating the value of its (1) gross derivative assets, (2) derivative assets after accounting for netting, and (3) net derivative assets after accounting for the collateral collected from counterparties on those derivatives. A bank’s liquidity risk associated with its derivatives can be estimated by calculating the value of its (1) gross derivative liabilities, (2) derivative liabilities after accounting for netting, and (3) net derivative liabilities after accounting for the collateral posted to counterparties for those derivatives.

Because the dollar amounts associated with these derivatives measures can vary significantly for a given bank, it is important to understand how the measures are related to counterparty credit risk and liquidity risk in order to accurately estimate such risks. For example, the derivatives held by four U.S. banks account for the vast majority of derivatives held by U.S. banks. As of September 30, 2016, the different reported derivatives measures for these four large bank swap dealers were as follows:

The notional amounts of their derivatives ranged from around $22 trillion to $51 trillion.

Their gross derivative assets ranged from around $395 billion to $1.1 trillion.

Their net derivative assets ranged from around $12 billion to $65 billion, representing 1 percent to 7 percent of their gross derivative assets.

Their gross derivative liabilities ranged from around $394 billion to $1.1 trillion.

Their net derivative liabilities ranged from around $5 billion to $53 billion, representing 1 percent to 5 percent of their gross derivative liabilities.

The value of the collateral the four banks held against their derivative assets (for over-the-counter (OTC) derivatives) ranged from 87 percent to 110 percent of their net derivative assets.

However, these results overestimate the extent to which the collateral would mitigate credit risk, as some counterparties over-collateralize and others under-collateralize exposures, and collateral is not fungible across swap counterparties. Banks typically require hedge funds to post an amount of collateral greater than the value they are owed (i.e., greater than the net asset amount of the derivatives with that counterparty), but banks may not require commercial firms to post collateral. While a bank’s total held collateral may nearly equal the total value of its net derivative assets, the bank still may have uncollateralized derivative assets from swaps with commercial firms.

Notional amounts alone do not provide useful measures of a bank’s credit, liquidity, or market risks associated with its derivatives. The notional amount of a derivative contract is a reference amount that is used with the contract’s other terms to calculate payments. Notional amounts generally are measured in dollar amounts but can reference other amounts, such as the number of currency units, shares, bushels, or pounds. Counterparties generally do not exchange the notional amounts except in certain circumstances for certain types of credit derivatives. The examples that follow show the role that notional amounts play in an interest rate derivative contract and a credit default swap contract. In both examples, the notional amount is a dollar amount. In the interest rate derivative example, the notional amount is not exchanged.
In the credit default swap example, the notional amount is exchanged.

Example 1—Interest Rate Swap. Company C wants to hedge the interest rate risk associated with a security paying a floating rate and enters into a 1-year interest rate swap with Bank B. Under the swap, Bank B agrees to make quarterly fixed payments at an annual rate of 5 percent multiplied by $10 million to Company C, and Company C agrees to make quarterly floating payments of 3-month London Interbank Offered Rate (LIBOR) multiplied by $10 million to Bank B. The swap’s notional amount is $10 million. Table 2 shows the quarterly amounts that Bank B owes Company C, the quarterly amounts that Company C owes Bank B, and the net cash flows between the two counterparties. Bank B and Company C do not exchange the notional amount.

Example 2—Credit Default Swap. Insurer I invested $10 million in Company C’s bonds and entered into a credit default swap with Bank B to protect itself against a loss if Company C defaults on its debt. Under the swap, Insurer I agrees to make quarterly payments at an annual rate of 5 percent of $10 million to Bank B, as long as Company C (a third party that is not a party to this contract) does not default on its bonds, and Bank B agrees to pay Insurer I $10 million in exchange for Company C’s bonds if Company C defaults. The contract terminates in 5 years, or earlier if Company C defaults. The swap’s notional amount is $10 million. Table 3 shows that Insurer I made quarterly payments to Bank B for 6 quarters until Company C defaulted. In the seventh quarter, Bank B pays Insurer I $10 million, and Insurer I delivers Company C’s bonds to the bank. Although Bank B paid Insurer I the notional amount, it received $750,000 in quarterly payments and Company C’s bonds, which could have some recovery value.

Gross and net values of derivative assets and liabilities can help approximate certain risks associated with derivatives. As mentioned earlier, swaps and other derivatives can be assets or liabilities.
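The arithmetic behind examples 1 and 2 can be checked with a short Python sketch. The notional amount and 5 percent annual rate come from the examples; the quarterly LIBOR fixings are hypothetical values for illustration (the values in table 2 are not reproduced here).

```python
NOTIONAL = 10_000_000   # $10 million notional; never exchanged in Example 1
FIXED_RATE_BPS = 500    # 5 percent annual rate, expressed in basis points

def quarterly_payment(annual_rate_bps, notional):
    """Quarterly payment for an annual rate applied to a notional amount.
    Integer basis points keep the arithmetic exact."""
    return notional * annual_rate_bps // (4 * 10_000)

# Example 1 (interest rate swap): hypothetical 3-month LIBOR fixings for
# the four quarters -- illustrative assumptions, not the table 2 values.
libor_fixings_bps = [450, 500, 550, 600]

for libor_bps in libor_fixings_bps:
    fixed_leg = quarterly_payment(FIXED_RATE_BPS, NOTIONAL)  # Bank B owes Company C
    floating_leg = quarterly_payment(libor_bps, NOTIONAL)    # Company C owes Bank B
    net_to_company_c = fixed_leg - floating_leg              # only the net changes hands

# Example 2 (credit default swap): Insurer I pays quarterly premiums at a
# 5 percent annual rate on the notional for 6 quarters before the default.
premiums_received_by_bank = 6 * quarterly_payment(FIXED_RATE_BPS, NOTIONAL)
```

Each fixed-leg payment works out to $125,000 per quarter, and the six CDS premium payments sum to the $750,000 that Bank B received before paying out the notional.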
To determine whether a derivative represents an asset or a liability, a bank estimates the fair value of the contract. The fair value of a derivative contract is the price at which the contract would be transferred in an orderly transaction—one that occurs under sufficient time and exposure to the market to allow for usual or customary marketing activities to unfold—between market participants in its principal (or most advantageous) market. Generally, bank swap dealers recalculate the fair market value of their derivatives contracts based on current market prices (called marking to market) on a daily basis.

A bank’s total gross derivative assets and liabilities are an initial approximation of derivatives risks as follows:

The total for all contracts with positive fair values to the bank is the gross value of its derivative assets. Counterparty credit risk is the potential for financial losses resulting from the failure of a counterparty to perform on an obligation. Thus, a bank’s gross derivative assets—or the gross value of what it is owed on its derivatives—represents an initial measurement of the bank’s counterparty credit exposure associated with its derivatives.

The total for all contracts with negative fair values to the bank is the gross value of its derivative liabilities. Liquidity risk is the risk to an institution’s financial condition from its inability to meet its contractual obligations. Similarly, a bank’s gross derivative liabilities—or the gross value of what it owes on its derivatives—represents a measurement of the bank’s liquidity risk exposure associated with its derivatives.

Accounting for the ability to net obligations with a derivatives counterparty better approximates risks associated with derivatives.
When a bank has entered into multiple derivative contracts with the same counterparty that are covered by a legally enforceable master netting agreement, the fair values of all of the contracts with that counterparty—both positive and negative—can be combined into a single net positive or negative fair value of all the contracts with that counterparty. That is, the combined fair values of the contracts under an enforceable master netting agreement with a counterparty result in a net asset or a net liability for the bank with respect to that counterparty. This reduces counterparty credit risk and, possibly, liquidity risk because netting can reduce or eliminate exposures to a particular counterparty. For example, table 4 shows Bank B has three outstanding derivatives with Company C under a legally enforceable master netting agreement, allowing the contracts with positive and negative fair values to be combined into a net derivative asset of $845,000. Bank B also has two outstanding derivatives with Insurer I under a legally enforceable master netting agreement, resulting in a net derivative liability of $10,000. For the swaps under the same legally enforceable master netting agreement with Company C, the gross counterparty credit exposures to the company are reduced from $1,070,000 to $845,000. For the swaps under the same legally enforceable master netting agreement with Insurer I, counterparty credit exposures are eliminated by netting. Better measures of counterparty credit risk and liquidity risk would take into account the value of the collateral received and paid by the bank, respectively. As a market practice and more recently as a regulatory requirement, swap dealers and other counterparties have used collateral arrangements to mitigate counterparty credit risk. 
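The netting calculation described above can be expressed as a short Python sketch. The individual contract fair values are illustrative assumptions chosen so that the counterparty totals match those in the text (an $845,000 net asset with Company C, a $10,000 net liability with Insurer I, and $1,070,000 of gross exposure to Company C); table 4’s actual per-contract values are not reproduced here.

```python
from collections import defaultdict

# Fair values of Bank B's outstanding contracts, by counterparty.
# Positive values are assets to the bank; negative values are liabilities.
contracts = [
    ("Company C", 1_000_000),
    ("Company C",    70_000),
    ("Company C",  -225_000),
    ("Insurer I",    50_000),
    ("Insurer I",   -60_000),
]

# Gross credit exposure: sum of positive fair values, ignoring netting.
gross_credit_exposure = sum(v for _, v in contracts if v > 0)

# Under a legally enforceable master netting agreement, all contracts
# with a counterparty collapse into a single net asset or net liability.
net_by_counterparty = defaultdict(int)
for counterparty, fair_value in contracts:
    net_by_counterparty[counterparty] += fair_value

# Only counterparties whose netted position is an asset to the bank
# contribute to credit exposure; net liabilities drop out entirely.
net_credit_exposure = sum(v for v in net_by_counterparty.values() if v > 0)
```

With these assumed values, netting collapses the Company C contracts to an $845,000 net asset and eliminates the exposure to Insurer I, whose netted position is a liability of the bank.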
Under one type of collateral arrangement, both counterparties post collateral (e.g., cash or liquid securities) when they enter a derivative transaction and each counterparty posts additional collateral based on the periodic marking to market of the position. The counterparty whose position has a negative fair value would post collateral with its counterparty. Collateral provides protection to both parties in the event of a default on a transaction of the other party, because the collateral receiver has recourse to the collateral and can thus make good some or all of the loss suffered before having to tap into its own capital to cover losses. The collateral held by a bank helps the bank mitigate its credit risk exposure to the counterparty that provided the collateral. Similarly, the collateral paid by the bank to a counterparty helps mitigate the strain that future swap obligations with that counterparty may pose on the bank. Example 3: Bank B’s Derivatives Portfolio after Accounting for Netting and Collateral. Table 5 shows the total notional amounts and total gross derivative assets and liabilities of Bank B’s derivatives and the effects of netting and collateral on Bank B’s counterparty credit risk exposure. The total notional amount of derivatives contracts with positive fair value is $55 million, and the total gross positive fair value of the contracts (i.e., the bank’s value of its gross derivative assets) is $1.54 million. The total notional amount of derivatives contracts with negative fair value is $45 million, and the total gross negative fair value of the contracts (i.e., the bank’s value of its gross derivative liabilities) is about $795,000. After accounting for netting, gross derivative assets are reduced to $975,000 and gross derivative liabilities are reduced to $430,000. After accounting for collected and posted collateral, the total net derivative assets are $878,000, and total net derivative liabilities are $349,000. 
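The final step of Example 3 can be sketched the same way. The post-netting totals come from the example; the collateral amounts ($97,000 collected and $81,000 posted) are assumptions backed out from the example’s before-and-after figures rather than values reproduced from table 5.

```python
# Bank B's positions after netting (figures from Example 3).
netted_assets = 975_000        # what counterparties owe Bank B, post-netting
netted_liabilities = 430_000   # what Bank B owes counterparties, post-netting

# Assumed collateral amounts implied by the example's before/after totals.
collateral_collected = 97_000  # held by Bank B against its netted assets
collateral_posted = 81_000     # posted by Bank B against its netted liabilities

# Counterparty credit exposure after collateral: only the
# uncollateralized portion of the netted assets remains at risk.
credit_exposure = max(netted_assets - collateral_collected, 0)

# Liquidity exposure after collateral: obligations not already
# covered by collateral the bank has posted.
liquidity_exposure = max(netted_liabilities - collateral_posted, 0)
```

These two results reproduce the example’s totals of $878,000 in net derivative assets and $349,000 in net derivative liabilities.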
As mentioned earlier, these are more accurate measures of counterparty credit risk and liquidity risk for Bank B, because they measure the bank's outstanding risks after taking into account netting and collateral received and paid. In addition to section 716, the Dodd-Frank Wall Street Reform and Consumer Protection Act (Dodd-Frank Act) includes other provisions that serve to limit, reduce, or mitigate risks faced by banks because of their swap or security-based swap (collectively referred to as swaps, unless otherwise noted) activities. Specifically, the Dodd-Frank Act establishes a framework to address the financial stability risks associated with major financial companies. Part of this framework seeks to reduce major financial companies' probability of failure, including from their swap activities, by requiring the Board of Governors of the Federal Reserve System (Federal Reserve) to subject them to enhanced capital, liquidity, and other prudential requirements and to heightened supervision. In addition, the Dodd-Frank Act establishes a new regulatory framework specifically for swaps to reduce risk, increase transparency, and promote market integrity in swaps markets. Under the new framework, banks that deal swaps or security-based swaps in amounts above a specified threshold must register as swap or security-based swap dealers with the Commodity Futures Trading Commission (CFTC) or the Securities and Exchange Commission (SEC), respectively. These bank swap dealers also are subject to margin, capital, and other requirements set by their respective federal prudential banking regulator: the Federal Reserve, the Office of the Comptroller of the Currency (OCC), or the Federal Deposit Insurance Corporation (FDIC). Federal prudential banking regulators established an integrated regulatory capital framework by implementing many aspects of the Basel III regulatory capital reforms and the Dodd-Frank Act's prudential reforms.
The reforms include implementing a number of minimum risk-based capital and leverage requirements and a capital conservation buffer for banking organizations, including U.S. banks and their holding companies (see table 6). In addition, prudential regulators have imposed more stringent capital and leverage requirements, which serve as an additional capital buffer, on larger, more complex firms. These firms include (1) large, internationally active bank holding companies (BHC), also referred to as Advanced Approaches BHCs, and (2) global systemically important BHCs (GSIB). There are 15 U.S. banks that are registered with CFTC as swap dealers and thus are covered by the amended section 716 of the Dodd-Frank Act. As of September 30, 2016, 11 of the banks were subsidiaries of Advanced Approaches BHCs, and of these 11 BHCs, 8 also were GSIBs. A BHC that does not hold capital sufficient to meet or exceed its combined buffer level is subject to restrictions on capital distributions and discretionary bonus payments to executives, which become progressively stricter as its capital level falls deeper into the buffer. The additional capital buffer requirements are the following: Supplementary leverage ratio: Generally, Advanced Approaches BHCs (including GSIBs) and their bank subsidiaries must maintain a supplementary leverage ratio of at least 3 percent on top of the minimum leverage ratio requirement described in table 6. The Advanced Approaches BHCs that are GSIBs also must maintain a leverage buffer of 2 percentage points on top of the 3 percent. Additionally, bank subsidiaries of GSIBs must maintain a supplementary leverage ratio of at least 6 percent to be considered "well capitalized" for purposes of Prompt Corrective Action.
GSIB capital surcharge: The Federal Reserve established criteria for identifying a GSIB based on indicators in broad categories that are correlated with systemic importance, such as size, interconnectedness, cross-jurisdictional activity, substitutability, and complexity. The rule also imposed a risk-based capital surcharge for identified GSIBs based on calculations of risk derived from methods detailed in the rule. When the Federal Reserve issued the final rule on July 20, 2015, it estimated surcharges for the eight GSIBs it identified ranging from 1.0 percent to 4.5 percent of each firm's total risk-weighted assets. Countercyclical capital buffer: Advanced Approaches BHCs and their banks are also subject to additional capital buffer requirements that expand the uniform capital conservation buffer in times of increasing financial vulnerabilities. Market risk capital rule: BHCs and banks with significant trading operations are required to report their market risk-weighted assets and include this amount in the total risk-weighted assets amount used to calculate their capital ratios. In 2015 and 2016, all section 716 banks and their BHCs were market risk firms. Under the capital and leverage ratio requirements (the minimum capital and leverage ratios and the supplementary leverage ratio), a BHC's or bank's weighted or unweighted derivatives exposures will increase the denominator of the ratios and, thus, require the BHC or bank to hold additional capital (as specified in the numerators of the ratios) to comply with the requirements. The capital buffer requirements (the capital conservation buffer and the countercyclical buffer) effectively increase the minimum ratio requirements, consequently increasing the required capital that covered BHCs or banks have to hold.
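As a rough illustration of how derivatives exposures feed the denominators of these ratios, consider a minimal Python sketch. All figures are hypothetical, and the 8 percent minimum plus 2.5 percent buffer are illustrative round numbers rather than any particular firm's requirements.

```python
def required_capital(risk_weighted_assets, minimum_ratio=0.08, buffer=0.025):
    """Capital needed to satisfy a risk-based minimum ratio plus an
    applicable buffer (e.g., the capital conservation buffer or a
    GSIB surcharge): capital >= RWA * (minimum + buffer)."""
    return risk_weighted_assets * (minimum_ratio + buffer)

# Hypothetical: $10 billion of additional derivatives-related
# risk-weighted assets raises the capital the firm must hold.
before = required_capital(500e9)
after = required_capital(510e9)
additional_capital = after - before   # roughly $1.05 billion
```

The point of the sketch is only that a larger denominator (risk-weighted assets, here enlarged by derivatives exposures) mechanically requires more capital in the numerator, and that buffers compound the effect.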
In addition, the market risk capital rule and the more stringent Basel III risk weights on certain types of risky assets, including derivatives, increase risk-weighted assets, which in turn increase the denominator of many of the ratio requirements. Because capital provides an institution with a cushion to absorb losses from its various activities, including derivatives trading, the capital and leverage requirements identified above help covered banks and their BHCs mitigate losses from swaps activity. The Federal Reserve also established supervisory stress test requirements for certain BHCs and certain banks, in part as a result of Dodd-Frank Act reforms. Dodd-Frank Act stress tests (DFAST) generate forward-looking information about a BHC's capital adequacy and are used, in part, to project how hypothetical baseline, adverse, and severely adverse scenarios would affect the BHC's revenues and losses and ultimately its capital levels. The Federal Reserve also uses the Comprehensive Capital Analysis and Review (CCAR), which builds on information from DFAST, to quantitatively and qualitatively evaluate the capital adequacy and capital planning processes of large BHCs. Under CCAR, the Federal Reserve may object to a BHC's capital plan on either quantitative or qualitative grounds. A quantitative objection is made when the stress test reveals that a firm would not be able to maintain its post-stress capital ratios above the regulatory minimum levels over the planning horizon, taking into account its planned capital distributions. The Federal Reserve may object on qualitative grounds if it finds that the BHC's capital planning processes are not sufficiently reliable. If the Federal Reserve objects on quantitative or qualitative grounds, the BHC may not make any capital distributions without the Federal Reserve's permission.
As required under the Dodd-Frank Act, the Federal Reserve annually defines three stress test scenarios—baseline, adverse, and severely adverse—that it uses for the supervisory stress test and requires DFAST BHCs to use in their annual company-run tests. The scenarios consist of hypothetical projections for macroeconomic and financial variables, such as measures of the unemployment rate, gross domestic product, housing and equity prices, interest rates, and financial market volatility. The stress tests’ post-stress capital ratios, which are an important output of the stress tests, reflect projections of risk-weighted assets and balance sheet and income statement items under the stress scenarios and measure the amount of capital a BHC would have available to cover unexpected losses. Federal Reserve staff told us that stress tests do not separately stress a BHC’s over-the-counter (OTC) derivatives portfolios. However, the stress tests are a forward-looking method to help ensure that a BHC has sufficient capital to withstand losses, including from OTC derivatives, under stressed scenarios. In addition, BHCs with large trading operations, including from derivatives, are subject to additional components in the severely adverse and adverse DFAST scenarios designed to stress their trading and private equity (in the case of the global market shock), or counterparty positions (in the case of the counterparty default component). All section 716 covered banks’ BHCs are subject to DFAST and CCAR stress tests. Six of the covered banks’ BHCs are subject to the global market shock component, and eight of the covered banks’ BHCs are subject to the counterparty default component in their adverse and severely adverse scenarios. Lastly, the Dodd-Frank Act also requires banks and other financial companies with $10 billion in assets or more to conduct annual stress tests pursuant to regulations prescribed by their respective primary financial regulatory agencies. 
All of the banks covered by section 716 are subject to such company-run stress tests. Title VII of the Dodd-Frank Act provides for the registration and regulation of swap dealers and major swap participants and subjects them to CFTC, SEC, and prudential regulatory requirements, such as minimum capital and minimum initial and variation margin requirements (also referred to as collateral requirements, because margin requirements are satisfied by collecting or posting collateral such as cash or certain securities). Prudential regulators’ collateral requirements mandate the exchange of initial and variation margin for noncleared swaps between bank swap dealers and certain counterparties. The amount of required margin varies based on the risk posed by a covered swap entity’s counterparty. Initial margin protects the collecting party from the potential future exposure that could arise from changes in the mark-to-market value of the contract in the event that the margin-posting party defaults. The amount of initial margin reflects the size of the potential future exposure. A covered swap entity generally must post and collect initial margin when it engages in noncleared swaps with another swap entity or with a financial end-user with material swaps exposures. Swap transactions used by other end-users to hedge or mitigate commercial risk are exempt from initial margin requirements. If the end-user is not using the swap for hedging purposes, a covered swap entity must collect initial margin that has been determined to appropriately address the credit risk posed by the counterparty and the risks of such swap. Variation margin protects the transacting parties from the current exposure that has already been incurred by one of the parties from changes in the mark-to-market value of the contract after the transaction has been executed. The amount of variation margin reflects the size of this current exposure. 
A covered swap entity generally must post and collect variation margin on trades with other swap entities or with financial end-users. Swap transactions used by commercial (i.e., non-financial) end-user counterparties to hedge or mitigate commercial risk are exempt from collateral requirements. If the commercial end-user is not using the swap for hedging purposes, a covered swap entity must collect variation margin that has been determined to appropriately address the credit risk posed by the counterparty and the risks of such swap. The prudential regulators also are establishing a new liquidity framework for U.S. BHCs, as well as certain savings and loan holding companies and large insured depository institution subsidiaries, by implementing Basel III and Dodd-Frank Act liquidity requirements. The reforms include two new quantitative liquidity standards: the Liquidity Coverage Ratio (LCR) and the proposed Net Stable Funding Ratio (NSFR) (see table 7). The LCR standard is designed to promote the short-term resilience of the liquidity risk profile of large banking organizations and to improve the banking sector's ability to absorb shocks arising from economic and financial stress over a short term. The proposed NSFR rule focuses on the stability of a company's funding structure over a longer, one-year horizon. The LCR generally applies to banking organizations with $250 billion or more in total consolidated assets or $10 billion or more in on-balance sheet foreign exposure and their subsidiary depository institutions that have assets of $10 billion or more. The LCR final rule also applies a less stringent, modified LCR to BHCs and certain savings and loan holding companies that do not meet these thresholds but have $50 billion or more in total assets. Covered companies must hold high-quality liquid assets at least equal to 100 percent (70 percent for the modified LCR) of their net cash outflows over a 30-day stress period.
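The applicability logic for initial and variation margin described above can be summarized in a simplified decision function. This sketch compresses the prudential regulators' rules considerably (it ignores thresholds, phase-in dates, and other conditions), and the counterparty category labels are this example's own.

```python
def must_exchange_margin(margin_type, counterparty, hedging_commercial_risk=False,
                         material_swaps_exposure=False):
    """Simplified sketch of when a covered swap entity generally must
    post/collect margin on a noncleared swap. margin_type is "initial"
    or "variation"."""
    if counterparty == "swap entity":
        return True
    if counterparty == "financial end-user":
        # Variation margin generally applies; initial margin only when
        # the end-user has material swaps exposures.
        return True if margin_type == "variation" else material_swaps_exposure
    if counterparty == "commercial end-user":
        # Exempt when the swap hedges or mitigates commercial risk;
        # otherwise the covered swap entity must collect margin.
        return not hedging_commercial_risk
    raise ValueError(f"unknown counterparty type: {counterparty}")
```

For example, a noncleared swap with a commercial end-user hedging commercial risk would be exempt, while the same swap used for non-hedging purposes would not.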
As proposed, the NSFR would apply to bank holding companies, certain savings and loan holding companies, and depository institutions that have $250 billion or more in total consolidated assets or $10 billion or more in total on-balance sheet foreign exposure, and to their consolidated depository institution subsidiaries that have total consolidated assets of $10 billion or more. The proposed rule also would apply a less stringent, modified NSFR to BHCs and certain savings and loan holding companies that do not meet these thresholds but have $50 billion or more in total consolidated assets. The proposal would require covered companies to maintain available stable funding that equals or exceeds 100 percent (or 70 percent in the case of the modified NSFR) of their required stable funding on an ongoing basis. Under the liquidity requirements, a covered company's derivative activity can increase the denominator of the ratios and, thus, require the BHC, savings and loan holding company, bank, or thrift to hold more liquid assets or stable funding to comply with the requirements. In the case of the LCR, the denominator of the ratio can increase with (1) net derivative cash outflows (i.e., the amount, if greater than zero, of the payments and collateral made or delivered to each counterparty, less the sum of payments and collateral due from each counterparty, if subject to a valid qualifying master netting agreement), or (2) net collateral outflows (i.e., outflows related to changes in collateral positions that could arise during a period of financial stress). In the case of the NSFR, the denominator increases if an aggregated measure of a covered company's derivatives portfolio is a net asset, as the regulators believe such assets require full stable funding.
The denominator also increases based on a measure of gross derivative values that are liabilities, to account for potential changes in the value of the derivatives that may require the firm to post additional collateral or settlement payments. In addition, in 2012 the Federal Reserve launched the Comprehensive Liquidity Assessment and Review (CLAR) for GSIBs and other large firms. According to Federal Reserve staff, CLAR is an annual supervisory quantitative and qualitative assessment of GSIBs' and other large firms' liquidity positions and liquidity risk management practices. Under CLAR, the Federal Reserve evaluates firms' liquidity positions both through a range of supervisory liquidity metrics and through analysis of firms' internal stress tests that each firm uses to make funding decisions and to determine its liquidity needs. According to Federal Reserve staff, in evaluating the firms' stress testing practices, the Federal Reserve has focused on assumptions regarding liquidity needs for derivatives trading, among other issues. Unlike the capital stress tests, CLAR does not include specific standardized minimum liquidity ratios based on stress tests. But according to Federal Reserve staff, through supervisory direction, stress test ratings downgrades, or enforcement actions, the Federal Reserve directs firms with weak liquidity positions under CLAR's liquidity metrics to improve their practices and, as warranted, their liquidity positions. The Federal Reserve has proposed regulations imposing single counterparty credit limits for BHCs with total consolidated assets of $50 billion or more. The proposal would limit the aggregate net credit exposure, including credit exposure from swaps and other derivatives, of a BHC with total consolidated assets of $50 billion or more to a single counterparty. For U.S.
BHCs, the proposed credit exposure limits are as follows: (1) a GSIB would be required to limit its aggregate net credit exposure to another GSIB or to a nonbank financial company supervised by the Federal Reserve to 15 percent of its tier 1 capital, and to other counterparties to 25 percent of its tier 1 capital; (2) an advanced approaches firm that is not a GSIB would be required to limit its aggregate net credit exposure to a counterparty to 25 percent of its tier 1 capital; and (3) any other covered BHC would have to limit its exposure to a counterparty to 25 percent of its consolidated capital stock and surplus. Additionally, in an effort to restrain risk taking at BHCs and to reduce the potential for these entities to require federal support because of their speculative trading activity, section 619 of the Dodd-Frank Act (also known as the Volcker Rule) prohibits banking entities from engaging in proprietary trading, subject to certain exceptions. Proprietary trading generally refers to using the institution's own funds to profit from short-term price changes and includes derivatives trading. The prohibition applies broadly to banking entities that are registered swap dealers. Exceptions from the prohibition exist for derivatives transactions entered into for purposes of risk-mitigating hedging, market-making, or underwriting. Consequently, section 619 and section 716 have some similarities, although they differ in their scope of covered entities and products. Under section 619, banking entities can engage in proprietary trading in derivatives if they meet the requirements of a permitted activity, including market-making or risk-hedging; under section 716, only swap entities have additional restrictions regarding the types of swap activities in which they may engage.
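The proposed single counterparty credit limits summarized above can be sketched as a small lookup function. The category strings are this example's own labels for the proposal's three tiers, and the figures in the usage line are hypothetical.

```python
def counterparty_credit_limit(bhc_category, tier1_capital=0.0,
                              capital_stock_and_surplus=0.0,
                              counterparty_is_gsib_or_fed_supervised_nonbank=False):
    """Sketch of the proposed cap on aggregate net credit exposure to a
    single counterparty for U.S. BHCs with $50 billion or more in assets."""
    if bhc_category == "gsib":
        # 15 percent of tier 1 capital for exposures to another GSIB or a
        # Federal Reserve-supervised nonbank; 25 percent otherwise.
        pct = 0.15 if counterparty_is_gsib_or_fed_supervised_nonbank else 0.25
        return pct * tier1_capital
    if bhc_category == "advanced approaches (non-GSIB)":
        return 0.25 * tier1_capital
    if bhc_category == "other covered BHC":
        return 0.25 * capital_stock_and_surplus
    raise ValueError(f"unknown category: {bhc_category}")

# Hypothetical: a GSIB with $100 billion of tier 1 capital could carry at
# most $15 billion of net credit exposure to another GSIB.
limit = counterparty_credit_limit("gsib", tier1_capital=100e9,
                                  counterparty_is_gsib_or_fed_supervised_nonbank=True)
```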
While section 716 applies to bank swap dealers, the Volcker Rule generally restricts proprietary trading by insured depository institutions and companies that control insured depository institutions and their affiliates and subsidiaries. In this regard, the Volcker Rule seeks to limit the amount of speculative derivatives exposures that can generate large gains but also unmanageably large losses throughout a BHC, as was the case with American International Group, Inc. (AIG) during the 2007-2009 crisis. Table 8 summarizes Dodd-Frank Act requirements imposed on bank swap dealers or their BHCs that serve to help reduce their probability of failure. The original section 716 of the Dodd-Frank Wall Street Reform and Consumer Protection Act (Dodd-Frank Act) generally prohibited the provision of federal assistance to banks registered as swap dealers that engaged in equity swaps, commodity (except for precious metals) swaps, and noncleared credit default swaps activity, unless, among other things, the institution limited its swap activities to hedging and other similar risk-mitigating activities directly related to the institution's activities. In December 2014, section 716 was amended before the transition periods for complying with the original provision expired, and the provision's scope was reduced to cover only structured finance swaps activity (e.g., swaps on asset-backed securities), unless the swaps were undertaken for hedging or risk management purposes. To analyze the risks associated with swaps covered under the original section 716, we focused on the 11 U.S. banks that were registered as swap dealers and dealt equity, commodity, or noncleared credit default swaps before the original section 716 was amended. Had section 716 not been amended, these 11 U.S. banks would have had to stop engaging in swaps activity for such swaps on or before July 16, 2015, generally when their transition periods expired, in order to retain access to federal assistance.
With the amendment, the 11 bank swap dealers were allowed to continue to deal equity, commodity, or noncleared credit default swaps (with the exception of certain structured finance swaps covered by the amended provision). We analyzed how equity, commodity, and credit derivatives affected the counterparty credit, liquidity, and market risks of the 11 bank swap dealers from July 16, 2015, through September 30, 2016. To analyze counterparty credit and liquidity risks associated with swaps covered under the original section 716, we primarily used data from the 11 U.S. banks' Consolidated Reports of Condition and Income (commonly referred to as Call Reports). As discussed in appendix II, an initial measurement of a bank's counterparty credit risk is the sum of a bank's derivative contracts that have a positive fair value, called gross derivative assets. Similarly, a measurement of a bank's liquidity risk from its derivatives is the sum of the bank's derivative contracts that have a negative fair value, called gross derivative liabilities. In Call Reports, banks report gross derivative assets and liabilities by type of underlying—interest rate, foreign exchange, equity, commodity, and credit derivatives. However, gross derivative assets and liabilities can significantly overestimate a bank's counterparty credit or liquidity exposures, because they do not account for netting that can significantly reduce such risks. As discussed in appendix II, a bank that has multiple derivative contracts with the same counterparty under a legally enforceable master netting agreement can combine all contracts' gross positive and negative fair values (i.e., gross assets and liabilities) into a single net positive or negative fair value (i.e., net asset or liability) with that counterparty.
Such netted derivative assets and liabilities across a bank's counterparties are the primary metric that the Office of the Comptroller of the Currency uses to evaluate banks' counterparty credit risk from their derivatives. In Call Reports, banks report net derivative assets and liabilities of their trading derivatives in aggregate and not by type of underlying (i.e., interest rate, foreign exchange rate, equity, or credit derivative contracts). Because interest rate and foreign exchange swaps were not covered under the original section 716, such data cannot be used to measure a bank's counterparty credit or liquidity risk on a net basis for only its swaps covered under that version of the provision. In light of the data limitations, we took two approaches to measure the 11 banks' counterparty credit and liquidity risks from their trading derivatives. For the four largest bank swap dealers, which account for around 90 percent of all derivatives held by U.S. banks, we used a methodology to estimate net derivative assets and liabilities for the swaps covered and not covered under the original section 716. As mentioned earlier, interest rate and foreign exchange derivatives were not covered by the original section 716 but as of September 30, 2016, accounted for over 90 percent of each of the four banks' total derivatives notional amounts. Thus, not excluding such derivatives from our counterparty credit and liquidity risk measures would significantly overestimate risks arising solely from section 716 covered swaps. For the seven other bank swap dealers, we used a simpler but less precise approach that included their interest rate and foreign exchange derivatives.
For the four largest bank swap dealers (Bank of America, N.A., Citibank, N.A., Goldman Sachs USA Bank, and JPMorgan Chase Bank, N.A.), we developed a methodology using the gross derivative assets and liabilities of their trading derivatives as reported in the September 30, 2016, Call Reports to estimate net derivative assets and liabilities by type of underlying. Our methodology included the following steps. Of the four banks, only one of their bank holding companies (BHC) reports gross trading derivative assets and liabilities by type of underlying in its annual and quarterly filings with the Securities and Exchange Commission (SEC). We divided the reported BHC's net assets and liabilities for its interest rate, foreign exchange, equity, commodity, and credit derivatives by their respective gross assets and liabilities to develop "netting ratios" for each type of underlying, covering data from 2009 through 2016. For example, to calculate the netting ratios for interest rate derivative assets and liabilities, we did the following: We divided (1) interest rate derivative net assets by interest rate derivative gross assets, and (2) interest rate derivative net liabilities by interest rate derivative gross liabilities. We calculated the minimum, median, and maximum netting ratios over the selected period, resulting in three netting ratios for derivative assets under each type of underlying (i.e., interest rate, foreign exchange, equity, commodity, and credit derivatives) and three netting ratios for derivative liabilities under each type of underlying. We calculated minimum, median, and maximum netting ratios over the time period because the banks' netting ratios may differ from the BHC's netting ratios. For each bank, we multiplied the minimum, median, and maximum netting ratios by the bank's respective gross derivative assets or liabilities by type of underlying (as reported in the September 30, 2016, Call Reports).
These calculations produced a range of minimum, median, and maximum estimates of net trading derivative assets and liabilities by type of underlying for each bank. We summed the minimum, median, and maximum estimates to produce three estimates of total net derivative assets and liabilities for each bank. To determine whether we should use the minimum, median, or maximum estimate, we compared each estimated total against the total net derivative assets and liabilities for each bank's trading derivatives, as reported in the September 30, 2016, Call Reports. We selected the estimates closest in value to the actual reported values. We used the minimum estimated totals for Goldman Sachs USA Bank, the median estimated totals for JPMorgan Chase Bank N.A., and the maximum estimated totals for Citibank N.A. and Bank of America N.A. We also compared each bank's actual netting ratios and our estimated netting ratios on a portfolio basis (e.g., total net derivative assets and liabilities divided by total gross derivative assets and liabilities). JPMorgan Chase Bank N.A.'s, Citibank N.A.'s, and Bank of America N.A.'s actual and estimated netting ratios differed by less than half of a percentage point. Our estimated netting ratios for Goldman Sachs USA Bank were 2.0 percentage points and 1.5 percentage points higher than its actual netting ratios. After selecting the estimates of the net derivative assets and liabilities for each bank's total trading derivatives that were closest in value to the bank's actual reported values, we then used the estimates to measure the bank's net exposures to swaps covered under the original section 716. For derivative assets and liabilities, each total net estimate comprises net estimates of the bank's interest rate, foreign exchange, equity, commodity, and credit derivatives.
We added the estimated net derivative assets of each bank's equity, commodity, and credit derivatives to estimate each bank's counterparty credit exposure associated with section 716 originally covered swaps. Similarly, we added the estimated net derivative liabilities of each bank's equity, commodity, and credit derivatives to estimate each bank's liquidity exposure associated with section 716 originally covered swaps. Our methodology assumes that the four banks' netting ratios are comparable to the netting ratios of the BHC that reported gross and net derivative assets and liabilities by underlying. To the extent this assumption does not hold true, such as because of differences in the composition of the banks' derivatives trading portfolios or counterparties, our estimates would be adversely affected. As discussed earlier, to assess the reasonableness of our assumption and estimates, we compared our estimates of each bank's total net derivative assets and liabilities with each bank's actual total net derivative assets and liabilities. Also, we recognize that our estimates likely overestimate the banks' counterparty credit and liquidity exposures associated with section 716 originally covered swaps, in part because they include or likely include (1) swaps that were used for hedging and thus would have been permissible under the original section 716, (2) swaps that the banks entered into before section 716 would have taken effect and thus could have been retained by the banks, and (3) swaps that were not covered by the original section 716, such as commodity swaps referencing bullion or cleared credit default swaps. For the other seven bank swap dealers (those that, had the provision not been amended, would have had to stop engaging in activity for swaps covered by the original section 716 in order to retain access to federal assistance), we used the total net derivatives trading assets and liabilities as reported in the September 30, 2016, Call Reports.
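The estimation steps described above can be sketched in Python. All inputs below are made up for illustration; only the method (minimum, median, and maximum net-to-gross ratios from the reporting BHC's history, applied to a bank's gross figures, with the estimate closest to the bank's reported total retained) follows the text.

```python
import statistics

def netting_ratios(net_gross_history):
    """Min, median, and max net-to-gross netting ratios for one type of
    underlying, from a history of (net, gross) pairs."""
    ratios = [net / gross for net, gross in net_gross_history]
    return {"min": min(ratios), "median": statistics.median(ratios), "max": max(ratios)}

def estimate_total_net(bank_gross_by_type, ratios_by_type, which):
    """Apply the chosen ratio ("min", "median", or "max") to each
    underlying's gross amount and sum across underlyings."""
    return sum(gross * ratios_by_type[t][which]
               for t, gross in bank_gross_by_type.items())

def pick_closest_estimate(bank_gross_by_type, ratios_by_type, actual_total_net):
    """Keep whichever of the three estimates best matches the bank's
    actual reported total net figure."""
    return min(("min", "median", "max"),
               key=lambda w: abs(estimate_total_net(bank_gross_by_type,
                                                    ratios_by_type, w) - actual_total_net))

# Hypothetical inputs: a BHC history for one underlying, then a bank's
# gross figure for the same underlying.
ratios = {"interest rate": netting_ratios([(10, 100), (20, 100), (30, 100)])}
gross = {"interest rate": 200.0}
best = pick_closest_estimate(gross, ratios, actual_total_net=58.0)  # "max"
```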
As discussed earlier, such data include interest rate and foreign exchange derivatives that were not covered by the original section 716. As with the four largest bank swap dealers, the majority of the derivatives of the other seven dealers are interest rate and foreign exchange derivatives. However, they hold significantly fewer derivatives than the four largest bank swap dealers. Because of such differences, we could not use our netting ratios to estimate the net derivative assets and liabilities of the seven banks' equity, commodity, and credit derivatives based on their reported gross derivative assets and liabilities. As a result, our measures of the derivatives-related counterparty credit and liquidity risks associated with section 716 originally covered swaps for these seven bank swap dealers overestimate the actual counterparty credit and liquidity risk they face from those swaps. Counterparty credit risk is the potential for financial losses resulting from the failure of a borrower or counterparty to perform on an obligation. For the 11 U.S. banks, our analyses indicate that the banks held the capital needed to support counterparty credit exposures (accounting for netting but not collateral) from their equity, commodity, or credit derivatives as of September 30, 2016. Our analyses also show that the fair value of the collateral held by banks in relation to their over-the-counter (OTC) trading derivative counterparties was, on average, sufficient to cover at least 68 percent of net current credit exposures of their derivatives. These results indicate that the banks had capital to absorb potential losses from their swaps covered by the original section 716 and that such losses likely would have been mitigated to a significant degree with the collateral received from bank OTC derivative counterparties.
For the four largest bank swap dealers, our analyses indicate that their estimated net counterparty credit exposures from their swaps covered by the original section 716 range from around 1 percent to 10 percent of their total capital as of September 30, 2016. In addition, the four largest bank swap dealers on average collectively held collateral against 99 percent of their collective net current credit OTC derivatives exposures (see table 9). However, this percentage does not mean that almost all current credit exposure would be mitigated with collateral, as some counterparties overcollateralize and others undercollateralize exposures, and collateral is not fungible across swap counterparties. For the seven other bank swap dealers, our analyses show that their net counterparty credit exposures from all of their trading derivatives—including swaps not covered under the original section 716—ranged from around 4 percent to 16 percent of their total capital as of September 30, 2016. In addition, these banks, on average, collectively held collateral against 68 percent of their collective net current credit OTC derivatives exposures (see table 10). Again, this percentage does not mean that 68 percent of their current credit exposure would be mitigated with collateral, as some counterparties overcollateralize and others undercollateralize exposures, and collateral is not fungible across swap counterparties. Liquidity risk is risk to an institution's financial condition from its inability to meet its contractual obligations. Derivatives liabilities expose banks to liquidity risk, in part because the derivative contracts typically require the banks to make regular payments as agreed in the contracts and post collateral to counterparties as the value of the contracts moves in the counterparties' favor.
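The collateral-coverage caveat above is worth making concrete. In this hypothetical two-counterparty sketch, aggregate coverage is 99 percent, yet because collateral mitigation is capped per counterparty and collateral is not fungible across counterparties, a meaningful slice of exposure remains uncovered.

```python
# Hypothetical per-counterparty net current credit exposures and the
# collateral held against each (in millions of dollars).
positions = [
    {"exposure": 100.0, "collateral": 130.0},  # over-collateralized
    {"exposure": 100.0, "collateral": 68.0},   # under-collateralized
]

# Aggregate coverage: total collateral over total exposure.
aggregate_coverage = (sum(p["collateral"] for p in positions)
                      / sum(p["exposure"] for p in positions))   # 0.99

# Excess collateral from one counterparty cannot offset a shortfall
# with another, so some exposure remains uncovered despite 99% coverage.
uncovered_exposure = sum(max(p["exposure"] - p["collateral"], 0.0)
                         for p in positions)                     # 32.0
```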
Net derivative liabilities, however, do not take into account collateral that the bank may have already posted to its counterparties (and thus would be available to counterparties to absorb losses). For the 11 U.S. banks, our analyses indicate that the banks held the high-quality liquid assets needed to support their equity, commodity, or credit derivatives’ payment and collateral obligations as of September 30, 2016. This result suggests that the banks would have had liquidity to meet the obligations from their equity, commodity, and credit derivatives. To assess liquidity risk, we used estimated or reported net derivative liabilities for banks’ trading derivatives as our measure of the banks’ derivatives liquidity risk, and we compared those values with the banks’ high-quality liquid assets. For the four largest bank swap dealers, our analyses indicate that the estimated net derivative liabilities for their equity, commodity, and credit derivatives (not accounting for posted collateral) constituted from less than 1 percent to about 5 percent of the banks’ high-quality liquid assets as of September 30, 2016. Because banks have posted collateral for some of these derivatives and because our analyses do not account for such posted collateral, our percentages overestimate the actual derivatives-related liquidity risk exposures. For the other seven bank swap dealers, our analyses show that the actual total net trading derivative liabilities (including swaps not covered under the original section 716 but not accounting for collateral) constituted from about 1 percent to about 9 percent of the banks’ high-quality liquid assets as of September 30, 2016. The total includes interest rate and foreign exchange derivatives, which were not covered by the original section 716 and typically comprise the majority of the banks’ trading derivatives.
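The liquidity-risk comparison described above reduces to a simple ratio: each bank's net trading derivative liabilities divided by its high-quality liquid assets (HQLA). A minimal sketch, with entirely invented bank names and figures:

```python
# Hypothetical sketch of the liquidity-risk measure described above: net
# trading derivative liabilities (not accounting for posted collateral) as a
# share of high-quality liquid assets (HQLA). All figures are invented,
# in $ billions; bank names are placeholders.

banks = {
    "Bank A": {"net_derivative_liabilities": 4.0,  "hqla": 400.0},
    "Bank B": {"net_derivative_liabilities": 18.0, "hqla": 200.0},
    "Bank C": {"net_derivative_liabilities": 2.5,  "hqla": 250.0},
}

shares = {name: b["net_derivative_liabilities"] / b["hqla"] for name, b in banks.items()}
for name, share in shares.items():
    print(f"{name}: {share:.1%} of HQLA")
print(f"Range: {min(shares.values()):.1%} to {max(shares.values()):.1%}")
```

Because posted collateral is ignored, this measure overstates the liquidity demand a bank would actually face, which is the same caveat the analysis above applies to its percentages.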
Market risk is the potential for financial losses due to the increase or decrease in the value or price of an asset or liability resulting from broad movements in prices such as changes in interest rates, foreign exchange rates, equity prices, or commodity prices. To estimate market risks associated with swaps, we analyzed the quarterly net gains or losses from trading commodity, equity, and credit derivatives and cash instruments for the 11 banks that would have been required to stop engaging in activity for such swaps, or lose access to federal assistance, under the original section 716 from the first quarter of 2007 through the third quarter of 2016. Our analyses of the 11 banks’ quarterly mark-to-market losses from trading equity, commodity, and credit derivatives between the first quarter of 2007 and the third quarter of 2016 show that banks held the capital needed to support related trading losses. For the four largest bank swap dealers, our analysis found that quarterly net losses did not exceed 7.6 percent of any bank’s capital from the first quarter of 2007 through the third quarter of 2016 (see fig. 8). For the other seven bank swap dealers, our analysis found that their quarterly net losses ranged from 0 percent to about 2 percent of any bank’s capital for six of the seven banks between the first quarter of 2007 and the third quarter of 2016. For the other bank, its largest loss during a quarter was around 14 percent of its capital (see fig. 9). More forward-looking measures of market risk posed by derivatives suggest that the expected losses from derivatives may be relatively small under regular and stressed market conditions. First, banks primarily control market risk in trading operations by establishing limits against potential losses using value-at-risk (VaR) models. The models use historical data to quantify the potential losses from adverse market moves in normal markets.
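The simplest form of the VaR modeling described above is historical simulation: rank past profit-and-loss outcomes and read off the loss threshold at a chosen confidence level. The sketch below is illustrative only, not any bank's actual model, and the P&L series is invented.

```python
# Minimal historical-simulation VaR sketch (illustrative only; not any bank's
# actual model). VaR at confidence c is the loss threshold that daily P&L is
# expected to breach only (1 - c) of the time, based on past outcomes.

def historical_var(pnl: list[float], confidence: float = 0.99) -> float:
    """Return VaR (a positive loss amount) from a historical daily P&L series."""
    losses = sorted(-p for p in pnl)            # losses as positive numbers, ascending
    idx = int(confidence * len(losses))         # cutoff index for the (1 - c) tail
    idx = min(idx, len(losses) - 1)
    return max(losses[idx], 0.0)

# Invented daily P&L in $ millions:
pnl = [1.2, -0.8, 0.3, -2.5, 0.9, -1.1, 0.4, -0.2, 1.5, -3.0]
print(f"99% one-day VaR: ${historical_var(pnl, 0.99):.1f}M")  # $3.0M
```

Because the model draws only on historical data from normal markets, it says little about losses under stress, which is why the report turns next to the Federal Reserve's supervisory stress tests as a complementary measure.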
The reported VaR measures for the BHCs of the four largest bank swap dealers indicate that the market risk from each BHC’s trading activities, which include its section 716 bank’s derivatives activities, is less than 1 percent of its capital: for example, ranging from 0.02 percent to 0.22 percent of capital in the third quarter of 2016. Second, as discussed in appendix III, the Board of Governors of the Federal Reserve System’s (Federal Reserve) supervisory stress tests estimate losses that large BHCs may suffer, including from their derivatives, under stressed market conditions. The BHCs of the 11 bank swap dealers are subject to the Federal Reserve’s stress tests, which evaluate the BHCs’ revenues and losses and ultimately their capital levels under baseline, adverse, and severely adverse scenarios. In its 2015 and 2016 reviews, the Federal Reserve did not object on quantitative or qualitative grounds to any of the capital plans, including the supervisory stress test results, of the 11 BHCs. All 11 BHCs were able to maintain at least minimum regulatory capital requirements under stressed scenarios and had no significant deficiencies in their capital planning processes. In addition, 6 of the 11 BHCs are subject to the additional global market shock component, and 8 of the 11 BHCs are subject to the counterparty default component in their adverse and severely adverse scenarios. Prudential regulators are implementing the Dodd-Frank Wall Street Reform and Consumer Protection Act’s (Dodd-Frank Act) resolution reforms to help ensure that large bank holding companies (BHC), including their banks, can be resolved in an orderly manner, if necessary. These reforms, if successful, can help BHCs with banks that are large swap dealers wind down their swaps in an orderly manner and preserve their value. Fifteen U.S. banks are provisionally registered as swap dealers with the Commodity Futures Trading Commission. However, four U.S.
bank swap dealers—Bank of America, N.A.; Citibank, N.A.; Goldman Sachs Bank USA; and JPMorgan Chase Bank N.A.—account for the large majority of derivatives held by U.S. banks. These bank swap dealers are subsidiaries of BHCs that the Board of Governors of the Federal Reserve System (Federal Reserve) has identified as global systemically important BHCs (GSIB) in light of the threat their failure or material financial distress would pose to U.S. financial stability. This section’s discussion and analyses primarily focus on the four U.S. GSIBs and their bank swap dealers. In the event of their failure, the four BHCs with the largest U.S. bank swap dealers plan to enter bankruptcy but keep their operating subsidiaries (e.g., banks and broker-dealers) solvent, in part to help them wind down their swaps in an orderly manner. The Dodd-Frank Act requires certain institutions, including the four BHCs, to develop resolution plans for rapid and orderly resolution in the event of material financial distress or failure. According to the Federal Reserve and the Federal Deposit Insurance Corporation (FDIC), resolution planning cannot guarantee that a BHC’s resolution would be executed smoothly, but the preparations can help ensure that the BHC could be resolved under bankruptcy without requiring government support or imperiling the broader financial system. We concluded in 2016 that whether the largest BHCs’ resolution plans would facilitate their rapid and orderly resolution under the U.S. Bankruptcy Code is uncertain, in part because none has used its plan to go through bankruptcy. Since 2012, the four U.S. BHCs with the largest swap dealers, along with other large U.S. BHCs, have submitted resolution plans annually to the Federal Reserve and FDIC. Through their review of the plans, the regulators have provided additional guidance and feedback based on their expectations.
Based on their review of the 2015 plans submitted by these four BHCs, the regulators jointly determined that two of the plans were not credible or would not facilitate an orderly resolution under the U.S. Bankruptcy Code. The regulators sent these two BHCs feedback letters that identified the plan deficiencies and required corrective actions. In addition, in their feedback letters, the regulators identified shortcomings in all four of the BHCs’ resolution plans and directed them to address the shortcomings in their plans submitted by July 1, 2017. As summarized in table 11, the regulators jointly identified in their feedback letters to the four BHCs a deficiency or shortcoming with each one’s 2015 plan to wind down its derivatives in an orderly manner. Following their review of the 2015 resolution plans, the Federal Reserve and FDIC issued new guidance to all of the BHCs required to submit resolution plans by July 1, 2017. As part of the guidance, the regulators included a section on derivatives and trading activities that applied to the four U.S. BHCs with the largest bank swap dealers. According to the guidance, a dealer’s plan to stabilize and wind down a large derivative portfolio in an orderly manner following the BHC’s bankruptcy raises a number of significant issues that the four U.S. BHCs should address in their 2017 plans. As summarized in table 12, the four U.S. BHCs reported in the public sections of their 2016 plan filings a high-level summary of selected actions that they have taken. In the public sections of their resolution plans, the four U.S. BHCs with the largest bank swap dealers generally have adopted the Single Point of Entry (SPOE) strategy as their preferred resolution strategy under the U.S. Bankruptcy Code. Under the SPOE strategy, only the top-tier BHC would enter bankruptcy.
The BHC would use its financial resources, as needed, to recapitalize and support its operating subsidiaries to keep them solvent and preserve their going-concern value. For example, a loss that caused a BHC to fail would be passed up from the subsidiary that incurred the loss and would be absorbed by the BHC’s equity holders and unsecured creditors, which would have the effect of recapitalizing the BHC’s subsidiary. As shown in figure 10, the SPOE resolution approach serves to enable a BHC’s subsidiaries to continue to operate while the BHC enters bankruptcy, reducing the potential for negative impact on its customers and the overall economy. In the example, the bank transfers losses up to its BHC in the event of distress, and only the BHC enters bankruptcy. As permitted by the bankruptcy court, the BHC transfers its subsidiaries to a new BHC, and these subsidiaries are then sold or wound down in an orderly manner. While the four U.S. BHCs face a number of obstacles or challenges in implementing their SPOE strategies, they and prudential regulators are taking actions to address such obstacles or challenges. For example, the Federal Reserve has finalized a rule to help ensure that the BHCs have sufficient financial resources to implement their SPOE strategies. Also, the regulators and BHCs are reducing the ability of swap counterparties to a BHC’s bank swap dealer to terminate their swaps early in the event of the BHC’s filing for bankruptcy and cause a disorderly wind-down of the bank swap dealer’s swaps and other qualified financial contracts. Total Loss-Absorbing Capacity. To implement their SPOE strategies, the four U.S. BHCs with the largest bank swap dealers must have sufficient financial resources to absorb losses by their banks or other operating subsidiaries and prevent them from failing. 
In January 2017, the Federal Reserve finalized its total loss-absorbing capacity rule, the objective of which is to reduce the financial impact of a failure by requiring companies to have sufficient loss-absorbing capacity. The rule requires, among other things, covered BHCs to maintain an outstanding minimum level of eligible external total loss-absorbing capacity composed of capital issued by the BHC and eligible external long-term debt. The term “external” conveys that the requirement would apply to loss-absorbing instruments issued by the GSIB to third-party investors, and the instrument would be used to pass losses from the BHC to the third-party investors in bankruptcy or other resolution. For example, while a bank or other subsidiary would pass up its losses to its BHC in the event of distress, the BHC would pass its losses in the event of distress to its equity holders and unsecured creditors, including external long-term debt holders. Cross-Default Rights and ISDA Stay Protocol. Even if the four U.S. BHCs with the largest bank swap dealers had sufficient financial resources to keep their banks solvent under their SPOE strategies, the potential for their banks’ counterparties to terminate their swaps early under their International Swaps and Derivatives Association (ISDA) Master Agreements could undermine the banks’ ability to wind down or sell their swaps in an orderly manner. Under an ISDA Master Agreement, a solvent bank’s counterparties may exercise their cross-default rights to terminate their swaps with the bank early if the bank’s BHC files for bankruptcy. As illustrated by the failure of Lehman Brothers, such counterparty actions could result in a disorderly unwinding of the bank’s swaps that causes the bank to suffer avoidable losses on its swaps and contributes to its failure.
For example, counterparties to whom the bank owes money may terminate their swaps early, and counterparties that owe the bank money may not terminate their swaps but may suspend their swap-related payments—exposing the bank to price risk and reducing the bank’s liquidity. Banking regulators and derivatives market participants have taken steps to address the threat that early terminations of swaps can pose to a BHC’s orderly resolution. Working with its members, U.S. and foreign regulators, and others, ISDA published protocols in 2014 and 2015 that enable parties to ISDA Master Agreements and certain other financial contracts to amend their financial contracts, in effect, to recognize the applicability of special resolution regimes (including Orderly Liquidation Authority, discussed subsequently) and to restrict cross-default provisions to facilitate orderly resolution under the U.S. Bankruptcy Code. For example, provided certain conditions are met, parties that adhere to the 2015 protocol generally would be prohibited from exercising their cross-default rights to terminate their swaps with a BHC’s bank early if the bank’s BHC entered bankruptcy. In 2016, the Federal Reserve, FDIC, and OCC separately proposed rules that generally require a U.S. GSIB and its subsidiaries to amend their swaps (and other qualified financial contracts), so that their counterparties would be stayed from exercising their cross-default rights based on the GSIB’s or its subsidiary’s entry into resolution. The proposed rules would require GSIBs and their subsidiaries to amend the contractual default provisions of the financial contracts, including by adhering to the ISDA 2015 protocol. The four U.S. BHCs with the largest bank swap dealers (and their bank swap dealers) have adhered to the protocol in order to enhance their ability to implement their SPOE strategy and avoid a disorderly wind-down of their swaps. Although the four U.S.
BHCs with the largest bank swap dealers plan to keep their banks solvent under their SPOE strategies, circumstances could arise in which a BHC lacks the financial resources to absorb losses suffered by its bank. If the BHC’s bank is insolvent and cannot be recapitalized by the BHC, the bank would be resolved by FDIC under the Federal Deposit Insurance Act. Federal assistance backed by taxpayers could be needed to help temporarily support FDIC’s Deposit Insurance Fund if the failed bank’s losses, for example, were large enough to deplete the fund. However, FDIC could use its authority under the Federal Deposit Insurance Act to help preserve the value of the bank’s swaps and reduce taxpayer risk. For example, under its statutory authority, FDIC may transfer a failed bank’s swaps and other derivatives to a bridge bank or other financial company within 1 business day after the bank’s failure, preventing the exercise of the default rights of the bank’s counterparties to terminate their swaps. As a result, FDIC could avoid the selective terminations of swaps by the failed bank’s counterparties and, in turn, the value destruction that such terminations could produce, as was the case in Lehman’s failure. In cases where the failure of a large BHC and its resolution under the U.S. Bankruptcy Code would have serious adverse effects on U.S. financial stability, the Dodd-Frank Act’s Orderly Liquidation Authority serves as the backstop alternative. Orderly Liquidation Authority gives FDIC the authority, subject to certain constraints, to resolve large financial companies outside of the bankruptcy process. Since 2012, FDIC has been developing a SPOE strategy to implement its Orderly Liquidation Authority. Under its SPOE strategy, FDIC would be appointed receiver of the top-tier U.S. holding company and establish a bridge financial company into which it would transfer the holding company’s assets to preserve their value. 
The bridge company would continue to provide the holding company’s functions, and the company’s subsidiaries would remain operational. As its SPOE strategy has evolved, FDIC has focused on developing multiple options for liquidating the subsidiaries, such as by winding down or selling subsidiaries or selling a subsidiary’s assets. Title II of the Dodd-Frank Act authorizes FDIC to transfer swaps and other qualified financial contracts to the bridge company or another solvent financial company. To give FDIC time to make such transfers and to avoid a disorderly wind-down of swaps, Title II generally prohibits counterparties to qualified financial contracts from exercising their default rights with the holding company or its subsidiaries. By keeping the holding company’s subsidiaries solvent and preventing swap terminations, FDIC could minimize market disruptions and preserve the value of the swaps. According to FDIC, the agency intends to maximize the use of private funding in an Orderly Liquidation Authority resolution and expects the bridge financial company and its subsidiaries to obtain funding from customary sources of liquidity in the private markets. If private-sector funding cannot be obtained, the Dodd-Frank Act provides for an Orderly Liquidation Fund to serve as a back-up source of liquidity support that would be available only on a fully secured basis. Ultimately, any Orderly Liquidation Fund borrowings are to be repaid either from recoveries on the assets of the failed firm or, in the event of a loss on the collateralized borrowings, from assessments against the eligible financial companies. The law expressly prohibits taxpayer losses from the use of Orderly Liquidation Authority.
As amended, section 716 of the Dodd-Frank Wall Street Reform and Consumer Protection Act (Dodd-Frank Act)—also known as the “swaps push-out rule”—effectively required banks registered as swap dealers or security-based swap dealers to stop engaging in certain types of swaps or security-based swap activities, or be prohibited from receiving federal assistance. Officials from four banks told us that they engaged in structured finance swaps activity and moved such activity to their nonbank swap dealer affiliates by July 2015, when their 2-year extension periods expired. These four banks are supervised by the Office of the Comptroller of the Currency (OCC) or the Board of Governors of the Federal Reserve System (Federal Reserve). Regulators told us they have not had major difficulties overseeing the amended section 716, and the four banks told us they have not had major difficulties implementing it. Regulators stated that they assess compliance, including with section 716 requirements, through ongoing supervision and examinations. Unlike section 619 (also referred to as the Volcker rule) and some other Dodd-Frank Act provisions, section 716 does not require prudential regulators to issue any rules. Federal Reserve and OCC officials told us that they chose not to issue any rules to implement the amended section 716 because they perceive the provision’s requirements to be sufficiently clear. For example, the amended section 716 defines the term “structured finance swap” as a swap or security-based swap based on an asset-backed security (or group or index primarily comprised of asset-backed securities); as a result, Federal Reserve and OCC said that they did not need to issue a rule to define the term. Although the amended section 716 does not require the prudential regulators to issue rules, it permits them to issue a joint rule to make additional exemptions to section 716 restrictions on structured finance swap activity. However, the regulators told us that they do not currently plan to issue any such rules.
The four bank swap dealers told us that they have not encountered any major challenges in complying with the amended section 716 and do not need guidance from the prudential regulators. Similarly, bank swap dealers that engaged in structured finance swaps activity told us that they were able to move their structured finance swaps activity to their affiliated broker-dealers to comply with the amended provision. The banks relied on their legal teams to identify which units within their banks traded covered swaps and on their operations teams to implement controls to prevent these units from trading impermissible swaps. Banks told us that, because they also must comply with Volcker rule restrictions, they use the Volcker rule’s definition of risk-mitigating hedging to interpret section 716’s exemption. Regulators conduct onsite supervision of banks within their jurisdiction, including those affected by the amended section 716. The regulators’ onsite supervision includes monitoring activities, assessing risks, completing core assessments, and communicating with bank management throughout the supervisory cycle. Examiners regularly review management information system reports and profit and loss reports from bank dealers’ trading desks to identify any structured finance swap activity requiring further investigation. For example, Federal Reserve and OCC staff told us that they can detect compliance issues related to section 716 through their supervision of the banks’ and bank holding companies’ (BHC) compliance with, among other things, the Volcker rule’s reporting requirements. Federal Reserve and OCC staff told us that they take a risk-based supervisory approach and would weigh the volume and complexity of trades associated with section 716 in that overall approach. OCC conducts targeted examinations in various areas, including for section 716.
These targeted examinations generally include reviewing the banks’ policies, associated controls, and governance framework for complying with statutory requirements, including section 716, and meeting with key personnel across the bank’s affected business lines and independent control functions to assess bank readiness. In addition to the contact name above, Richard Tsuhara (Assistant Director), Silvia Arbelaez-Ellis (Analyst-in-Charge), Jessica Artis, Rachel DeMarcus, Risto Laboski, Courtney L. LaFountain, Marc W. Molino, Patricia Moye, Jennifer Schwartz, and Kwame Som-Pimpong made significant contributions to this report.

Given the role of derivatives in contributing to the 2007-2009 financial crisis, the Dodd-Frank Act includes various provisions that subject the swap market and its participants to greater regulation, including section 716. Proponents of section 716 sought to prohibit banks from engaging in riskier swap activities that could cause the banks to need federal assistance backed by taxpayers. Opponents of section 716 maintained that swaps trading by banks did not significantly contribute to the financial crisis. In late 2014, section 716 was amended to narrow its scope of prohibited swap activities. Banks generally were required to begin complying with the amended section 716 in July 2015. GAO was asked to examine various effects of the amended and original versions of section 716. This report examines the provision's effect on U.S. banks and their BHCs, end-users of swaps, and taxpayers in light of other Dodd-Frank Act reforms. GAO analyzed publicly available data on swaps and derivatives held by banks and their BHCs and reviewed laws and regulations applicable to swaps as well as academic, industry, and GAO reports, research, and other materials. GAO also interviewed federal banking and swaps regulators, 15 U.S.
banks that were registered as swap dealers and thus covered by section 716, end-users that were or would have been affected by section 716, an industry association, and experts, such as academics researching the swaps market. Since the 1980s, banks have been engaging in swaps: financial contracts (derivatives) in which two parties “swap,” or exchange, payments based on changes in asset prices or other values. A variety of firms (end-users) use swaps to hedge risk, to speculate, or for other purposes. For example, an airline may use swaps to lock in its fuel price to hedge against a future price rise. End-users engage in swaps through swap dealers, and some large banks act as swap dealers, exposing them to risks. Section 716 of the Dodd-Frank Wall Street Reform and Consumer Protection Act (Dodd-Frank Act)—also known as the “swaps push-out rule”—requires banks registered as swap dealers, in effect, to stop engaging in certain swap activities to remain eligible for federal financial assistance but allows them to “push out” such activities to nonbank affiliates within the same bank holding company (BHC). As originally enacted, section 716 would have covered certain equity, commodity, and credit default swaps activities, but amendments made in 2014 now cover only certain swap activity based on asset-backed securities. GAO analyses of the effects of the amended and original versions of section 716 on U.S. banks and their BHCs, swap end-users, and taxpayers in light of other Dodd-Frank Act reforms found the following: A significantly larger volume of swaps would have been pushed out under the original section 716. The amended section 716 affected four U.S. banks and caused them to push out an estimated $265 billion of swaps in notional value as of September 30, 2016, or less than 1 percent of their total derivatives. The original version would have affected 11 U.S. 
banks (including the 4 banks) and could have affected an estimated $10.5 trillion of swaps in notional value, or about 6 percent of their total derivatives, if the provision had not been amended. Section 716 increases risks and costs for BHCs and end-users. Under the amended version, banks moved their covered swap activities to nonbank affiliates, requiring the affiliates and clients to incur legal and operational costs. Banks and end-users told GAO that moving the swaps can increase their risks and, in turn, costs. Such risks and costs likely would have been greater under the original version because of its broader scope. Other Dodd-Frank Act provisions mitigate risks. Section 716 seeks to reduce a bank's risk of failure and potential need for federal assistance, but the act's other reforms also seek to mitigate such risks. For example, regulators have subjected banks to enhanced prudential and other requirements that can help to mitigate their swap-related risks. Consistent with such requirements, GAO's analyses indicate the 11 U.S. banks that would have been affected by the original section 716 held financial resources needed to support their swap-related credit, liquidity, and market risk exposures as of September 30, 2016. Federal banking regulators and BHCs with the largest bank swap dealers are continuing to develop resolution strategies that seek to resolve a large BHC in an orderly manner and without federal assistance if it were to fail. These strategies, if successful, can help BHCs to wind down or sell their swaps in an orderly manner and avoid value destruction.
Initial response to a public health emergency of any type, including a bioterrorist attack, is generally a local responsibility that could involve multiple jurisdictions in a region, with states providing additional support when needed. The federal government could also become involved in investigating or responding to an incident. In addition, the federal government provides funding and resources to state and local entities to support preparedness and response efforts. Response to a release of a biological agent, whether covert or overt, would generally begin at the local level, with the federal government becoming involved as needed. Having the necessary resources immediately available at the local level to respond to an emergency can minimize the magnitude of the event and the cost of remediation. In the case of a covert release of a biological agent, it could be hours or days before exposed people start exhibiting signs and symptoms of the disease. Figure 1 presents the probable series of responses to such a bioterrorist incident. Just as in a naturally occurring outbreak, exposed individuals would seek out local health care providers, such as private physicians or medical staff in hospital emergency departments or public clinics. Health care providers would report any illness patterns or diagnostic clues that might indicate an unusual infectious disease outbreak associated with the intentional release of a biologic agent to their state or local health departments. Local and state health departments would collect and monitor data, such as reports from health care providers, for disease trends and outbreaks. Clinical samples would be collected for laboratorians to test for identification of illnesses. Epidemiologists in the health departments would use the disease surveillance systems to provide for the ongoing collection, analysis, and dissemination of data to identify unusual patterns of disease. 
The federal government could also become involved, as needed, in investigating or responding to an incident. For certain high-risk diseases, such as the Ebola virus, sample testing would be done at a federal Biosafety Level 4 laboratory equipped to handle dangerous and exotic biological agents. CDC has one such laboratory for testing of these dangerous agents. CDC also provides state and local jurisdictions with assistance on epidemiological investigations and treatment advice. Other federal agencies may also assist state and local jurisdictions in the investigation of and response to bioterrorism and other public health emergencies. Prior to January 2002, HHS distributed funds for bioterrorism preparedness through two main programs. From 1999 through 2001 it funded state and local health departments through CDC’s Bioterrorism Preparedness and Response Program. From 1996 through 2001 it provided funding to local jurisdictions, targeting police, firefighters, emergency medical responders, hospitals, and public health agencies through the Metropolitan Medical Response System (MMRS) of the Office of Emergency Response (OER), formerly the Office of Emergency Preparedness, which was transferred to the Department of Homeland Security on March 1, 2003. CDC and HRSA are expanding or developing programs to help state and local governments, as well as hospitals and other health care entities, improve preparedness for and response to bioterrorism and other emergencies. In January 2002, HHS announced the allocation of $1.1 billion through CDC, HRSA, and OER for state and local bioterrorism preparedness. This funding supports three separate but related efforts—CDC’s Public Health Preparedness and Response for Bioterrorism program, HRSA’s Bioterrorism Hospital Preparedness Program, and OER’s MMRS program.
States applying for funding through cooperative agreements under CDC’s Public Health Preparedness and Response for Bioterrorism program and HRSA’s Bioterrorism Hospital Preparedness Program were required to submit bioterrorism preparedness plans to HHS by April 15, 2002. All 50 states and four major municipalities applied for and received funding through these cooperative agreements. The noncompetitive cooperative agreements provide that CDC and HRSA funds must be used to supplement and not supplant any current federal, state, and local funds that would otherwise be used for bioterrorism and other public health preparedness activities and that these activities should be coordinated with any MMRS programs in the jurisdiction. Also in 2002, additional funding was appropriated for expanding the National Pharmaceutical Stockpile, renamed the Strategic National Stockpile, and supporting bioterrorism-related research at the National Institutes of Health’s National Institute of Allergy and Infectious Diseases. To determine eligibility for the funding, CDC required the applicants to submit plans for use of the funds in six focus areas: preparedness planning and readiness assessment, surveillance and epidemiology capacity, laboratory capacity for biological agents, communications and information technology, risk communication and health information dissemination, and education and training. Each focus area included critical capacities that had to be addressed. These are the core expertise and infrastructure elements that need to be in place as soon as possible to enable a public health system to prepare for and respond to bioterrorism and other infectious disease outbreaks. An example of a critical capacity under the laboratory capacity for biological agents focus area is to develop and implement a jurisdiction-wide program to provide rapid and effective laboratory services in support of the response to public health threats and emergencies. 
In November 2002, HHS released supplemental guidance for implementing the new National Smallpox Vaccination Program. These guidelines state that recipients are encouraged to use funds made available through the CDC cooperative agreements to plan and implement this program and should redirect the funding as necessary. HHS released the first 20 percent of the funds allocated in January 2002 to the states and municipalities within weeks of the announcement. HHS identified 17 “critical benchmarks” (14 for the CDC funding and 3 for the HRSA funding) that officials were required to address in their application plans. HHS used the critical benchmarks to screen application plans for approval before it released the remaining 80 percent of the CDC and HRSA funding. The benchmarks for the CDC program included such activities as designating an executive director of the state bioterrorism preparedness and response program, developing an interim plan to receive and manage items from the Strategic National Stockpile, and preparing a time line for the development of regional plans to respond to bioterrorism. In addition, CDC is allowing states to use this funding to address preparedness efforts between states and in regions that border a foreign country. The benchmarks for the HRSA program included development of a timeline for developing and implementing a regional hospital plan for dealing with a potential epidemic involving at least 500 patients. HHS requires progress reports from the states at approximately 6-month intervals to provide oversight of the CDC and HRSA programs and to determine future funding. The remaining funds that were allocated for state and local preparedness in January 2002 supported OER’s MMRS program. State and local officials reported varying levels of preparedness to respond to a bioterrorist attack. They recognized deficiencies in preparedness and were beginning to address them.
We found that the states and cities we visited were making greater progress in certain elements of preparedness than in others. Some elements, such as those involving coordination efforts and communication systems, were being addressed more readily, whereas others, such as infrastructure and workforce issues, were more resource-intensive and therefore more difficult to address. The level of preparedness varied across the cities, with jurisdictions that had multiple prior experiences with public health emergencies generally being more prepared than the other cities, which had little or no such experience prior to our site visits. The cities we visited generally made greater progress in coordination and communication preparedness than in other elements of preparedness. Coordination efforts where progress was made included participation by relevant government and private sector officials in meetings to discuss how to work together in an emergency and participation in joint training exercises. Communication efforts included the purchase and implementation of new communication systems and development of procedures for communicating with the public and the media. Despite these advances, deficiencies in coordination and communication remained. Most of the cities we visited had made efforts to improve coordination among the response organizations. Experience from public health emergencies, especially the terrorist attacks of September 11, 2001, and the subsequent anthrax incidents, provided momentum for local response organizations—including fire departments, emergency medical services, law enforcement, public health departments, emergency management agencies, and hospitals—to improve coordination. Organizations, such as hospitals, that previously were not substantially involved increased their participation in preparedness meetings and agreements. 
Further, most of the states we visited reported having established better links between the public health departments and the hospitals since the September 11, 2001, terrorist attacks and the subsequent anthrax incidents than had previously existed. For example, after September 11, 2001, a hospital in one of the cities reported that the public health department had given it a telephone number to reach public health officials 24 hours a day, 7 days a week. In many respects, the anthrax incidents in October 2001 were exercises in cooperation between the health care community and traditional first responders. Many cities were inundated with calls about suspicious packages and powders. In several of the cities we visited, public health officials reported working with police and fire officials to create a system to determine which specimens were most suspicious. These triage systems greatly reduced the number of costly full-emergency responses. For example, during the height of the public’s concern about anthrax, one city, which was receiving as many as 75 to 90 reports of a white powder per day, decided against sending out a complete hazardous materials unit for every report. Instead, it sent a team consisting of a fire official, a hazardous materials official, a police official, and a public health official; this team made an initial assessment of whether a full response was needed. Coordination improved not only horizontally, that is, across different entities within jurisdictions, but also vertically, that is, between local and state agencies. According to their progress reports, all of the states we visited used the 2002 federal funding in part to identify needs and coordinate and integrate information technology systems. In all of these states, emergency management communication systems were integrated both vertically between state and local agencies and horizontally between local government and hospitals.
Only one of these states reported in its progress report to HHS that it continued to have major difficulties in improving coordination across different governmental levels because its communication system was not capable of sending and receiving critical health information. In addition, we found that officials were beginning to address communication problems. For example, six of the seven cities we visited were examining how communication would take place in an emergency. Many cities had purchased communication systems that allow officials from different organizations to communicate with one another in real time. Officials in one locality told us that its fire and police departments had incompatible radio systems and, consequently, were unable to communicate directly. This locality intended to install a compatible radio system. It was also considering purchasing wireless communication and messaging devices because of their success in other jurisdictions on September 11, 2001. State officials reported that they were beginning to make progress in developing procedures for communication. Responding to the anthrax incidents revealed a number of communication issues. For example, state and local agency officials identified problems with how information about the anthrax incidents was given to the public. These problems included not always getting facts about anthrax out quickly, not explaining what was occurring, and releasing inconsistent messages. Officials in one city told us that they set up an advisory group of retired media personnel to help them examine how they could use the media to help convey their message. Following a chemical exercise, public health officials in the same city realized that better lines of communication were needed. In response, members of the core bioterrorism team were issued pagers so that they could be contacted more easily.
In addition, two states we visited reported to HHS that the outbreaks of West Nile virus in summer 2002 provided successful tests of their communication capabilities. In addition to these improvements, the state and local health agencies were working with CDC to build the Health Alert Network (HAN), an information and communication system. The nationwide HAN program has provided funding to establish infrastructure at the local level to improve the collection and transmission of information related to a bioterrorism incident as well as other emergency health events and disease surveillance. Goals of the HAN program include providing high-speed Internet connectivity, broadcast capacity for emergency communication, and distance-learning infrastructure for training. Despite these improvements, deficiencies in communication and coordination remained. For example, while four of the states we visited said in their progress reports that they had completed integrating all of their jurisdictions into HAN, two states had not yet achieved CDC’s goal to cover 90 percent of the state’s population. One of these states reported that, although it had developed a plan for emergency communication with the public, local needs were still being assessed. This state reported that coordination across multiple governmental levels was problematic and time-consuming, and progress in meeting goals for planning was slow. In addition, as of November 2002, only two of the states we visited reported that they had conducted preparedness exercises that encompassed all jurisdictions in the state. According to the states’ progress reports, all states we visited intended to conduct exercises on at least some portion of their various preparedness plans, such as the plan for receiving and distributing the Strategic National Stockpile, in 2003.
In contrast to the improvements made in coordination and communication, progress related to the response capacity of the workforce, the surveillance and laboratory systems, and hospitals generally lagged. Deficiencies in capacity often are not amenable to solution in the short term because they either require additional resources or take time to remedy. At the time of our site visits, shortages in personnel existed in state and local public health departments, laboratories, and hospitals and were difficult to remedy. Officials from state and local health departments told us that staffing shortages were a major concern. One official from a state health department said that local health departments in his state were able to handle the additional work generated by the anthrax incidents only by putting aside their normal daily workload. Local officials also stated that their normal daily workload suffered when staff were diverted from their usual responsibilities to work on bioterrorism response planning. Local officials recognized that diverting staff from their usual duties is appropriate in a time of crisis but were concerned about the impact on their other public health responsibilities over the longer term. Two of the states and cities that we visited were particularly concerned that they did not have enough epidemiologists to do the appropriate investigations in an emergency. One state department of public health we visited had lost approximately one-third of its staff because of budget cuts over the past decade. This department had been attempting to hire more epidemiologists. Barriers to finding and hiring epidemiologists included noncompetitive salaries and a general shortage of people with the necessary skills. Shortages in laboratory and hospital personnel were also cited. Officials in one city noted that they had difficulty filling and maintaining laboratory positions.
People who accepted the positions often left the health department for better-paying positions. Five of the states we visited reported shortages of hospital medical staff, including nurses and physicians, necessary to increase response capacity in an emergency. Increased funding for hiring staff cannot necessarily solve these shortages because for many types of positions, such as laboratorians, there are not enough trained individuals in the workforce. According to the Association of Public Health Laboratories, training laboratorians to provide them with the necessary skills will take time and require a strategy for building the needed workforce. In their progress reports, three states cited ongoing shortages of personnel that they were working to address. Two states reported that they planned to hire veterinarians to assist in their preparedness efforts. One of these two states also noted difficulties in recruiting personnel when there was no guarantee of funding beyond the current year, meaning that prospective employees may not be offered permanent positions. Another state, however, reported success in hiring epidemiologists. State and local officials for the cities we visited recognized and were attempting to address inadequacies in their surveillance systems and laboratory facilities. Local officials were concerned that their surveillance systems were inadequate to detect a bioterrorist event. Six of the cities we visited used a passive surveillance system to detect infectious disease outbreaks. However, passive systems may be inadequate to identify a rapidly spreading outbreak in its earliest and most manageable stage because, as officials in three states noted, there is chronic underreporting and a time lag between diagnosis of a condition and the health department’s receipt of the report. To improve disease surveillance, six of the states and two of the cities we visited were developing electronic surveillance systems.
In one city we visited, the public health department received clinical information electronically from existing hospital databases, which required no additional work by the hospitals. Several cities were also evaluating the use of nontraditional data sources, such as pharmacy sales, to conduct surveillance. Three of the cities we visited were attempting to improve their surveillance capabilities by incorporating active surveillance components into their systems. For example, one city asked six hospitals to participate in a type of active system in which the public health department obtains information from the hospitals and conducts ongoing analysis of the data to search for certain combinations of signs and symptoms. The city also had an active surveillance system for influenza. However, work to improve surveillance systems had proved challenging. For example, despite initiatives to develop active surveillance systems, the officials in one city considered event detection to be a weakness in their system, in part because they did not have authority to access hospital information systems. In addition, various local public health officials in other cities reported that they lacked the resources to sustain active surveillance. Officials from all of the states we visited reported problems with their public health laboratory systems and said that they needed to be upgraded. All states were planning to purchase the equipment necessary for rapidly identifying a biological agent. State and local officials in most of the areas that we visited told us that the public health laboratory systems in their states were stressed, in some cases severely, by the sudden and significant increases in workload during the anthrax incidents. During these incidents, the demand for laboratory testing was significant even in states where no anthrax was found and affected the ability of the laboratories to perform their routine public health functions.
Following the incidents, over 70,000 suspected anthrax samples were tested in laboratories across the country. Public health laboratories in some areas quickly ran out of space for testing and storing samples. State and local officials had to rely on laboratory assistance at the federal level, and CDC received over 6,000 anthrax-related samples and had to operate its anthrax-testing laboratory 24 hours a day, 7 days a week, and open an additional laboratory to test all the samples. Eighty-five percent of state and territorial public health laboratories reported that the need to perform bioterrorism testing during the anthrax incidents had a negative impact on their ability to do routine work, delaying testing for tuberculosis, sexually transmitted diseases, and other infectious diseases. Further, public health laboratories have minimal association with private laboratories (that is, laboratories that are associated with private hospitals or are independent) or sometimes lack ties to laboratories in other states that could serve as a backup to ensure timely testing of samples. One state we visited had one state public health laboratory, no backup laboratory, and no written agreements with neighboring states to provide support. A task force of the Association of Public Health Laboratories has written that a lack of close ties can lead to a lack of communication and a lack of coordination of laboratory testing, both of which are needed to support public health interventions. All states we visited recognized these problems and, in their progress reports to HHS, reported that they were using the funds to improve the Laboratory Response Network. According to their progress reports, officials in the states we visited were working on solutions to their laboratory problems.
States were examining various ways to manage peak loads, including training additional staff in the newest bioterrorism response methods, entering into agreements with other states to provide surge capacity, incorporating clinical laboratories into cooperative laboratory systems, and purchasing new equipment. One state was working to alleviate its laboratory problems by providing training on protocols for handling bioterrorist agents, upgrading two local public health laboratories to Biosafety Level 3 laboratories, and establishing agreements with other states to provide backup capacity. Another state reported that it was using the funding from CDC to increase the number of pathogens the state laboratory could diagnose. The state also reported that it has worked to identify laboratories in adjacent states that are capable of being reached within 3 hours over surface roads. In addition, all of the states reported that their laboratory response plans were revised to cover reporting and sharing laboratory results with local public health and law enforcement agencies. Federal, state, and local officials were concerned that hospitals might not have the capacity to accept and treat sudden, large increases in the number of patients, as might be seen in a bioterrorist attack. Hospital, state, and local officials reported that hospitals needed additional equipment and capital improvements—including medical stockpiles, personal protective equipment, decontamination facilities, quarantine and isolation facilities, and air handling and filtering equipment—to enhance preparedness. The resources that hospitals would require for responding to a bioterrorist attack with mass casualties are far greater than those needed for everyday operations.
Meeting these needs fully would be extremely difficult because bioterrorism preparedness is expensive and hospitals are reluctant to create capacity that is not needed on a routine basis and may never be utilized at a particular facility. Although hospitals may not be able to fully meet all preparedness needs, they can take action to increase their preparedness by developing plans for their internal emergency response operations, and some hospital officials reported taking these initial actions. For example, officials at one hospital we visited appointed a bioterrorism coordinator and developed plans for taking care of the families of hospital staff, transporting patients to the hospital, and communicating during an emergency. However, from its assessments of hospital capacity, one of the states we visited reported that only 11 percent of its hospitals could readily increase their capacity for treating patients with communicable diseases requiring isolation, such as smallpox. Another state reported that most of its hospitals have little or no capacity for isolating patients diagnosed with or being tested for communicable diseases. A third state was working with the state hospital association to provide every hospital in the state with portable decontamination units. Efforts have been made to assist hospitals in preparing for bioterrorism. For example, the hospital association in one city we visited was developing a set of recommendations, based on the American Hospital Association checklist, along with cost estimates, for health care facilities to improve their preparedness. 
The association’s recommendations included that each hospital have a 3-day supply of basic personal protective equipment (such as gloves, gowns, and shoe covers) on hand for staff, a 3-day supply of specified pharmaceuticals, emergency power, a loudspeaker or other mechanism to communicate with a large group of converging casualties outside the hospital entrance, and an external decontamination facility capable of handling 50 victims per hour. These guidelines give hospitals criteria by which they can measure their preparedness and, in turn, improve their internal emergency response operation plans. In their progress reports to HHS, all the states we visited discussed a number of activities they were undertaking with the HRSA funding to increase hospital preparedness. These included hiring state hospital bioterrorism program coordinators and medical directors, exploring the feasibility of coordinating hospitals’ bioterrorism emergency planning across states, and supplying selected hospitals with biohazard suits and decontamination systems. We found that the overall level of bioterrorism preparedness varied by city. In the cities we visited, we observed that those cities that had recurring experience with public health emergencies, including those resulting from natural disasters, or with preparation for National Security Special Events, such as political conventions, were generally more prepared than cities with little or no such experience. Cities that had dealt with multiple public health emergencies in the past might have been further along because they had learned which organizations and officials need to be involved in preparedness and response efforts and moved to include all pertinent parties in the efforts. Experience with natural disasters raised the awareness of local officials regarding the level of public health emergency preparedness in their cities and the kinds of preparedness problems they needed to address.
For example, in one city we visited, officials found that emergency operations center personnel became separated from one another during earthquakes and had trouble staying in contact. These problems made decision making difficult. The officials told us that the personnel needed to learn how to use their radio system more effectively. (See app. I for details concerning preparedness by city.) All the cities we visited had to respond to suspected anthrax incidents in fall 2001; however, each city found different deficiencies in its capabilities. The anthrax incidents presented challenges for jurisdictions across the country, not just in the communities where anthrax was found. Among the problems that surfaced during the anthrax incidents, for example, were several dealing with coordination across agencies and communication among departments and jurisdictions and with the public. A local official reported that there was no mechanism to coordinate the public information, medical recommendations, and epidemiologic assessments throughout the state and neighboring areas and that this created considerable confusion and frustration for the public and medical community. In addition, officials in several states became aware of different types of limitations in their state and local communication capabilities during the anthrax incidents. For example, in one rural state, which had no confirmed anthrax cases but numerous false alarms, the state public health department faxed messages containing critical information to hospitals throughout the state. Officials in the department realized that this one-way system was insufficient because they also needed to be able to receive communications rapidly. They were able to increase their communication capabilities by setting up a 24-hour toll-free telephone number staffed by officials, who could respond to questions from hospitals. 
In another state, public health laboratory officials found that it was difficult for many facilities to print files received from CDC because their Internet connections were inadequate. Ultimately, the state created CD-ROMs containing the protocols describing how to deal with suspected anthrax samples, and a state public health official drove more than 500 miles across the state to deliver them. One of the cities we visited, which had experienced a large natural disaster in the late 1990s, was in the early stages of bioterrorism preparedness. This city is in a predominantly rural state, which started receiving funds for establishing a HAN system for public health information in fiscal year 2002. There were five epidemiologists at the state level and none at the local level, so the city depended on the state to determine when a disease investigation was warranted. The state had a limited passive surveillance system, with plans for a more elaborate, active surveillance system. In contrast, another city we visited was much further along in bioterrorism preparedness. In addition to dealing with natural disasters and other public health emergencies, the city had also prepared for and hosted a National Security Special Event. The state had been receiving funding for HAN since 1999. Epidemiologists were employed at the state and local levels. The city had a passive surveillance system, and it also had an active surveillance system for influenza, which has symptoms similar to those of the early stages of diseases attributable to several likely bioterrorist agents, such as anthrax. Even the cities that were better prepared were not strong in all elements. For example, one city had successfully developed an integrated approach to preparedness in which multiple organizations, both governmental and nongovernmental, examined where terrorist attacks are likely to occur, how they could be mitigated, and what resources were necessary. 
City officials also reported that communications had been effective during public health emergencies and that the city had an active disease surveillance system. However, officials also reported deficiencies in laboratory capacity and said that hospitals had not received sufficient bioterrorism response training. Another one of the better-prepared cities was connected to HAN and the Epidemic Information Exchange (Epi-X), and all county emergency management agencies in the state were linked. However, the state did not have written agreements with its neighboring states for responding to an emergency, and a major hospital in the city we visited lacked sufficient equipment for a bioterrorism response. State and local jurisdictions and response organizations made progress in developing plans to improve their preparedness. They had begun to include bioterrorism in their agencies’ overall emergency operation plans, and preparing the application plans for HHS funding helped states focus their planning efforts. In addition, hospitals, which were beginning to be seen as part of a local response system, were starting to participate in local response planning. While progress was made in local planning, regional planning between states lagged. A regional response to a bioterrorist attack would potentially require the mutual participation of officials from neighboring states or, in several instances, a neighboring country, yet some states lacked such coordination with their neighboring states and country and had not participated in joint response planning. At the time of our site visits, although most of the cities and states we visited had emergency operation plans, many of these plans did not specifically address the unique requirements of response to a bioterrorist attack. However, many of the response organizations in these cities and states had begun to develop emergency operation plans that include bioterrorism response. 
Officials from all of these response organizations stated that planning for a bioterrorist incident is difficult because they do not know what it means to be prepared and therefore are not sure if their plans will be adequate. At the time of our site visits, all seven states were in the stage of “planning to plan” for bioterrorism. While all of these states had previously taken steps to assess the readiness levels of their localities, they continued to need further assessments. For example, most were doing some assessments of capacity, such as assessments of hospital capacity and equipment. Although some of these efforts were time-consuming because of the need to develop assessment tools, such as surveys, the information on needs and current status is essential for the states to be able to plan. Preparing the application plans for HHS helped states to identify problems in bioterrorism preparedness by requiring them to address specified preparedness focus areas. In the application process, states were required to assess their capabilities in the focus areas and discuss how they planned to address their deficiencies. For example, under the surveillance and epidemiologic capacity focus area in its application plan for CDC funding, one state we visited identified a lack of adequate staffing, expertise, and resources. Officials reported in the plan that the department of public health was developing regional medical epidemiology teams, each of which would include a part-time practicing physician and a full-time epidemiologist, with enough teams to cover all the regions in the state. These teams would establish ongoing relationships with area hospital infection control programs, emergency departments, and other health care providers. Another state reported in its HRSA application plan that it did not have the capability to track resources, supplies, and the distribution of patients at the regional level.
It planned to expand an existing electronic tracking system to track each hospital’s capacity, resources, and patient distribution on a real-time basis. At the time of our site visits, we found that hospitals were beginning to coordinate with other local response organizations and collaborate with each other in local planning efforts. Hospital officials in one city we visited told us that until September 11, 2001, hospitals were not seen as part of a response to a terrorist event but that the city had come to realize that the first responders to a bioterrorism incident could be a hospital’s medical staff. Officials from the state began to emphasize the need for a local approach to hospital preparedness. They said, however, that it was difficult to impress the importance of cooperation on hospitals because hospitals had not seen themselves as part of a local response system. The local government officials were asking them to create plans that integrated the city’s hospitals and addressed such issues as off-site triage of patients and off-site acute care. Government officials, health care association representatives, and hospital officials in many of the areas that we visited stated that hospitals had become more interested in these issues and more involved in planning efforts than prior to September 11, 2001. They noted that health care providers in hospitals gained an awareness of the seriousness of the threat of bioterrorism and began to ask for information, lectures, and presentations of their cities’ emergency plans. Hospital representatives, as well as state and local officials, told us that hospital personnel were more interested in attending training on biological agents and that hospitals had formed better connections with local public health departments in many areas. We also found that some hospitals were starting to collaborate with one another on planning efforts. 
Response organization officials were concerned about a lack of planning for regional coordination between states and with a neighboring country. As called for by the guidance for the cooperative agreements, all of the states we visited organized their planning on a regional basis, assigning local areas to particular regions for planning purposes. However, the state-defined regions encompassed areas within the state only. A concern for response organization officials was the lack of planning for regional coordination of the public health response to a bioterrorist attack between states and with a neighboring country. With regard to coordination efforts between states, a hospital official in one city we visited said that state lines presented a “real wall” for planning purposes. Hospital officials in one state reported that they had no agreements with other states to share physicians. However, one local official reported that he had been discussing border issues and had drafted mutual aid agreements for hospitals and emergency medical services. Public health officials from several states reported developing working relationships with officials from other states to provide backup laboratory capacity.

States varied with regard to the intensity of their coordination efforts with a neighboring country. Officials in one state told us that the state lacked the needed coordination with the foreign country that it borders, but they reported in the state’s CDC application plan that workforce plans and infectious disease surveillance and reporting are the two priorities for the state with the neighboring country. The emergency management officials in the city we visited in that state reported that the border guards knew and informally coordinated with one another. Officials in this state reported in the state’s CDC application plan that some of the state’s hospitals employed people from the foreign country and so hospital staffing could be problematic if borders were closed during an emergency. 
However, officials in another state that we visited reported good regional partnerships with the foreign country that it borders. In fact, the state officials noted that the needs of a metropolitan area in the neighboring country would be evaluated and integrated into the state plan. In addition, the state reported in its progress report that it was developing an agreement with the neighboring country to provide laboratory surge capacity.

State and local officials and hospital officials expressed concerns about the distribution and sustainability of federal bioterrorism preparedness funding, as well as about a lack of guidance on what it means to be prepared for a bioterrorism event. State and local officials we met with disagreed about whether federal funding for bioterrorism preparedness should flow through the state or go directly to the local jurisdictions. Hospital officials reported that federal funding from OER’s MMRS program in their cities had not always been shared with them in the past. In addition, state and local officials reported that sustainability in funding over several years would be beneficial to all jurisdictions. State and local officials requested more specific federal guidance on what constitutes adequate preparedness. State officials also requested more sharing of best practices to assist them in closing the remaining gaps in preparedness.

State and local officials expressed several concerns regarding the federal funding provided for state and local bioterrorism preparedness both before and after September 11, 2001. These concerns were related to the distribution and sustainability of these funds. State and local officials we met with disagreed about whether federal funding for bioterrorism preparedness should flow through the state or go directly to the local jurisdictions. 
Local officials suggested that some funding should be allocated directly to local governments because it would be more efficient: the state would not withhold a percentage for its own use. However, state officials told us that if funds went directly to the local level, it would be difficult for them to direct the funding to the areas of greatest need within the states. In addition, state officials reported that when money flows through the states they can control purchases of emergency response equipment to ensure compatibility across regions of the state. Progress reports to HHS from the seven states we visited showed great variability in the speed with which the states committed funds provided through the CDC cooperative agreements, in part because of the differing state requirements for distribution. Two of the states had obligated more than 70 percent of the funding they received from HHS as of fall 2002, while two other states had obligated only about 20 percent of their funds as of the same time, with the remaining three states obligating percentages between these figures. Some states reported that they needed to arrange for grants or take other actions before they could transfer any of the funds to local jurisdictions.

Hospital officials also raised concerns about the distribution of federal funding for preparedness. In a national survey, 62 percent of hospital officials said that a lack of awareness of federally sponsored preparedness programs was a factor in not participating in preparedness programs. In addition, hospital officials that we spoke with in two cities added that federal funding from OER’s MMRS program in their cities had not been shared with hospitals in the past. The HRSA program may help alleviate these problems. It has led to increased coordination among government agencies, which may lead to an increased awareness of the funding opportunity it provides. 
In addition, the HRSA guidance on funding under the cooperative agreement requires that approximately three-quarters of the funding be spent directly on or in hospitals, community health clinics, and other health care systems. HRSA also requires states to undertake certain initial state-level tasks that would not involve costs to the hospitals, including designating a hospital bioterrorism preparedness coordinator, establishing a statewide advisory committee, and conducting a needs assessment. In their progress reports to HHS, all states we visited reported that the HRSA funding was being used primarily to support such initial state-level activities, including conducting assessments, developing plans, and hiring state-level personnel. HHS recently stated that most, if not all, states have now determined how funding will be awarded to hospitals, community health clinics, and other health care systems.

During our site visits, state officials also expressed concerns in light of the budget shortfalls and cuts they were experiencing. Officials from one state expressed concern that the 2002 funding from HHS might be used to supplant state funding instead of supplementing it, because of general budgetary cutbacks in the state, although such use is expressly prohibited by the funding agreements. An official from another state told us that the funding that its state public health laboratory received in 2002 from CDC for bioterrorism preparedness was not enough to offset the general cuts in the state budget for the public health laboratory. We were not able to determine whether any of the state funds were supplanted by the HHS funding. The public health infrastructure depends on sustained and consistent investment, yet in the past the funding has been viewed as unsystematic. 
In fiscal year 2002, states were experiencing budget shortfalls (as a percentage of general fund revenues) that were worse than those that followed the recession of the early 1990s, and shortfalls in 2003 were expected to be even worse. The influx of federal funds for bioterrorism preparedness made it possible for jurisdictions to undertake new efforts in this area, at a time when other public health programs were experiencing cutbacks. State and local officials told us that sustained funding would be necessary to address one important need—hiring and retaining needed staff. They told us they would be reluctant to hire additional staff unless they were confident that the funding would be sustained and staff could be retained. These statements are consistent with the findings of the Advisory Panel to Assess Domestic Response Capabilities for Terrorism Involving Weapons of Mass Destruction, which recommended that federal support for state and local public health preparedness and infrastructure building be sustained at an annual rate of $1 billion for the next 5 years to have a material impact on state and local governments’ preparedness for a bioterrorist event. We have noted previously that federal, state, and local governments have a shared responsibility in preparing for terrorist attacks and other disasters. However, prior to the infusion of federal funds, few states were investing in their public health infrastructure.

Officials we spoke with at both the state and the local levels requested more federal guidance and sharing of best practices to assist them in closing the remaining gaps in preparedness. Officials from response organizations in every state we visited reported a lack of guidance from the federal government on what it means to be prepared for bioterrorism. In the past, CDC has made efforts to develop guidance for state and local public health officials on bioterrorism preparedness. 
For example, in its core capacity project of 2001, CDC developed criteria to provide guidance on developing the bioterrorism preparedness capacity of state and local public health systems. However, these criteria were broad and nonspecific. State and local officials told us they needed specific benchmarks (such as how large an area a response team should be responsible for) to indicate what they should be doing to be adequately prepared. Local officials were turning to state officials for guidance, and state officials wanted to be able to turn to the federal government. Response organizations have been hindered in their efforts to prepare for bioterrorism because they do not know what agents pose the most credible threat, which makes it difficult to know when they are prepared. There have been federal efforts to devise lists of threats, but as we reported, these efforts have been fragmented, as is evident in the different biological agent threat lists that were developed by federal departments and agencies. In addition, medical organizations have historically not been recipients of intelligence regarding threat information. The Institute of Medicine and the National Research Council have stated that this practice needs to be changed. The need for federal guidance has continued to be an issue as states have proceeded in their planning and preparedness activities using the HHS funding. For example, in their progress reports to HHS in late 2002, two of the states we visited reported that they were seeking guidance from HHS on assessing vulnerabilities for foodborne or waterborne diseases and preparedness steps they should take for these hazards. One of these states declared that it could not make further efforts on testing for waterborne or agricultural diseases until it received more guidance. States also reported needing guidance in such areas as using the CDC emergency notification systems. 
State and local officials were interested in receiving detailed guidance from HHS to be able to better assess their progress and develop realistic time frames. One state we visited wrote in its progress report that CDC’s development of pre-event guidelines for use of the vaccinia vaccine for smallpox would be crucial for providing consistent practices nationwide. It also wrote that it would be useful to have an approved method for evaluating laboratory response to ensure that minimum standards were being met. Two other states wrote that they would like CDC to provide guidance for developing emergency operation plans. CDC has begun to provide more detailed guidance in some areas. For example, it is developing standards for the National Electronic Disease Surveillance System, which serves as the foundation for many states’ bioterrorism information systems. Under this system, standards are being developed to ensure uniform data collection and electronic reporting practices across the nation. Another initiative that is providing guidance on communication is CDC’s Public Health Information Network. This network is intended to build on and integrate existing public health communication systems and will include public health data standards to ensure the compatibility of the communication systems used by the health care community and federal, state, and local public authorities. In addition, CDC has made efforts in developing new laboratory protocols. One state noted that CDC’s efforts have been of the highest standard, and the protocols received have been designed for easy implementation at the state level.

Officials at the state level also expressed a desire for more sharing of best practices. Officials stated that although each jurisdiction might need to adapt procedures to its own circumstances, time could be saved and needless duplication of effort avoided if there were better mechanisms for sharing strategies across jurisdictions. 
They contended that HHS was positioned to know about different strategies that states were pursuing. For example, one state wrote in its progress report that it would be useful for HHS to provide information on syndromic surveillance systems that were operational. In its progress report, another state wrote that it had requested the portions of other states’ application plans related to risk communication and health information dissemination. The state wanted to include its Native American population in preparedness planning and was looking for best practices on how to involve tribal governments in planning. Some officials particularly expressed a desire for increased information sharing of best practices among state and local jurisdictions on various types of training. Many jurisdictions were developing training programs to increase bioterrorism preparedness. One state official told us during our visit that his agency needed training material on handling incidents, but he did not want to duplicate others’ efforts by developing his own materials. In their progress reports, five of the seven states we visited indicated that they would like CDC’s help in obtaining training information. One state wrote that establishing national standards for training and training aids for laboratories would minimize the need for individual states or regions to develop their own materials. Another state requested assistance with Strategic National Stockpile and smallpox education and training materials, and a third state requested training videos or videos of tabletop exercises to study. One state suggested that it would be useful for CDC to organize an Internet site and teleconferences among states to facilitate information sharing. 
As concerns about bioterrorism and other public health emergencies, including newly emerging infectious diseases such as West Nile virus, have surfaced over the past few years, cities across the nation have been working to increase their preparedness for responding to such events. An essential first step for cities was to recognize some of the deficiencies that existed in their public health infrastructures and how these would affect their ability to respond to a bioterrorism event. Cities have recognized and begun to work on deficiencies in elements of coordination, communication, and capacity necessary for bioterrorism preparedness. Progress in addressing capacity issues has lagged behind progress in other areas, in part because finding solutions to deficiencies in capacity can be complicated by the magnitude of the resource needs. For example, the resources that hospitals would require for responding to a biological attack would be greater than those normally needed. Local authorities can shift resources between functions and plan for ways to expand capacity in an emergency. However, shifting resources between functions can cause serious problems if the emergency is an extended one and other important responsibilities are not being met. Needs for additional capacity for responding to bioterrorism emergencies must be balanced with preparedness for all types of emergencies and must not detract from meeting the everyday needs of cities for emergency care. Regional plans can help address capacity deficiencies by providing for the sharing across localities of resources that, while adequate for everyday needs, may be in short supply on a local level in an emergency. Our observations of state and local preparedness for bioterrorism in selected cities bring certain other needs into focus as well. 
First, there is not yet a consensus on what constitutes adequate preparedness for a public health emergency, including a bioterrorist incident, at the state and local levels. There have been some efforts to provide guidelines for hospital preparedness, but specific standards for state and local preparedness are lacking. Officials from state and local response organizations expressed a need for specific benchmarks from the federal government, which could lead to consistent standards across all states. This could also facilitate needed regional planning across state boundaries. Second, we noted several instances in which cities found solutions to deficiencies that they identified. For example, cities developed methods for triaging samples during the anthrax incidents. Federal mechanisms for sharing innovations and other resources, such as fact sheets on infectious diseases and training materials, could prevent states and cities from having to develop solutions to common problems individually. The federal government could take additional steps to assist these states and cities in efficiently and effectively increasing their preparedness.

To help state and local jurisdictions better prepare for a bioterrorist attack, we recommend that the Secretary of Health and Human Services, in consultation with the Secretary of Homeland Security, (1) develop specific benchmarks that define adequate preparedness for a bioterrorist attack and can be used by state and local jurisdictions to assess and guide their preparedness efforts and (2) develop a mechanism by which solutions to problems that have been used in one jurisdiction can be evaluated by HHS and, if appropriate, shared with other jurisdictions.

We provided a draft of this report to HHS and the Department of Homeland Security. HHS submitted written comments, which are reprinted in appendix III. 
HHS said the report provides an informative assessment of preparedness for bioterrorism and other public health emergencies at the state and local levels. HHS concurred with our recommendations. The liaison from the Department of Homeland Security provided oral comments noting the department’s concurrence with the draft report and the recommendations. In its comments, HHS stated that it is taking steps to address the concerns we identified. For example, the department noted that both CDC and HRSA will issue guidance that will emphasize coordination of planning on a regional level. HHS also stated that CDC and HRSA will be developing guidelines and templates to assist states in identifying specific benchmarks and that the Office of the Assistant Secretary for Public Health Emergency Preparedness will be leading an effort to create a repository of best practices. HHS noted that it has been a year since our site visits and that during that period both state and local health departments have made further strides in their efforts to achieve preparedness for bioterrorism and other public health emergencies. We noted in the draft report that we include information obtained from state officials several months after our site visits. As we also noted in the draft report, we recognize that changes continue to occur. However, many of the problems we identified will require sustained efforts, and HHS said that it is now taking steps that are intended to facilitate further progress. HHS also provided technical comments, which we incorporated where appropriate. We are sending copies of this report to the Secretary of Health and Human Services and the Secretary of Homeland Security, and other interested officials. We will also provide copies to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please call me at (202) 512-7119. 
Another contact and key contributors are listed in appendix IV. Table 1 provides comparisons across several elements of preparedness for each of the seven cities we visited. The purpose of this table is to provide additional context for the discussion in the report and some understanding of the strengths and weaknesses of each city in preparing for a bioterrorist attack and how these strengths and weaknesses vary among the cities. The information in this table was obtained from December 2001 through March 2002. The cities have continued to make changes to improve their bioterrorism preparedness; however, this table does not reflect those changes.

We visited seven cities selected to provide wide variation in geographic location, population size, and experience with natural disasters and large exercises. Recommendations from experts, including officials from the Department of Health and Human Services (HHS) Office of Emergency Response and the National Association of County and City Health Officials, were also considered in the selection of cities. We also visited each city’s state government. The cities visited are not identified in this report because of the sensitive nature of the issue. During the multiday site visits, which we conducted from December 2001 through March 2002, we interviewed officials from state and local public health departments, local emergency medical services, state and local emergency management agencies, local fire and law enforcement agencies, and hospitals and national public health care associations. We asked them about their activities related to preparing for and responding to bioterrorism, lessons learned from past natural disasters and the anthrax incidents in October 2001, past and current federal funding for helping state and local agencies prepare for bioterrorism, and gaps and weaknesses as well as strengths and successes in their readiness for bioterrorism. 
We reviewed copies of the bioterrorism preparedness plans states sent to HHS in spring 2002 for cooperative agreement funding from the Centers for Disease Control and Prevention (CDC) and the Health Resources and Services Administration (HRSA). In addition, to update our data, we obtained follow-up information from state and local officials and reviewed the 6-month progress reports on the CDC and HRSA cooperative agreements that were submitted to HHS in late 2002 from the relevant states, covering the period through October 31, 2002. Because our focus was on the public health and medical consequences of a bioterrorist event, we do not report on preparedness efforts funded by the Department of Justice and the Federal Emergency Management Agency in this study. The results of our visits cannot be generalized to the entire country. In addition, the hospitals we included in our site visits were chosen based on recommendations of local public health officials and hospital associations. This resulted in a mix of private and public hospitals, but because of the selection method, the results cannot be generalized to all hospitals in the areas we visited. We interviewed officials from HHS’s Office of the Assistant Secretary for Public Health Emergency Preparedness regarding its efforts to improve state and local preparedness for responding to a bioterrorist incident. We reviewed reports from the Advisory Panel to Assess Domestic Response Capabilities for Terrorism Involving Weapons of Mass Destruction and reports from several associations, including the American Hospital Association, the National Association of County and City Health Officials, and the American College of Emergency Physicians. We conducted interviews with representatives from several associations, including the American Hospital Association, the Association of State and Territorial Health Officials, and the National Governors Association. We also reviewed a report by the U.S. 
Conference of Mayors about local costs associated with bioterrorism preparedness. In addition, we examined the President’s budget request for bioterrorism preparedness for fiscal year 2003. Because of the events of the fall of 2001, and the subsequent federal preparedness funding, changes were occurring at the state and local levels with regard to bioterrorism preparedness during our site visits and subsequent data collection. Changes have continued to occur and this report may not reflect all these changes. We conducted our work from November 2001 through April 2003 in accordance with generally accepted government auditing standards. In addition to the contact named above, George Bogart, Barbara Chapman, Robert Copeland, Deborah Miller, and Roseanne Price made key contributions to this report.

Chemical and Biological Defense: Observations on DOD’s Risk Assessment of Defense Capabilities. GAO-03-137T. Washington, D.C.: October 1, 2002.

Anthrax Vaccine: GAO’s Survey of Guard and Reserve Pilots and Aircrew. GAO-02-445. Washington, D.C.: September 20, 2002.

Homeland Security: New Department Could Improve Coordination but Transferring Control of Certain Public Health Programs Raises Concerns. GAO-02-954T. Washington, D.C.: July 16, 2002.

Homeland Security: New Department Could Improve Biomedical R&D Coordination but May Disrupt Dual-Purpose Efforts. GAO-02-924T. Washington, D.C.: July 9, 2002.

Homeland Security: New Department Could Improve Coordination but May Complicate Priority Setting. GAO-02-893T. Washington, D.C.: June 28, 2002.

Homeland Security: New Department Could Improve Coordination but May Complicate Public Health Priority Setting. GAO-02-883T. Washington, D.C.: June 25, 2002.

Bioterrorism: The Centers for Disease Control and Prevention’s Role in Public Health Protection. GAO-02-235T. Washington, D.C.: November 15, 2001.

Bioterrorism: Review of Public Health Preparedness Programs. GAO-02-149T. Washington, D.C.: October 10, 2001. 
Bioterrorism: Public Health and Medical Preparedness. GAO-02-141T. Washington, D.C.: October 9, 2001.

Bioterrorism: Coordination and Preparedness. GAO-02-129T. Washington, D.C.: October 5, 2001.

Bioterrorism: Federal Research and Preparedness Activities. GAO-01-915. Washington, D.C.: September 28, 2001.

Chemical and Biological Defense: Improved Risk Assessment and Inventory Management Are Needed. GAO-01-667. Washington, D.C.: September 28, 2001.

West Nile Virus Outbreak: Lessons for Public Health Preparedness. GAO/HEHS-00-180. Washington, D.C.: September 11, 2000.

Combating Terrorism: Need for Comprehensive Threat and Risk Assessments of Chemical and Biological Attacks. GAO/NSIAD-99-163. Washington, D.C.: September 14, 1999.

Chemical and Biological Defense: Program Planning and Evaluation Should Follow Results Act Framework. GAO/NSIAD-99-159. Washington, D.C.: August 16, 1999.

Combating Terrorism: Observations on Biological Terrorism and Public Health Initiatives. GAO/T-NSIAD-99-112. Washington, D.C.: March 16, 1999. 
Each year, the federal government, through the Internal Revenue Service, collects tax revenue to fund government operations. The federal government relies on IRS to collect the proper amount of tax revenue at the least cost to the public. In fiscal year 1997, IRS collected over $1.6 trillion used to finance various government programs and activities. These receipts represent payments by individuals, businesses, corporations, estates, and other types of taxpayers primarily for amounts owed as a result of wages, income, employment, sales, and consumption. To a large extent, the annual receipts collected by IRS represent the amounts taxpayers owe for the given period. However, not all taxpayers pay the amounts they owe the federal government. Some simply do not provide payments on their tax liability when they file their tax returns. Others underreport, either mistakenly or deliberately, the amounts they owe the government. Still others do not report the amounts they owe. While some taxpayers eventually pay some or all of the amounts due, others do not. Also, those that do pay may pay over an extended period. This has resulted in a significant build-up in the amount of unpaid taxes due the federal government because, in addition to taxes owed, taxpayers also become liable for penalty and interest charges that continue to accrue over time until the tax, plus accrued penalty and interest charges, have either been paid in full or the statutory period of collection has expired. As of September 30, 1997, information accompanying IRS’ fiscal year 1997 financial statements reflected a total balance of $214 billion in unpaid taxes, penalties, and interest. These amounts are referred to as unpaid assessments. IRS identifies unpaid assessments through a number of means. Most are identified through the filing of tax returns by the taxpayer. 
In cases where the taxpayer files a return that reflects an amount or tax liability owed the federal government but does not provide payment or provides only partial payment, the unpaid assessment represents the difference between the tax liability as reflected on the return and the amount actually paid by the taxpayer. IRS refers to these as self-assessments because the amount of the delinquent tax is identified solely through information provided by the taxpayer. IRS also identifies unpaid assessments through its enforcement programs. Such programs include IRS’ underreporter program, where information such as wages, interest, and dividends contained on the tax return is compared to other third party-supplied information, such as wage and earnings statements and annual interest statements. Any differences identified through this process can result in the identification of additional tax liabilities or assessments owed by the taxpayer. Also, IRS tax examinations and audits can identify additional taxes owed the government. IRS’ substitute for return program, where IRS constructs tax returns through the use of third party information and prior taxpayer history for taxpayers who have filed returns in the past but have not filed for the given period, is another tool used by IRS to attempt to identify amounts that are owed the government. Through these various enforcement programs, IRS attempts to close what is referred to as the tax gap, which is the difference between taxes actually collected and the amount that is legally due under the Internal Revenue Code. The tax gap can be further subdivided into the “compliance gap” and the “collection gap.” The compliance gap, which is outside the scope of this report, represents the difference between taxes that should actually be due and those that have been identified by IRS, either through self-assessments or through assessments resulting from IRS enforcement programs. 
It is this compliance gap that IRS’ enforcement programs attempt to close, although considerable amounts of taxes due on legal and illegal income remain unassessed or underassessed each year. The collection gap represents the difference between the amounts that have been identified as being due through assessments and the amounts that will ultimately be collected on these assessments. This collection gap, which is the subject of this report, represents the amount of IRS’ unpaid assessments that is not collectible. As with a commercial lender’s loan portfolio, IRS’ ability to collect amounts owed is constrained to a great extent by the financial condition of the taxpayer. However, unlike a commercial lender, who can review the financial condition and viability of a prospective borrower prior to extending a loan, IRS does not choose who owes the government taxes. Taxpayers who owe delinquent taxes generally do not have good credit, reliable incomes, or significant assets and in many instances are corporations that have gone out of business. Consequently, IRS cannot manage risk in a manner similar to commercial lenders. This makes closing the collection gap significantly more problematic for IRS than for a commercial lender. The balance of unpaid assessments as of September 30, 1997, consists of the types of taxes that IRS collects, the majority of which result from individual income taxes, Social Security and Hospital Insurance taxes, and corporate income taxes. Payments for these taxes may be made on a periodic basis or, for individuals, by their employers through withholding from their wages. The major types of taxes in IRS’ unpaid assessments balance are: individual income taxes and self-employment taxes, payroll taxes, and corporate income taxes, which are generally taxes on business profits. 
Other types of taxes in IRS’ balance of unpaid assessments include (1) unemployment taxes, (2) excise taxes, such as fuel, communications, air transportation, sporting goods, alcohol, and environmental taxes, and (3) estate taxes. Not all unpaid assessments are considered accounts or taxes receivable. Federal accounting standards provide criteria for distinguishing which unpaid assessments constitute taxes receivable. IRS has had considerable difficulty properly distinguishing and reporting taxes receivable in its financial statements because its systems were not designed to generate information for use in preparing financial statements in accordance with these standards. Fiscal year 1997 was the first time IRS was able to successfully prepare reliable financial statements covering its tax collection activities. However, this required the use of special programming to extract information from IRS’ master files—its only detailed database of taxpayer information—for use in preparing the financial statements as its systems cannot readily produce this information. Also, this approach still required significant manual intervention and, in the end, adjustments totaling tens of billions of dollars, principally to correct significant misclassifications and duplication of unpaid assessments. As part of our audit of IRS’ fiscal year 1997 Custodial Financial Statements, we reviewed IRS’ unpaid assessments using statistical sampling techniques. Our objectives for the unpaid assessments segment of that audit were to determine (1) whether IRS had properly classified its balance of unpaid assessments between taxes receivable, compliance assessments, and write-offs, (2) whether the balances for taxes receivable, compliance assessments, and write-offs were accurate, and (3) in conjunction with IRS, an estimate of the amount IRS could reasonably expect to collect on its balance of taxes receivable. See appendix I for the scope and methodology used to accomplish these objectives. 
The objective of this report is to provide detailed information on the composition and collectibility of IRS’ September 30, 1997, balance of unpaid assessments based on the work we performed as part of our audit of IRS’ fiscal year 1997 custodial financial statements. It is important to note that, in performing our work, we did not assess the effectiveness of IRS’ enforcement and collection programs, nor did we attempt to address the compliance gap component of the tax gap. Also, we did not specifically analyze the impact provisions of the IRS Restructuring and Reform Act of 1998 may have on the future composition and collectibility of IRS’ unpaid assessments. We conducted our work from August 1997 through February 1998 in accordance with generally accepted government auditing standards. In commenting on a draft of this report, IRS provided preliminary oral comments, which we have incorporated where appropriate. We requested written comments on a draft of this report from the Commissioner of Internal Revenue. The Deputy Commissioner provided us with written comments, which are discussed in the “Agency Comments” section and are reprinted in appendix II. Taxes receivable are one category of unpaid assessments, and they comprised less than half the balance of IRS’ unpaid assessments as of September 30, 1997. Under federal accounting standards, unpaid assessments fall into three categories: taxes receivable, compliance assessments, and write-offs. Taxes receivable are taxes and associated penalties and interest due for which IRS can support the existence of a receivable through taxpayer agreement, such as the filing of a tax return without sufficient payment, or a court ruling favorable to IRS. The key distinction between taxes receivable and compliance assessments is the acknowledgement by the taxpayer or a court that the taxpayer owes money to the federal government. 
Compliance assessments are unpaid assessments in which neither the taxpayer nor a court has affirmed that the taxpayer owes money to the federal government. For example, an assessment resulting from an IRS audit or examination in which the taxpayer does not agree with the results of the audit or examination is a compliance assessment but is not considered a receivable under federal accounting standards. Write-offs are unpaid assessments for which IRS does not expect further collections due to factors such as the taxpayer’s bankruptcy, insolvency, or death. Write-offs may at one time have been taxes receivable, but the absence of any future collection potential prevents them from being considered receivables under federal accounting standards. Although compliance assessments and write-offs are not considered receivables under federal accounting standards, they represent legally enforceable claims of IRS—acting on behalf of the federal government—against taxpayers. There is, however, a clear distinction between these categories from the standpoint of assessing what they represent with respect to future cash flow to the federal government. Our review of these categories of unpaid assessments as part of our fiscal year 1997 Custodial Financial Statement audit confirmed that there are significant differences in their collection potential, which clearly supports the usefulness of the distinctions between unpaid assessments required under federal accounting standards. IRS’ fiscal year 1997 Custodial Financial Statements for the first time presented reliable information on the components of IRS’ balance of unpaid assessments. The Statement of Custodial Assets and Liabilities appropriately reflected that portion of IRS’ unpaid assessments that represented taxes receivable that were estimated to be collectible. In information accompanying the financial statements, IRS separated the September 30, 1997, balance of unpaid assessments into the three categories. 
This categorization, along with an understanding of what these categories represent, allows the users of the financial statements to gain a useful perspective on the portion of the unpaid assessments balance that represents viable future cash flow for the federal government. Figure 1 presents this categorization.

[Figure 1 labels include: Taxes Receivable - Uncollectible ($62); Compliance Assessments ($48)]

As reflected in figure 1 and as reported in the accompanying supplemental information to IRS’ fiscal year 1997 Custodial Financial Statements, IRS’ balance of unpaid assessments as of September 30, 1997, totaled about $214 billion. It is important to note that IRS’ systems reflected unpaid assessments totaling about $236 billion, which is $22 billion more than the final reported balance. During our audit, we found that most of this $22 billion represented amounts recorded as assessments multiple times in IRS’ systems. These amounts primarily represented “trust fund recovery penalties.” Such penalties can result when a business does not forward payroll taxes to the government. Each officer and director is individually liable for the amounts withheld from employees, provided the officers or directors were found willful and responsible for the nonpayment of these taxes. Consequently, IRS may record assessments against several individuals, each for the total amount withheld from employees, in an effort to collect the total payroll tax liability of the business. Such recording of multiple assessments is necessary for enforcement tracking purposes. However, IRS cannot collect from these multiple individuals and the business more than the total payroll tax liability owed. Consequently, by counting each of the individual and business assessments owed, IRS’ systems distort the balance of amounts that IRS has authority to collect. Also, systems limitations lead to frequent erroneous balances and have led to collection of trust fund recovery penalties that had already been paid. 
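The double-counting effect described above can be sketched in a few lines. This is a minimal illustration with a hypothetical $100,000 liability, not a model of IRS’ actual systems:

```python
# Illustrative sketch (hypothetical figures): the same withheld payroll tax
# liability may be assessed against the business and against each willful
# and responsible officer, so summing every recorded assessment overstates
# the amount IRS actually has authority to collect.

def recorded_balance(liabilities):
    """Naive sum over every recorded assessment (business plus each officer)."""
    return sum(amount * parties for amount, parties in liabilities)

def collectible_balance(liabilities):
    """IRS may collect no more than the underlying amount owed per liability,
    no matter how many parties the assessment is recorded against."""
    return sum(amount for amount, _parties in liabilities)

# One $100,000 payroll tax liability, recorded against the business and
# two responsible officers (three assessments in the systems).
liabilities = [(100_000, 3)]

print(recorded_balance(liabilities))     # 300000 as reflected in the systems
print(collectible_balance(liabilities))  # 100000 actually collectible
```

The gap between the two totals is the analogue of the $22 billion of duplicated assessments removed to reach the $214 billion reported balance.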
This issue will be discussed further in a subsequent report on internal control issues identified in our fiscal year 1997 financial audit. It is also important to note that these systems limitations result in IRS having to resort to unconventional means to obtain information for use in preparing its financial statements. In particular, the lack of an adequate general ledger and a subsidiary ledger for taxes receivable results in IRS running special computer programs against the detailed taxpayer accounts in its master files to extract information on unpaid assessments and attempt to classify this information into the three unpaid assessment categories defined by federal accounting standards. The use of this process led to significant misclassifications of unpaid assessment amounts within these three categories. The following table shows the extent of misclassified items we found, by category of unpaid assessments, in our statistical sample of 730 items. In total, the effect of these misclassifications was significant and resulted in reclassification of tens of billions of dollars among the three categories of unpaid assessments to arrive at reliable fiscal year-end balances. This issue is discussed more fully in our report on internal control issues identified in our fiscal year 1997 financial audit. Of IRS’ $214 billion in unpaid assessments as of September 30, 1997, $76 billion were write-offs. Consequently, 36 percent of IRS’ balance of unpaid assessments consisted of amounts for which there is virtually no hope of collection. Write-offs consist largely of amounts owed for corporate income taxes and payroll taxes by businesses or corporations that have subsequently become bankrupt or defunct. For example, over $24 billion of unpaid assessments classified as write-offs, or 32 percent, related to corporate income taxes due from failed financial institutions resolved by the FDIC and the former RTC. 
In our statistical sample of 730 unpaid assessments, 197 were deemed wholly or partially write-offs. Of these 197 write-offs, 123 (62 percent) consisted of amounts owed from defunct corporations such as failed financial institutions, and another 41 (21 percent) consisted of amounts owed by other bankrupt corporations or businesses. Most of the remaining 33 write-off cases (17 percent) included amounts due from taxpayers who were deceased, whom IRS was unable to locate, or who had no identifiable assets or other means of repaying the amounts owed. Write-offs also consist primarily of older amounts owed. Of the 197 items in our sample that were ultimately determined to be wholly or partially write-offs, about 90 percent of the total amounts owed were over 6 years old. As we discuss later in this report, age is an indicator of the extent to which unpaid taxes are likely to be collected. Also, a significant portion of the total amounts classified as write-offs was composed of penalties and interest that have accrued, and continue to accrue, on the delinquent tax assessment balance. Of the 197 items in our sample that were ultimately classified wholly or partially as write-offs, 19 percent of the total outstanding amounts owed consisted of penalties and 45 percent of the total amounts owed consisted of interest. IRS is required to maintain unpaid assessment accounts on its records until the statutory period for collecting taxes, 10 years, has expired. During this period, IRS must continue to accrue interest and penalties on the outstanding amounts owed regardless of whether IRS concludes that the delinquent taxes owed will ever be collected. For example, IRS continues to accrue interest and penalties on hundreds of failed financial institutions resolved by FDIC and the former RTC, despite the fact that these were insolvent institutions with no viable means of repaying their delinquent taxes. 
In one case, over 60 percent of the $1 billion balance of amounts owed by the failed financial institution consisted solely of accrued interest and penalties. Consequently, with respect to write-offs, interest and penalties continue to be accrued against the delinquent taxes owed and thus continue to increase the total outstanding balance, despite the fact that there is virtually no prospect for collection. Forty-eight billion dollars of IRS’ $214 billion balance of unpaid assessments as of September 30, 1997, about 22 percent, consisted of compliance assessments. As discussed previously, the key distinction between unpaid assessments classified as compliance assessments and those considered taxes receivable is the lack of acknowledgement either by the taxpayer or by a court that delinquent taxes are owed. Many of these unagreed assessments resulted from IRS’ various compliance efforts, such as its examinations or audits and its various computer matching programs, in which IRS uses third-party information to identify potential underreporting of tax liabilities. Compliance assessments are generally comprised of individual and business income tax liabilities and payroll tax liabilities. In our sample of 730 unpaid assessments, 90 were ultimately classified wholly or partially as compliance assessments. Of these 90 compliance assessments, 60 (67 percent) consisted of amounts owed for individual income taxes, 25 (28 percent) consisted of amounts owed for business income taxes and payroll taxes, and 5 (5 percent) consisted of other tax types, such as estate taxes, taxes on transferor of property to a foreign entity, and other miscellaneous taxes. While compliance assessments have some future collection potential, the lack of taxpayer or court agreement as to the amounts identified by IRS as owed reduces the likelihood of IRS collecting these amounts. 
As a category of unpaid assessments, compliance assessments have significantly less likelihood of collection than those unpaid assessments classified as taxes receivable. Based on our sample, we found that taxpayers who do not agree that they owe IRS usually do not make payments. Specifically, we noted less than $75,000 in collections since 1995 on the $2.6 billion balance of amounts owed for the 90 unpaid assessment sample items that were ultimately wholly or partially classified as compliance assessments. It should be noted that, although compliance assessments are not likely to generate significant revenue, IRS will generally pursue collection on them (and on unpaid assessments classified as uncollectible taxes receivable, discussed later in this report) to encourage compliant taxpayers to continue to be compliant, and noncompliant taxpayers to become compliant, with respect to reporting and paying their tax liabilities. Like write-offs, a significant portion of the total amounts classified as compliance assessments consists of penalties and interest. Of the 90 items in our sample that were ultimately classified wholly or partially as compliance assessments, about 10 percent of the total outstanding amounts owed consisted of penalties, and about 72 percent of the total amounts owed consisted of interest. As noted in the discussion of write-offs, IRS’ requirement to continue to accrue penalties and interest through the statutory collection period contributes to the higher and increasing percentage of penalties and interest to the total outstanding amounts owed. It also contributes to an increasing balance that is unlikely to generate significant revenues for the federal government. Collectively, write-offs and compliance assessments totaled $124 billion, which was 58 percent of IRS’ balance of unpaid assessments as of September 30, 1997. 
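Taken together, those sampled shares imply that the original tax makes up well under a fifth of the compliance-assessment balances. A quick check of the arithmetic, using the report’s approximate percentages:

```python
# Composition of the sampled compliance-assessment balances, per the report:
penalty_share = 0.10   # about 10 percent penalties
interest_share = 0.72  # about 72 percent interest

# The remainder is the original tax assessed.
tax_share = 1.0 - penalty_share - interest_share
print(f"original tax: {tax_share:.0%}")  # roughly 18% of the outstanding balance
```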
This represents a significant portion of IRS’ unpaid assessments balance for which collection, based on our audit work and IRS’ financial statements, is highly unlikely. The remaining balance of IRS’ unpaid assessments as of September 30, 1997, $90 billion (42 percent), represents amounts that are considered to be taxes receivable under federal accounting standards. These amounts meet the definition of taxes receivable because they represent amounts for which IRS has obtained concurrence, either by the taxpayer or a court, that the amounts are in fact owed to the federal government. These unpaid assessments thus constitute the most probable category of unpaid assessments from which there is potential for collection of tax revenues. Of the 730 items in our statistical sample, 465 items were ultimately classified wholly or partially as taxes receivable. Of these items, only 193 were determined by IRS and us to be fully or partially collectible. The other 272 items were determined by IRS and us to be uncollectible as of September 30, 1997. Based on a projection of these items to the total population, only $28 billion, about 31 percent of the balance of unpaid assessments classified as taxes receivable, was estimated to be collectible. In contrast, $62 billion, 69 percent of the balance of taxes receivable, was estimated to be uncollectible. Consequently, of the $214 billion balance of IRS’ total unpaid assessments at September 30, 1997, only 13 percent was estimated to be collectible. As noted above, 193 sample items that were ultimately determined to be taxes receivable had at least some future collection potential based on available information. 
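The balance figures discussed above can be reconciled as follows (amounts in billions of dollars, as reported):

```python
# Reported components of the September 30, 1997, unpaid assessments balance
# (in billions of dollars).
write_offs = 76
compliance_assessments = 48
receivable_collectible = 28
receivable_uncollectible = 62

taxes_receivable = receivable_collectible + receivable_uncollectible
total_unpaid_assessments = write_offs + compliance_assessments + taxes_receivable

print(taxes_receivable)          # 90
print(total_unpaid_assessments)  # 214

# Shares cited in the report, rounded to whole percents.
print(round(100 * receivable_collectible / taxes_receivable))          # 31, of receivables
print(round(100 * receivable_collectible / total_unpaid_assessments))  # 13, of all unpaid assessments
```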
In general, these items consisted of taxes owed where IRS was receiving at least some payments from the taxpayer, where the taxpayer had entered into agreements with IRS to repay all or some of the amounts owed and appeared to have the resources to comply with these agreements, or where IRS had identified other means of obtaining full or partial payment. Figure 2 shows the composition of the 193 items that we determined were fully or partially collectible. The following paragraphs detail each of these categories, discussing the primary consideration for determining that all or some portion of each item’s balance was collectible:

Sixty-one items consisted of amounts owed where individuals or businesses had entered into installment agreements to repay some or all of the delinquent taxes and associated penalties and interest. It is important to note that the existence of an installment agreement alone was not a sufficient basis from which to conclude that some or all of the outstanding balance of a given item was collectible. We and IRS accepted installment agreements as the basis for collectibility only if they were supported by evidence of a regular stream of payments. On the other hand, if IRS was not receiving payments in accordance with the terms of the installment agreement, we and IRS generally considered the installment agreement to be in default and thus did not use it as a basis for determining collectibility.

Thirty-six items were determined to have full or partial collectibility based on payments IRS received subsequent to the sample selection. In some instances, these payments were sufficient to fully pay the outstanding balance owed. In these cases, because of the certainty of the payment, the item was classified as fully collectible.

Twenty-eight items involved amounts owed by taxpayers with a history of compliance, including 10 large established corporations.

Eighteen items involved installment agreements between IRS and taxpayer estates. These estate cases are distinguishable from the other installment agreement cases involving individuals in that these items are not considered delinquent and are generally fully collectible due to the executors’ fiduciary responsibility to manage the affairs of the estate and evidence of estate assets sufficient to satisfy the tax liability. Specifically, in certain cases the Internal Revenue Code allows estates to enter into 15-year installment agreements to pay their taxes. These agreements are allowed so that estates do not have to sell family businesses or other nonliquid assets to satisfy tax liabilities.

Seventeen items involved amounts owed by taxpayers with a history of allowing IRS to keep overpayments from other tax periods to pay off some or all of the amounts owed. The IRS refers to such cases as refund offsets, where refunds that would normally be paid to the taxpayer are instead kept by IRS and used to reduce the taxpayer’s liability from another tax period.

Twelve items involved taxpayers who were in the process of entering into either an installment agreement or an offer-in-compromise to pay off some or all of the outstanding amounts owed, and where there was evidence, either through good faith payments or other financial resources, that the taxpayer would be able to comply with the agreement or offer.

Eight items involved situations in which IRS was in the process of levying taxpayer assets for amounts owed.

Seven items involved taxpayers in bankruptcy. Estimates of collectibility were based on anticipated payments from the bankruptcy proceedings and evidence that assets available were sufficient to make payments.

Five items involved taxpayers who had submitted, and IRS had accepted, offers-in-compromise to satisfy the outstanding amounts owed. In these cases, the payments had not yet been made but the taxpayer had the financial resources to pay off the compromised amount owed.

One item involved IRS seizing certain assets of the taxpayer to satisfy the outstanding amounts owed, but IRS had not yet liquidated the assets.

Most of the items where we identified the likelihood of full or partial collection of the outstanding amounts owed tended to be more recent balances and were typically amounts owed within the last 4 tax years. The more recent age of the items limited the extent to which the original tax assessment owed was compounded by accruals of interest and penalties, in contrast to write-offs and compliance assessments, as discussed earlier. As discussed earlier, 272 items in our sample that were ultimately classified wholly or partially as taxes receivable were assessed by IRS and us as being uncollectible based on available information. These items consisted of taxes owed where the taxpayer, for a variety of reasons, was deemed unwilling or unable to pay the amounts owed. Figure 3 shows the composition of the 272 items that we determined were uncollectible. A discussion of each of these categories and the primary consideration for determining that all or some portion of each item’s balance was not collectible follows:

Sixty-seven items involved hardship cases, in which IRS determined that the taxpayer was unable to pay due to insufficient income and assets. IRS will follow up if the taxpayer subsequently reports a certain level of income.

Forty-four items involved the portion of unpaid payroll taxes due from defunct or bankrupt businesses that were assessed as trust fund recovery penalties against the businesses’ officers or directors who were found willful and responsible for the nonpayment of withheld payroll taxes. In these cases, we saw no evidence of an ability or willingness on the part of these officers or directors to pay some or all of the amounts owed.

Thirty-nine items involved taxpayers in bankruptcy proceedings (the court had not yet determined whether any amount would be paid to IRS for delinquent taxes) or other defunct corporations. In these cases, we found no evidence of either ability or willingness on the part of the taxpayers to pay some or all of the amounts owed.

Thirty-one items involved cases where IRS’ collection source (levy, installment, etc.) would be applied to prior outstanding tax periods and was not sufficient to cover the taxes owed for our sample items.

Twenty-three items involved cases where the taxpayer defaulted on an installment agreement or offer-in-compromise and IRS records did not identify alternative collection sources.

Eighteen items involved cases where individuals or businesses had amounts due from multiple tax periods and in recent tax periods had stopped filing tax returns.

Sixteen items involved assessments that resulted from the discovery of illegal acts, including drug trafficking, embezzlement, prostitution, international arms dealing, and real estate fraud. These were generally high-dollar cases related to criminal prosecutions. In all cases, we saw little or no evidence of assets to satisfy the assessments. Income from illegal acts is taxable, and multiple penalties generally apply.

Twelve items related to taxpayers who had no payment history (no voluntary payments, levies, refund offsets, etc.).

Eight items involved amounts owed by individuals whom, we determined through a review of IRS records and discussions, IRS was unable to locate or contact.

Eight items involved cases where IRS could not provide sufficient documentation to support the existence of collection sources identified in the collection files.

Four items related to unemployed taxpayers. In these cases, IRS records did not identify any potential sources of collection.

Two items related to taxpayers involved in litigation with IRS, cases in which IRS expects no recovery. 
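The category counts above can be cross-checked against the sample totals. The short labels below are paraphrases of the categories described in the text:

```python
# Counts of sampled taxes-receivable items by the primary basis for the
# collectibility determination, as listed in the report.
collectible = {
    "installment agreements with regular payments": 61,
    "payments received after sample selection": 36,
    "history of compliance (incl. large corporations)": 28,
    "estate installment agreements": 18,
    "refund offsets": 17,
    "agreements or offers in process": 12,
    "levies in process": 8,
    "bankruptcy payments anticipated": 7,
    "accepted offers-in-compromise": 5,
    "assets seized, not yet liquidated": 1,
}
uncollectible = {
    "hardship": 67,
    "trust fund recovery penalties": 44,
    "bankruptcy or defunct corporations": 39,
    "collection source applied to prior periods": 31,
    "defaulted agreements or offers": 23,
    "multiple periods, stopped filing": 18,
    "illegal-income assessments": 16,
    "no payment history": 12,
    "taxpayer could not be located": 8,
    "collection sources undocumented": 8,
    "unemployed taxpayers": 4,
    "litigation, no expected recovery": 2,
}

print(sum(collectible.values()))    # 193 fully or partially collectible items
print(sum(uncollectible.values()))  # 272 uncollectible items
print(sum(collectible.values()) + sum(uncollectible.values()))  # 465 receivable items
```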
The age of the items is also an indicator of the extent to which the outstanding amounts owed are not likely to be collected. In contrast to those taxes receivable sample items where we identified the likelihood of full or partial collection, the items where we identified no reasonable expected collection typically were older items, with the majority of these cases predating the 1990 tax year. In fact, as illustrated in figure 4, for the 465 items in our sample that were ultimately classified wholly or partially as taxes receivable, the percentage of items where we identified the likelihood of full or partial collection declined dramatically as the age of the items increased. The inability or unwillingness of taxpayers to pay the delinquent taxes they owe results in the overall balance of IRS’ unpaid assessments continuing to age and also results in the continued accrual of significant amounts of interest and penalties. This contributes to the high loss rate reflected in IRS’ allowance for doubtful accounts pertaining to its taxes receivable, as reported in its fiscal year 1997 custodial financial statements. Another factor that presents difficulties for IRS in its effort to collect amounts owed is the degree to which some taxpayers repeatedly fail to pay taxes year after year. For the majority of the 730 items we reviewed, the taxpayers actually owed more than just the unpaid assessment we examined. This was particularly applicable for officers and directors against whom trust fund recovery penalties were assessed. Of the 730 unpaid assessment sample items, 83 were unpaid corporate payroll tax assessments related to trust fund recovery penalties assessed against officers and directors of businesses who were found willful and responsible for the nonpayment of withheld payroll taxes. In many of these 83 items, the same officer was responsible for the nonpayment of withheld taxes over multiple years. 
Additionally, for 17 of these 83 items, the same individual was responsible for nonpayment of withheld taxes at more than one company. In one of these instances, the same individual was responsible for the nonpayment of eight tax periods for taxes withheld from his employees at three different businesses, and in another case, one individual was responsible for nonpayment of withheld payroll taxes at five separate businesses. In only 9 of the 83 unpaid payroll tax assessment items in our sample related to trust fund recovery penalties did IRS and we determine that recovery of at least some of the amounts owed was likely. The remainder of these items was determined to be uncollectible. We will be examining issues associated with trust fund recovery penalties further in conjunction with our audit of IRS’ fiscal year 1998 financial statements. Because IRS continues to accrue penalties and interest through the statutory collection period of a delinquent tax assessment, regardless of the likelihood of collecting even the original tax assessment owed, these accruals have, over time, contributed substantially to the buildup of IRS’ balance of unpaid assessments. According to IRS records, $136 billion, over 60 percent of IRS’ $214 billion balance of unpaid assessments as of September 30, 1997, consisted of interest and penalties. Figure 5 breaks down the balance of unpaid assessments between the original tax assessment and the accrued interest and penalties. While IRS records show that it accrued tens of billions of dollars in interest and penalties on its unpaid assessments during fiscal year 1997, according to IRS it collected less than $13 billion in interest and penalties during the year. The large interest and penalty amounts are a consequence of the age of IRS’ unpaid assessments. 
According to IRS records and as reflected in figure 6, about 75 percent of its September 30, 1997, balance of unpaid assessments is over 2 years old, and about 34 percent is in excess of 6 years old. The age of IRS’ unpaid assessments leads to large and increasing amounts of accrued interest and penalties. IRS is required to continue to accrue interest and penalties on all unpaid assessments, regardless of their collection potential, until the statutory period for collecting taxes has expired. In contrast, major financial institutions in the private sector place older nonperforming loans in a nonaccrual status to stop interest and penalties from continually increasing the outstanding loan balances. The statutory period for collecting taxes is generally 10 years from the date of the tax assessment. However, this period can be extended under a variety of circumstances, and such extensions, we noted, do occur. For example, of the 730 unpaid assessment items in our sample, 16 (2 percent) related to tax years prior to 1979. In total, 290 sample items (40 percent) related to tax years prior to 1989, and many of these items had extensions to their collection periods. In our sample items, we noted that a major reason for extending the collection period was ongoing litigation, such as bankruptcy, appeal, or tax court proceedings. These actions result in the suspension of the 10-year collection period until they have been resolved. In addition, offers submitted by taxpayers for less than the amount owed, known as “offers-in-compromise,” result in a suspension of the 10-year period while IRS considers the offer. Also, taxpayers often sign waiver agreements that extend the collection period beyond the initial 10 years. Waivers are frequently used in conjunction with installment agreements. IRS has recently determined that the use of waiver agreements in certain instances is inappropriate; however, we would not expect this to have a significant impact on IRS’ unpaid assessments. 
To illustrate how an older tax assessment can remain on IRS’ records for decades, an individual could file a 1979 tax return in 1980. IRS, based on its various enforcement programs, could identify and assess additional taxes in 1983. The taxpayer could bring the matter to litigation. Assume for the purpose of this illustration that this litigation takes 6 years to resolve, at the end of which the court could decide in favor of IRS. The taxpayer could then enter into an installment agreement in 1989 with repayment terms extending out through the year 1999. IRS must accrue interest and penalties through the statutory collection period, regardless of whether an unpaid assessment meets the criteria for financial statement recognition or has any collection potential. For example, interest and penalties continue to accrue on write-offs, such as the FDIC and RTC cases, as well as on exam assessments where taxpayers have not agreed to the validity of the assessments. In fact, per IRS records, the balances for the RTC cases alone will increase from about $22 billion (which was already mostly interest and penalties) in 1997 to an estimated $40 billion by the year 2003, under current interest rates. According to IRS records, the overall growth in unpaid assessments during fiscal year 1997 was wholly attributable to the accrual of interest and penalties. In fact, most of the year-to-year growth in unpaid assessments during the past 5 years is attributable to increases in interest and penalties, as noted in figure 7. Because much of IRS’ inventory of unpaid assessments consists of (1) write-offs, which have no future collection potential, (2) compliance assessments, which have little likelihood of being collected, and (3) taxes receivable with no estimated collectibility, and because these items are mostly penalties and interest, a substantial portion of the $136 billion in accumulated interest and penalties is not collectible. 
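The tolling mechanics in this illustration can be sketched in code. This is a minimal sketch under simplifying assumptions: the function name, the assumed April 1983 assessment date, and the whole-year arithmetic are invented for illustration; actual statutory collection rules involve many more events and partial-year suspensions.

```python
from datetime import date

# Sketch (not IRS' actual method): the statutory collection period is
# generally 10 years from assessment, and events such as litigation
# suspend the clock, pushing the expiration date out further.

COLLECTION_PERIOD_YEARS = 10

def collection_expiration(assessed: date, suspension_years: int) -> date:
    """Expiration = assessment date + 10 years + any suspension time.

    Whole-year arithmetic only; real suspensions are measured in days.
    """
    return assessed.replace(
        year=assessed.year + COLLECTION_PERIOD_YEARS + suspension_years)

# The report's illustration: additional tax assessed in 1983 (exact day
# assumed here), with 6 years of litigation suspending collection,
# leaving the assessment collectible through 1999.
assessed = date(1983, 4, 15)
print(collection_expiration(assessed, suspension_years=6).year)  # 1999
```

Under this simplified model, each additional tolling event simply extends the expiration year, which is why decades-old assessments can remain on the books and continue to accrue interest and penalties.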
It is important to note that the IRS Restructuring and Reform Act of 1998 contains several provisions that affect the manner in which IRS will assess interest and penalties on future delinquent tax debts. For example, IRS will not be allowed to assess interest and certain penalties for individual taxpayers if IRS does not send a notice to the taxpayer of any tax deficiency within the statutory time frame after the taxpayer files a return or the due date of the tax return, whichever is later. In addition, the penalty IRS assesses taxpayers for failing to pay their original tax assessment is reduced by half for taxpayers with active installment agreements with IRS. Also, IRS’ application of tax deposits to taxpayers’ tax liabilities will no longer be based on a first in/first out principle, but will instead be based on taxpayer designation. This could result in a reduction in IRS’ assessed penalty for the failure to make tax deposits. Less than half of IRS’ balance of unpaid assessments as of September 30, 1997, was considered taxes receivable, and only about one-third of the taxes receivable, about 13 percent of the $214 billion total unpaid assessments balance, was estimated to be collectible. The composition of the unpaid assessments balance consisted largely of amounts owed by businesses that no longer existed, deceased individuals, taxpayers who cannot be located, or taxpayers who do not have the financial ability or willingness to pay the amounts they owe the federal government. As a result, despite the fact that many of these delinquent taxes will never be paid, many of these balances remain on IRS’ records through the statutory collection period and continue to grow due to the accruing of penalties and interest. 
Unlike commercial lenders, IRS does not choose who owes the government taxes, and much of what exists in IRS’ balance of unpaid assessments is more analogous to a commercial lender’s list of troubled loans or loans that have been written off than to a lender’s entire loans receivable portfolio. Given the serious financial problems of taxpayers who owe these delinquent taxes, the fact that over one-third of the delinquent taxes are over 6 years old, and the fact that nearly two-thirds of the unpaid assessments balance at September 30, 1997, consisted of penalties and interest, it is likely that IRS will collect only an estimated 13 cents of every dollar of unpaid assessments. IRS stated that it was pleased with this report and appreciated our efforts in better explaining the composition and collectibility of IRS’ unpaid assessments. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the date of this report. At that time, we will send copies of this report to the Commissioner of Internal Revenue; the Director of the Office of Management and Budget; the Secretary of the Treasury; and the Chairmen and Ranking Minority Members of the Senate Committee on Appropriations, Subcommittee on Treasury and General Government, Senate Committee on Finance, Subcommittee on Taxation and IRS Oversight, Senate Committee on Governmental Affairs, Senate Committee on the Budget, House Committee on Appropriations and its Subcommittee on Treasury, Postal Service, and General Government, House Committee on Ways and Means, House Committee on Government Reform and Oversight, House Committee on the Budget, and other interested congressional committees. Copies will be made available to others upon request. Please contact me at (202) 512-9505 or Steven J. 
Sebastian, Assistant Director, Governmentwide Accounting and Financial Management Issues, at (202) 512-9521 if you or your staff have any questions concerning this report. Other major contributors are listed in appendix III. As part of our audit of IRS’ fiscal year 1997 Custodial Financial Statements, we reviewed IRS’ unpaid assessments using statistical sampling techniques. Our objectives in the unpaid assessment segment of the audit were to determine (1) whether IRS had properly classified its balance of unpaid assessments between taxes receivable, compliance assessments, and write-offs, (2) whether the balances for taxes receivable, compliance assessments, and write-offs were accurate, and (3) in conjunction with IRS, an estimate of the amount IRS could reasonably expect to collect on its balance of taxes receivable. To achieve these objectives, we requested that IRS run its computer program against the master files to initially classify the population of unpaid assessments into taxes receivable, compliance assessments, and write-offs. IRS’ general ledger—the Interim Revenue Accounting Control System (IRACS)—does not maintain detail transaction information, nor does it classify unpaid assessments into the three classifications, so it could not be used to develop the three separate populations of unpaid assessments for testing purposes. However, it does contain overall summarized assessment data, so it could be used to verify the completeness of the populations of unpaid assessments obtained from the master files. To gain assurance that IRS provided us with a complete population of unpaid assessments from which to draw our samples, we reviewed and recalculated IRS’ reconciliations of its unpaid assessments recorded in its master files and compared them to the assessment information recorded in IRACS. The populations were obtained from information contained in the master files as of July 17, 1997. 
We selected this interim date for our detailed test work instead of September 30, 1997, because (1) detailed testing of statistical samples of unpaid assessments as of September 30, 1997, could not have been completed in time to facilitate meeting our statutory audit report date of March 1, 1998, and (2) we expected little change in the balance of unpaid assessments between July 17, 1997, and September 30, 1997. We performed additional audit procedures of an analytical nature to ensure that, in fact, no significant changes in either the overall unpaid assessments balance or between the three categories of unpaid assessments occurred between July 17, 1997, and the fiscal year-end. From the three separate populations of unpaid assessments, we selected statistical samples of items on which to conduct detail testing. For the population of unpaid assessments initially classified as taxes receivable, we employed a classical variable sampling approach. In addition to testing for the proper classification and recorded amount, use of classical variable sampling allowed us, in conjunction with IRS, to project a statistically valid estimate of the amount of taxes receivable that IRS could reasonably expect to collect. We stratified the population into 20 dollar ranges to (1) decrease the effects of variances in the total unpaid assessment population, (2) gain assurance that the sample amounts were representative of the population, and (3) obtain assurance that the resulting net tax receivable amount is a reliable estimate of the amount IRS can reasonably expect to collect. Separate random samples were then selected for 19 of the 20 strata. For the remaining stratum, which consisted of all tax receivable items in excess of $30 million individually, all items were selected for testing. We used $5 billion as our materiality level, a 95 percent confidence level, and a planned precision level of plus or minus $2.5 billion. 
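A rough sketch of this stratified selection: all items above a certainty threshold are tested 100 percent, while a simple random sample is drawn from each lower dollar-range stratum. The boundaries, per-stratum sample size, and dollar amounts below are invented for illustration and do not reproduce the 20 strata actually used in the audit.

```python
import random

def select_sample(items, boundaries, per_stratum,
                  certainty_floor=30_000_000, seed=1):
    """Stratify item amounts by dollar range and draw the sample.

    Items above `certainty_floor` are all selected (a 100 percent
    "certainty" stratum); each remaining stratum contributes a simple
    random sample. Illustrative only.
    """
    rng = random.Random(seed)
    strata = {bound: [] for bound in boundaries}  # ascending boundaries
    certainty = []
    for amount in items:
        if amount > certainty_floor:
            certainty.append(amount)  # always tested
            continue
        for bound in boundaries:
            if amount <= bound:
                strata[bound].append(amount)
                break
    sample = list(certainty)
    for members in strata.values():
        sample.extend(rng.sample(members, min(per_stratum, len(members))))
    return sample

# Two items exceed the $30 million floor, so both always appear in the
# sample; one item is drawn at random from each of the three lower strata.
picked = select_sample(
    [1_000, 50_000, 2_500_000, 45_000_000, 120, 900_000, 31_000_000],
    boundaries=[10_000, 1_000_000, 30_000_000],
    per_stratum=1,
)
print(len(picked))  # 5
```

Stratifying by dollar range in this way reduces the variance contributed by a few very large receivables, which is what makes the projected collectible amount more precise for a given sample size.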
This approach resulted in a total sample size of 626 tax receivable items, totaling $5.9 billion, which is 4.6 percent of the $127.8 billion in unpaid assessments initially classified as taxes receivable by IRS. For the populations of unpaid assessments initially classified by IRS as compliance assessments and write-offs, we employed dollar unit sampling techniques to test their proper classification and amount and to evaluate the significance of any misclassifications. We used $5 billion as our materiality level, a 95 percent confidence level, and an expected aggregate error rate of $1.29 billion. This resulted in a sample size of 74 compliance assessment items totaling $3.7 billion, which is 5.1 percent of the $72.3 billion in unpaid assessments initially classified as compliance assessments by IRS, and a sample size of 30 write-off items totaling $8.4 billion, which is 27.5 percent of the $30.6 billion in unpaid assessments initially classified as write-offs by IRS. In total, we selected for testing 730 items totaling about $18 billion, 7.8 percent of the unadjusted unpaid assessments balance as of July 17, 1997. These items covered various tax types. A summary of the 730 sample items, broken down by major tax type, is presented in table I.1. [Table I.1 reports the unadjusted book value (dollars in billions) of the sample items by major tax type: individual income and self-employment taxes; corporate payroll tax (includes individual withholdings and FICA); and miscellaneous penalty (trust fund recovery penalties). The table values are not reproduced here.] To determine whether the taxes receivable sample items were properly classified and recorded for the appropriate amounts, we examined taxpayers’ case files to determine whether IRS had sufficient and reliable information to document (1) taxpayers’ agreement to the assessment or (2) evidence of court rulings favorable to IRS. 
We also analyzed detailed masterfile transcripts of the taxpayers’ accounts, reviewed correspondence between IRS and taxpayers, and examined IRS internal documents to verify that the items were recorded at the correct amounts. To determine if and to what extent IRS could reasonably expect to collect the outstanding taxes receivable balance for each sample item we concluded was properly classified as taxes receivable, we examined detailed masterfile transcripts of the taxpayers’ account and IRS collection case files, which could include documentation of taxpayers’ income and assets, earnings potential, other outstanding unpaid assessments, payment history, and other relevant collection information that affected the taxpayers’ ability to pay. We also considered the extent and results of IRS’ documented efforts to collect the assessment amount. To determine whether the compliance assessments and write-off sample items were properly classified and recorded for the appropriate amounts, we examined taxpayers’ case files, reviewed taxpayers’ transcripts, reviewed correspondence between IRS and taxpayers, and examined IRS internal documents to verify that the items were recorded at the correct amounts. In testing these sample items, we identified numerous instances where unpaid assessments were incorrectly classified between the three categories of unpaid assessments. We projected the impact of these misclassified sample items to the three populations and proposed adjustments based on these projections. Additionally, we identified unpaid assessment balances counted multiple times. We proposed adjustments to remove the associated amounts from the unpaid assessments balance. IRS reviewed all of these items, agreed with our conclusions, and agreed with the proposed adjustments. We projected the results of our collectibility assessment to the population of taxes receivable. 
We then employed analytical procedures to adjust September 30, 1997, balances for the results of our detail testing of July 17, 1997, unpaid assessments data. IRS reviewed our estimates of amounts that were reasonably expected to be collectible, our collectibility projection to the entire population, and the adjustments from our analytical procedures and agreed with the results. In conducting our work, we did not assess the effectiveness of IRS’ enforcement and collection programs, nor did we address issues associated with the compliance gap component of the tax gap. In addition, we did not specifically analyze the impact provisions of the IRS Restructuring and Reform Act of 1998 may have on the future composition and collectibility of IRS’ unpaid assessments. We conducted our work at IRS’ National Office in Washington, D.C., and at the IRS Kansas City Service Center from August 1997 through February 1998. We conducted our work in accordance with generally accepted government auditing standards. Thomas Armstrong, Assistant General Counsel; Andrea Levine, Attorney. 
Established as a national program in the mid-1970s, WIC is intended to improve the health status of low-income pregnant and postpartum women, infants, and young children by providing supplemental foods and nutrition education to assist participants during critical times of growth and development. Pregnant and postpartum women, infants, and children up to age 5 are eligible for WIC if they are found to be at nutritional risk and have incomes below certain thresholds. According to USDA, research has shown that WIC helps to improve birth and dietary outcomes and contain health care costs, and USDA considers WIC to be one of the nation’s most successful and cost-effective nutrition intervention programs. WIC participants typically receive food benefits—which may include infant formula—in the form of paper vouchers or checks, or through an electronic benefit transfer card, which can be used to purchase food at state-authorized retail vendors. USDA has established seven food packages that are designed for different categories and nutritional needs of WIC participants. Authorized foods must be prescribed from the food packages according to the category and nutritional needs of the participants. USDA recently revised the food packages to align with current nutrition science, largely based on recommendations of the National Academies’ Institute of Medicine. Infants who are not exclusively breastfeeding can receive formula from WIC until they turn 1 year of age. While federal regulations specify the maximum amount of formula different categories of infants are authorized to receive, state and local agency staff also have some flexibility in determining precise amounts to provide, depending on an infant’s nutritional needs. Staff at local WIC agencies play a critical role in determining infants’ feeding categories, and they have the authority to provide them with less formula than the maximum amount allowed for each category, if nutritionally warranted. 
Nutrition specialists, such as physicians or nutritionists, working at the local agency perform nutritional assessments for prospective participants as part of certification procedures. They use the nutritional assessment information to appropriately target food packages to participants. USDA’s role in operating WIC is primarily to provide funding and oversight, and state and local WIC agencies are charged with carrying out most administrative and programmatic functions of the program. Specifically, USDA provides grants to state agencies, which use the funds to reimburse authorized retail vendors for the food purchased by WIC participants and to provide services. As part of its federal monitoring and oversight obligations, USDA annually reviews the state plan for each state WIC agency, which provides important information about the agency’s objectives and procedures for all aspects of administering WIC for the coming fiscal year. For their part, state agencies are responsible for developing WIC policies and procedures within federal requirements, entering into agreements with local agencies to operate the program, and monitoring and overseeing its implementation by these local agencies. The WIC oversight structure is part of the program’s internal controls, which are an integral component of management. Internal control is not one event, but a series of actions and activities that occur on an ongoing basis. As programs change and as agencies strive to improve operational processes and implement new technological developments, management must continually assess and evaluate its internal controls to assure that the control activities being used are effective and updated when necessary. Management should design and implement internal controls based on the related cost and benefits. 
Effective internal controls include: (1) communicating information to management and others to enable them to carry out internal control and other responsibilities and (2) assessing the risks agencies face from both external and internal sources. USDA does not have data that can be used to determine the national extent of online sales of WIC formula, and department officials told us that USDA has not conducted a comprehensive study to assess these sales. According to the officials, the department does not collect data on this issue, in part because it is not the department’s responsibility to sanction WIC participants for program violations. Rather, they said, it is the responsibility of state agencies to establish procedures to prevent and address participant violations, including attempts to sell WIC food benefits. According to state officials, states’ monitoring efforts have revealed some WIC formula offered for sale online. Of the officials we spoke to from 12 states, those from 5 states said that they have found WIC formula offered for sale online by participants. Officials in 3 of these states said that they have found fewer than 0.5 percent of their WIC participants attempting these sales online. Officials in 2 other states did not estimate percentages but stated that the incidence is low. Consistent with these state accounts, our own monitoring of a popular e-commerce website for 30 days in four large metropolitan areas found few posts in which individuals explicitly stated they were attempting to sell WIC-provided formula. Specifically, we identified 2,726 posts that included the term “formula,” and 2 of these posts explicitly stated that the origin of the formula was WIC. In both posts, the users indicated they were selling the WIC formula because they had switched to different brands of formula. 
A posting from late June 2014 included the container size in the title and stated: “I am looking to sell 5 [brand name] 12.5oz cans (NOT OPENED) because is super picky and does not want to drink it no matter what i do. will drink the kind for some reason. I told my WIC office to switch me to another brand but they say it might take 3 months. Im asking 35$ but best offer will do since the brand I buy is from so Im not looking to make a profit here if you consider each can is 16$ at the store. please text if interested!!” A posting from early July 2014 included the brand, type, and container size in the title and stated: “I have 7 powder cans of they dnt expire for another year at least just got them from my wic n we ended up switching formulas so its $65.oo for pick up all 7 cans or $70 if i have to drive.” From the same e-commerce website, we also identified 481 posts, of which any number could have been advertising WIC-provided formula. However, these posts did not state that the advertised formula was from WIC, and while the formula offered for sale was generally consistent with formula provided through WIC, we could not identify it as such. Specifically, during our 30 days of monitoring formula advertisements, we applied a number of criteria to narrow the broad pool of advertisements to identify those that may have been selling WIC formula. First, because state agencies are generally required to award single-source contracts for WIC formula, we searched for posts advertising formula brands that matched the state-specific WIC-contracted brand. We found that about three-quarters (2,013 posts) fit this criterion. We then reviewed each of these posts and determined that 346 of the posts fit each of three additional criteria, which we chose because they are generally consistent with WIC formula provided to infant participants.
1. The formula type, such as soy or sensitive, advertised for sale was equivalent to one of the types provided to WIC participants in the state in which the posting was made.
2. The volume of the formula container advertised was equivalent to the volume of one of the containers provided to WIC participants in the state in which the posting was made.
3. The amount of formula advertised represented a large proportion of the maximum amount of formula authorized to be provided to fully formula-fed WIC infant participants each month, averaged across all ages.
Beyond the 346 posts that matched these three criteria, we found another 135 that met at least one, but not all, of the criteria. However, since we did not investigate any of these posts further, we do not know if any or all of these 481 posts were attempts to sell WIC formula. A posting from mid-June 2014 stated: “$10 a can! 14 -12.9 oz Cans of [brand name and type] Formula. Expiration Date is - July 1, 2015. Please take it all. I will not separate the formula! NOT FROM WIC!!! is now 14 months and no longer needs this. Email only please” A posting from mid-June 2014: “ Turn A Year Already, and we Just bought her 7 Brand New Cans of . She no longer needs Formula. Selling each Can for $10. Brand New, NOT Open. 12.4 Oz. EXP. 1 March. 2016.” Through our monitoring efforts, and through interviews with USDA and state and local WIC officials, we identified a number of key challenges associated with distinguishing between WIC-obtained formula sales and other sales: Each state’s specific WIC-contracted formula brand is typically available for purchase at retail stores by WIC participants and non-WIC participants alike, without an indicator on the packaging that some were provided through WIC. There are a number of reasons why individuals may have excess formula. 
For example, a WIC participant may obtain the infant’s full monthly allotment of formula at one time; alternatively, non-WIC parents may purchase formula in bulk at a lower cost to save money. In either case, if the infant then stops drinking that type of formula, parents may attempt to sell the unused formula. Individuals posting formula for sale online are able to remain relatively anonymous, so WIC staff may not have sufficient information to link the online advertisement with a WIC participant. According to one WIC official we spoke with, staff in that state identify approximately one posting a week with sufficient detail about the seller—such as name or contact information—for staff to pursue. A WIC official from another state said that staff previously used phone numbers to identify WIC participants posting formula for sale, but they believe participants then began to list other people’s phone numbers on posts. Advertisements for infant formula sales can be numerous online, and formula for sale originates from varied sources. For example, through our literature search, we found multiple news reports on stolen infant formula advertised for sale online. USDA has taken steps aimed at clarifying that the online sale of WIC benefits is a participant violation. For example, in 2013, USDA proposed regulations that would expand the definition of program violation to include offering to sell WIC benefits, specifically including sales or attempts made online. Earlier, in 2012, USDA issued guidance to WIC state agencies clarifying that the sale of, or offer to sell, WIC foods verbally, in print, or online is a participant violation. This guidance stated that, in accordance with federal regulations, USDA expects states to sanction and issue claims against participants for all program violations, but it did not provide direction on ways to prevent online sales of WIC foods, including formula. 
That same year, USDA also sent letters to four e-commerce websites—through which individuals advertise the sale of infant formula—requesting that they notify their customers that the sale of WIC benefits is prohibited, and two of the companies agreed to post such a notification. More generally, USDA has highlighted the importance of ensuring WIC program integrity through guidance issued in recent years aimed at encouraging participants to report WIC program fraud, waste, and abuse to the USDA Office of the Inspector General (OIG). For example, in 2012, USDA disseminated a poster developed by the OIG and attached it to a guidance document describing its purpose, which includes informing WIC participants and staff how to report violations of laws and regulations relating to USDA programs. The following year, USDA issued additional guidance that encouraged states to add contact information for the OIG to WIC checks or vouchers, or to their accompanying folders or sleeves. USDA indicated that both guidance documents were intended to facilitate participant reports of suspected fraud, waste, and abuse to the OIG, but neither specifically directed states to publicize the fact that attempting to sell WIC benefits, either online or elsewhere, qualifies as an activity that should be reported. Although WIC regulations require that state agencies establish procedures to control participant violations, we found that states vary in whether their required procedures include informing participants of the prohibition against selling WIC formula. The WIC regulations require that all participants (or their caretakers) be informed of their rights and responsibilities and sign a written statement of rights and obligations during the certification process. The regulations also require certain program violations to be included in the information provided on rights and responsibilities. 
However, according to USDA officials, the sale of WIC food benefits is not required to be included, nor do the regulations require participants be informed of this violation through other means. In our review of rights and responsibilities statements from 25 states’ WIC policy and procedure manuals, we found that 7 did not require local agency staff to inform participants that selling WIC benefits is against program rules. Inconsistent communication to participants about this violation conflicts with federal internal control standards, and participants who are unaware of this prohibition may sell excess formula online, thus inappropriately using program resources. Based on these findings, we recommended in our December 2014 report that USDA instruct state agencies to include in the rights and responsibilities statement that participants are not allowed to sell WIC food benefits, including online. USDA agreed with this recommendation, and in April 2015, department officials reported that they intend to revise WIC regulations to require state agencies to include in participant rights and responsibilities statements the prohibition against selling WIC food benefits online. In the interim, USDA included this as a best practice in the 2016 WIC State Plan guidance it disseminated to state agencies on April 6, 2015. Department officials indicated that USDA expects states to move forward on this action and not wait for regulations. In addition, we found that states vary in the ways they identify attempted sales of WIC formula through monitoring efforts, and USDA has not collected information on states’ efforts to address these sales. Of the officials that we spoke to from 12 states, those from 9 states mentioned that they regularly monitor online advertisements. However, the method of monitoring and the level of effort devoted to this activity varied across states. 
For example, officials in one state said that a number of staff within the state office, as well as a number of those in local agencies, search social media websites daily. In contrast, officials from another state said that staff spend about a half day each week monitoring online sites for attempted sales of WIC food benefits, and an official from a different state said that staff monitor for such sales only when time allows. A USDA official told us that the department would like to provide more support to states in pursuing likely cases of participant fraud related to the online sale of WIC food benefits, but it has not yet determined how to be of assistance. USDA officials indicated they believe states are monitoring attempted sales of WIC formula online to identify this participant violation; however, the department has not gathered information on the status of state efforts to address online sales. Although USDA officials review each WIC state plan annually to ensure that it is consistent with federal requirements, a state’s procedures for identifying participant violations are not among the required elements for WIC state plans included in federal statute and regulations. Because USDA does not require that state agencies document their procedures for identifying participant sales of WIC foods, including online sales of infant formula, USDA does not know whether or how states are working to ensure program integrity in this area. The fact that the department does not work more directly with states on this issue is also inconsistent with federal internal control standards. We recommended in our December 2014 report that USDA require state agencies to articulate their procedures for identifying attempted sales of WIC food benefits in their WIC state plans and analyze the information to ascertain the national extent of state efforts. 
USDA agreed with this recommendation, and department officials reported in April 2015 that they intend to revise WIC regulations to require state agencies to include in state plans their procedures for identifying attempted sales of WIC food benefits. In the interim, USDA included this as a best practice in the 2016 WIC State Plan guidance it disseminated to state agencies on April 6, 2015. USDA and the states also lack information to determine cost-effective approaches for monitoring these attempted sales. According to USDA, state, and local WIC officials, because of the various challenges state WIC staff face in distinguishing between WIC-obtained formula sales and other sales, the return on investment for monitoring these sales is low. One USDA official noted that it is difficult for states to prove that participants are selling WIC food benefits, which increases the amount of time and effort state staff need to spend to address these cases. Officials from one state WIC agency and one local WIC agency we spoke to said that efforts by state and local agency staff to identify and address online WIC formula sales result in few confirmed cases and draw away scarce resources from other aspects of administering the program. One USDA official said that states that sanction a participant for attempting to sell WIC formula without sufficient evidence that it occurred will likely have the violation overturned during the administrative appeal process. These cases also appear unlikely to result in court involvement, as when we asked the 19 officials from 12 states how these cases were addressed, only one said that a couple had gone through the legal system. Federal internal control standards state that agencies should design and implement internal controls based on the related costs and benefits. 
According to USDA, because of the substantial risks associated with improper payments and fraud related to WIC vendor transactions, both USDA and the states have focused their oversight efforts in recent years on addressing vulnerabilities in the management of this area, rather than focusing on possible participant violations. However, because the use of the Internet as a marketplace has substantially increased in recent years and the national extent of online sales of WIC food benefits is unknown, USDA and the states have insufficient information to assess the benefits of oversight efforts related to this participant violation. Because of this, we recommended in our December 2014 report that USDA collect information to assess the national extent of attempted online sales of WIC formula benefits and determine cost-effective techniques states can use to monitor online classified advertisements. USDA agreed with this recommendation, and department officials reported in April 2015 that they plan to explore ways to assess the extent of online sales of WIC formula and identify and share best practices, cost-effective techniques, or new approaches for monitoring online advertisements with state agencies. To do this, they noted that they will draw on funds designated for addressing high-priority programmatic issues. We believe this approach will help states to strike the appropriate balance of costs and benefits when determining how to target their program integrity resources. Chairman Rokita, Ranking Member Fudge, and Members of the Subcommittee, this completes my prepared statement. I would be pleased to respond to any questions you may have at this time. If you or your staff have any questions about this statement, please contact Kay E. Brown, Director, Education, Workforce, and Income Security, at 202-512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. 
GAO staff who made key contributions to this statement include Sarah Cornetto, Aimee Elivert, Rachel Frisk, and Sara Pelton. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. 
To obtain information on whether CPOT investigations were consistent with the mission of the HIDTA program, we reviewed the Office of National Drug Control Policy Reauthorization Act of 1998, ONDCP’s appropriations statutes and accompanying committee reports, ONDCP’s strategic plans and policies, and ONDCP’s Web site. We also reviewed all HIDTA applications (38) to ONDCP from HIDTAs that received discretionary funds for various investigation activities linked to the CPOT list in fiscal years 2002 and 2003, and compared them with the mission of the HIDTA program. At 11 selected HIDTA sites—Appalachia; Atlanta; Central Florida; Lake County, Indiana; Los Angeles; Milwaukee; Nevada; North Texas; Oregon; Rocky Mountain; and Washington-Baltimore—we interviewed HIDTA management officials and task force leaders to discuss whether their investigative activities were consistent with the HIDTA mission. We selected these 11 HIDTAs to ensure geographic spread (east coast, central, west coast) across the country. To obtain information about ONDCP’s distribution of CPOT funding, we interviewed ONDCP officials and obtained statistics they provided on HIDTAs that received CPOT funding in fiscal years 2002, 2003, and 2004 (app. I). We also reviewed ONDCP documents and correspondence that described the basis for ONDCP’s decision for awarding HIDTAs CPOT funding. In addition, we discussed with officials at three HIDTAs—Washington-Baltimore, North Texas, and Los Angeles—how CPOT funding was being used. We selected these three HIDTAs because they had received funds for both fiscal years 2002 and 2003 and were geographically dispersed. We also interviewed officials from 8 of the 13 HIDTAs (Appalachia, Atlanta, Central Florida, Lake County, Milwaukee, Nevada, Oregon, and Rocky Mountain) that did not apply for or applied for but did not receive CPOT funding in fiscal years 2002 and 2003. We selected these HIDTAs to reflect broad geographic segments of the country. 
We determined that the data presented in appendixes I and II from ONDCP, the Organized Crime Drug Enforcement Task Force (OCDETF), the Drug Enforcement Administration (DEA), and the Federal Bureau of Investigation (FBI) are sufficiently reliable, for the purposes of this review, based on interviews with agency officials and a review of their information systems documentation. In 1988, Congress established the White House’s Office of National Drug Control Policy to, among other things, coordinate the efforts of federal drug control agencies and programs and establish the HIDTA program. By fiscal year 2004, ONDCP had designated 28 drug trafficking areas (HIDTAs) as centers of illegal drug production, manufacturing, importation, or distribution within the United States with a federally funded HIDTA program budget of about $225 million. Each HIDTA is to develop and implement an annual strategy to address the regional drug threat. The initiatives involve the active participation of federal, state, and local law enforcement agencies to enhance and assist the coordination of drug trafficking control efforts in the region. To encourage HIDTAs to conduct CPOT investigations, ONDCP utilized discretionary funding. In fiscal year 2004, ONDCP allocated about $8 million in discretionary funding to HIDTAs to support their drug initiatives that link with international drug trafficking organizations on the CPOT list. This funding is not meant to supplant or replace existing agency/program budgets intended for similar purposes, according to ONDCP guidance to the HIDTAs. OCDETF is a nationwide law enforcement task force program administered within Justice that targets major narcotic trafficking and money laundering organizations using the combined resources and expertise of its federal member agencies together with state and local investigators. 
Its mission is to identify, investigate, and prosecute members of high-level drug trafficking enterprises and to dismantle or disrupt the operations of those organizations. To help carry out this mission and to focus investigative resources on major sources of supply, OCDETF member agencies developed the CPOT list of major international drug trafficking organizations. In September 2002, at the request of the U.S. Attorney General, OCDETF issued the first CPOT list, naming international drug trafficking organizations most responsible for supplying illegal drugs to the United States. OCDETF member agencies developed criteria for determining whether an international drug organization was to be placed on the CPOT list. Criteria include whether the international organization operates nationwide in multiple regions of the United States and deals in substantial quantities of illegal drugs or illicit chemicals on a regular basis that have a demonstrable impact on the nation’s drug supply. OCDETF compiles and issues the CPOT list at the beginning of each fiscal year, with the intent that federal law enforcement agencies will target their investigations on CPOT organizations. OCDETF member agencies control the CPOT list and its distribution. OCDETF also collaborates with ONDCP on reviews of CPOT funding applications by HIDTAs that link their initiatives with the CPOT list. CPOT investigations were not inconsistent with the mission of the HIDTA program because HIDTAs’ targeting of local drug traffickers linked with international organizations on the CPOT list was one possible strategy for achieving the program’s goal of eliminating or reducing significant sources of drug trafficking in their regions. The mission of the HIDTA program is not expressly stated in current law. However, ONDCP has developed a mission statement that reflects the legislative authority for the HIDTA program, specifically, to enhance and coordinate U.S. 
drug control efforts among federal, state, and local law enforcement agencies to eliminate or reduce drug trafficking and its harmful consequences in critical regions of the United States. The primary legislative authority for the HIDTA program is the Reauthorization Act, which provides guidance on the mission of the program by setting out factors for the Director of ONDCP to consider in determining which regions to designate as HIDTAs. The factors contained in the act are the extent to which 1. the area is a center of illegal drug production, manufacturing, importation, or distribution; 2. state and local law enforcement have shown a determination to respond aggressively to drug trafficking in the area by committing resources to respond to it; 3. drug-related activities in the area are having a harmful impact in other areas of the country; and 4. a significant increase in federal resources is necessary to respond adequately to drug-related activities in the area. In addition, House and Senate Appropriations Committee reports on ONDCP’s appropriations have stated that the program was established to provide assistance to federal, state, and local law enforcement entities operating in those areas most adversely affected by drug trafficking. The use of a portion of the HIDTA program’s discretionary funds to focus on CPOT investigations is not inconsistent with ONDCP’s mission statement for the program and the legislative authority on which it is based, particularly the first and third factors in the Reauthorization Act. Drug traffickers operating in a HIDTA may be linked with the CPOT list because of their role in major international drug trafficking activities, including illegal distribution in multiple regions of the United States. Given such activities, they would contribute to the HIDTA’s status as a center of illegal drug importation and distribution and have a harmful impact in other regions. 
Similarly, in keeping with appropriations committee statements on the purpose of the program, HIDTA involvement in CPOT investigations is one way of assisting federal, state, and local operations in areas where the significant adverse effects of drug trafficking activities are due in part to links to international criminal organizations. Thus, for HIDTAs to investigate and disrupt or dismantle regional drug traffickers that are linked with CPOT organizations is not inconsistent with the HIDTA program’s stated mission and its legislative authority. ONDCP distributed discretionary funds to HIDTAs to help support their investigations of drug traffickers linked with international organizations on the CPOT list by reviewing and approving HIDTA applications for funding. In fiscal years 2002, 2003, and 2004, ONDCP distributed CPOT funds to a total of 17 of the 28 HIDTAs. A Justice official who participates in the evaluation of HIDTA applications for CPOT funding said that ONDCP encourages applications for CPOT funding where additional funds are likely to benefit an initiative and move the investigation forward. Some HIDTAs chose not to apply because they face a domestic drug threat that does not have a link to any international CPOT organization activity. Other HIDTAs that have applied for funds did not receive CPOT funding because they did not have sufficient investigative resources to uncover the link to a CPOT organization. In commenting on a draft of this report, Justice said that while this may be true in some circumstances, it was also often the case that HIDTAs may have had sufficient resources but simply had not yet taken the investigation far enough to justify the award of discretionary funds. During fiscal years 2002 and 2003, 6 HIDTAs did not apply and 7 applied but were not approved for CPOT funding. In fiscal year 2004, 17 of the 28 HIDTAs did not receive CPOT funding—10 did not apply and 7 applied but were not approved for funding. 
ONDCP and HIDTA officials mentioned several reasons why some HIDTAs may not receive funding. First, some HIDTAs were denied funding if the investigative activities in their funding applications were not consistent with the HIDTA mission and linked to a CPOT organization. Second, ONDCP did not provide clear guidance or sufficient information for HIDTAs to develop their applications for CPOT funds, although it took steps to clarify its guidance and create opportunity for all HIDTAs to participate. Third, reducing the amount of discretionary funds available for CPOT funding in fiscal year 2004 affected the number of HIDTAs that received this funding. Fourth, HIDTAs’ local priorities may not link to any CPOT organization activity. ONDCP granted CPOT funding for HIDTA investigative activities that it determined demonstrated a link to the CPOT list and were consistent with the mission of the HIDTA program. As an example, one of the applications we reviewed requested CPOT funding for overtime pay, video cameras, portable computers, and wiretaps for surveillance activities to target a complex criminal organization involved in the distribution of significant quantities of heroin and cocaine as well as related homicides, abductions, arson, assaults, fraud, and witness tampering. Surveillance of the organization indicated that it was being supplied with drugs through an affiliate of a Latin American/Caribbean-based CPOT organization. Therefore, these drug activities were linked to an organization on the CPOT list, and the investigations also were consistent with the HIDTA program’s mission, in that these activities contributed to eliminating or reducing significant sources of drug trafficking within the HIDTA region. Drug investigation activities that were not consistent with the HIDTA program’s mission were not to receive CPOT funds from ONDCP, even if they showed a CPOT link. 
Specifically, it is inconsistent with the HIDTA program’s mission to supplant funds from other sources. Rather, CPOT funds are meant to supplement funding for investigations that support the HIDTA mission. For example, in one HIDTA application, a request was made for $686,000 for the HIDTA to provide software to a cellular telephone company located in a Caribbean country to monitor the cellular telephone calls of a CPOT organization. The application also asked for travel expenses of $7,500 to send a prosecutor and two HIDTA investigators to that country to review the cellular telephone records. ONDCP officials told us that they denied funding for these activities because ONDCP guidance to the HIDTAs regarding CPOT funding states that the funds cannot be used to “supplant,” or replace, existing agency/program budgets intended for similar purposes because to do so would be inconsistent with the HIDTA mission. In commenting on a draft of this report, ONDCP made the clarifying statement that CPOT funding is provided for investigations of major drug trafficking organizations affiliated with CPOTs. However, HIDTAs do not participate in international investigations, and CPOT funding cannot be used to conduct or supplement investigations in places like Colombia or Afghanistan. In another application, a request was made for $120,000 to pay for street lighting in a drug-infested crime area of a major U.S. city to aid the HIDTA surveillance task force in pursuing drug enforcement operations. ONDCP officials told us that they determined the activity was not consistent with the HIDTA mission because CPOT funding cannot be used to supplant a city’s budget for street maintenance and improvements. In some cases, ONDCP’s lack of clear guidance or sufficient information limited some HIDTAs’ ability to apply for CPOT funding. 
For example, some HIDTA officials told us that in fiscal year 2002, ONDCP did not provide clear directions in its guidance about how HIDTAs were to document the link between their investigations and the CPOT list. However, in fiscal year 2003, ONDCP’s officials recognized the problem and, at quarterly meetings, discussed with HIDTAs how to document links between their investigations and the CPOT list, thus resolving the problem. In addition, ONDCP was only able to provide a partial CPOT list to officials in all HIDTAs in each of the 3 fiscal years it provided CPOT funding, even though applications were to include a link between their investigations and the CPOT list. The partial list contained some of the largest organizations in operation and ones that were most frequently targeted by law enforcement. ONDCP, in its guidance, advised HIDTAs that they could obtain the entire list from their Justice contacts. Some HIDTA officials said not having a full list available to them from ONDCP limited their ability to apply for CPOT funding. In fiscal year 2004, ONDCP created an opportunity for all HIDTAs to participate. According to OCDETF officials, access to the full CPOT list is restricted to federal law enforcement officials. Commenting on a draft of this report, Justice said these restrictions are driven by the fact that the member agencies have designated the list as “law enforcement sensitive,” because disclosure of certain investigative information contained on the list might jeopardize ongoing investigations of targeted organizations. As a result, access to the full CPOT list is restricted to OCDETF-member federal law enforcement agencies. Nonparticipating federal agencies, HIDTA directors, state and local police officials, and non-law enforcement federal agencies such as ONDCP could obtain the list from U.S. Attorneys or Special Agents-in-Charge of the OCDETF member agencies on a need-to-know basis. 
To facilitate the distribution of discretionary CPOT funding, however, OCDETF provided a partial list, which contained information on some of the largest organizations and those commonly known to, and targeted by, the law enforcement community, to ONDCP. Since HIDTA officials have said that they need to know who is on the CPOT list to determine which of their investigations qualify for CPOT funds, ONDCP, in its guidance, advised HIDTAs to obtain the full CPOT list through their Justice contacts. However, officials from 2 HIDTAs we spoke to said that they had some difficulty in obtaining the full CPOT list. We spoke with officials from 8 of the 13 HIDTAs that either did not apply or applied for and did not receive CPOT funds in either of the first 2 years (fiscal years 2002 and 2003) ONDCP awarded CPOT funds. Officials from 2 of the HIDTAs said that obtaining the full list was a problem: one HIDTA did not receive the full CPOT list within the time needed to complete its application, and the other said there was no formal procedure for obtaining it. Officials from 6 of the 8 HIDTAs said it was not a problem, however, because they were able to obtain the full CPOT list from their Justice contacts. Although these examples may not typify all HIDTAs, they nevertheless indicate that not every HIDTA was able to readily access the full CPOT list and that it would be difficult to show how their investigations qualify for CPOT funds without having the full list. Although ONDCP believed the CPOT information it provided was sufficient for all HIDTAs to fairly compete for discretionary CPOT funding, an ONDCP official responsible for CPOT funding acknowledged that not receiving a full CPOT list most likely reduced opportunities for some HIDTAs to receive CPOT funding or discouraged others from applying for funds. 
All HIDTAs are eligible to apply to receive CPOT funding, according to ONDCP officials, even though 13 of the 28 HIDTAs did not apply for or applied for but did not receive CPOT funding in fiscal years 2002 and 2003. In fiscal year 2004, ONDCP’s guidance identified three international organizations that trafficked in illegal drugs in all HIDTAs. ONDCP officials said that this additional guidance would allow all HIDTAs to focus their limited funding on these three organizations and would allow a baseline of opportunity for all HIDTAs to apply for CPOT funding. ONDCP stated it would give preference to funding applications that had links to these three CPOT organizations. Ten of the 11 HIDTAs that received CPOT funds in fiscal year 2004 linked their applications to the three CPOTs referenced in ONDCP’s guidance. Providing HIDTAs with the names of three CPOT organizations that operated in all the HIDTA regions established a baseline of opportunity for the HIDTAs to apply for funding despite receiving a limited number of CPOT organizational targets from ONDCP. Commenting on a draft of this report, Justice acknowledged that the HIDTAs did face some difficulty regarding the distribution of the CPOT list. However, through participation with ONDCP in evaluating applications for CPOT funding, Justice officials noticed that—for those HIDTAs that applied—problems associated with the limited distribution of the list appeared to be confined to fiscal year 2002, when the list was first developed. In subsequent years, law enforcement agencies, including those in the HIDTAs, were more familiar with the CPOT list and how to gain access to it. The CPOT funding amount almost tripled from fiscal year 2002 to fiscal year 2003 but was cut in half in fiscal year 2004. 
Given the reduction in discretionary funding allocated to CPOT funding, ONDCP officials said that even if HIDTAs link their investigations to the CPOT list and do not supplant other funding sources, they are not guaranteed CPOT funding. They recognized that reduced funding affected HIDTA participation. As shown in figure 1, fiscal year 2004 funding was reduced from $16.5 million to $7.99 million. In the first year, 8 HIDTAs received funding. In the second year, 14 HIDTAs received funding, and in the third year, when funding was reduced, 11 HIDTAs received funding. Despite more than a 50 percent drop in funding in fiscal year 2004, 2 of the 11 HIDTAs received CPOT funding for the first time. While there could be multiple causes, we also noted that the number of HIDTAs that did not apply increased from 6 in prior years to 10 in fiscal year 2004. ONDCP officials said that the limited CPOT funds must be directed at those HIDTAs where, in the judgment of the officials who reviewed the CPOT applications, the supply of drugs from CPOT organizations had the best chance of being interrupted. Commenting on a draft of this report, ONDCP agreed that the reduction of CPOT funding in fiscal year 2004 affected HIDTA participation but added that this observation, while accurate, should be stated within the context of all discretionary funding activities. ONDCP consulted with Congress prior to allocating the discretionary funding, as required by the report language accompanying ONDCP’s appropriations. As a result of those consultations, ONDCP decided to reduce the amount available for funding CPOT-related investigations in order to fund other activities. Thus, while the reduction in fiscal year 2004 for CPOT-related funding resulted in fewer HIDTAs receiving CPOT funding, it should not have caused a decline in applications for other discretionary funding activities. For more detailed information on the amounts awarded to each HIDTA, see appendix I. 
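The size of the fiscal year 2004 reduction cited above can be checked with simple arithmetic. The following is a minimal sketch using only the dollar figures reported in this section:

```python
# Discretionary CPOT funding, in millions of dollars, as reported above:
# $16.5 million before the fiscal year 2004 reduction, $7.99 million after.
before, after = 16.5, 7.99

drop_percent = (before - after) / before * 100
print(f"CPOT funding fell {drop_percent:.1f} percent")
```

The result, roughly 51.6 percent, is consistent with the report's characterization of "more than a 50 percent drop."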
Figure 2 shows the 17 HIDTAs that received CPOT funding at least once during fiscal years 2002 through 2004 and the 11 that have not received funding. Within certain HIDTAs, law enforcement tended to focus more on domestic drug enforcement than on developing links with CPOT organizations. Officials at three HIDTAs we spoke to told us that in fiscal years 2002 and 2003, they did not apply for CPOT funding because their biggest drug problems were domestic drug producers and distributors, such as organizations involved in methamphetamine and marijuana. As a result, their strategy was to focus on the local drug traffickers that they were required by law to investigate, and those investigations did not necessarily link with CPOT organizations. In addition, according to some HIDTA law enforcement officials, local law enforcement officers in their HIDTA focused on local investigations rather than those potentially linked with CPOT organizations because they saw a direct benefit to their city or county: prosecution of local targets accompanied by drug and asset seizures. Also, HIDTA officials said that while their law enforcement officers initiated numerous investigations, they did not always have enough funds to pursue them to a level that might link a HIDTA investigation to the CPOT list. Commenting on a draft of this report, ONDCP did not disagree with the facts above but emphasized that HIDTAs should be focusing on investigations of local activities that reach beyond the boundaries of the HIDTA, consistent with their designation as centers of illegal drug trafficking activities that affect other parts of the country. On December 27, 2004, we provided a draft of this report for review and comment to ONDCP and Justice. 
ONDCP commented on our analysis that the use of some discretionary funding for the HIDTA program to support CPOT-related drug trafficking investigations was not inconsistent with the HIDTA mission because such investigations were one possible strategy for eliminating or reducing significant sources of drug trafficking in HIDTA regions. Justice generally agreed with the substance of the report and provided clarifications that we incorporated where appropriate. Both agencies focused their comments and clarifications on the second objective: how ONDCP distributed discretionary funds to HIDTAs for CPOT investigations and why some HIDTAs did not receive funding. ONDCP stressed its belief that the information it provided to HIDTAs was sufficient for all HIDTAs to fairly compete for limited CPOT funding and that, although CPOT funding was reduced in fiscal year 2004, HIDTAs could still participate in other discretionary funding activities. Finally, ONDCP believes that while some HIDTAs’ investigations may not link to CPOTs, HIDTAs should focus on finding that link, given their designation as centers of illegal trafficking that affect other parts of the country. Justice emphasized that its restrictions on the distribution of the CPOT list were soundly based, allowed HIDTAs to gain access to the full list, and were not intended to withhold access to the CPOT list from HIDTA personnel. Justice acknowledged that HIDTAs did face some difficulty but was confident the problem has been overcome. We incorporated their perspectives as appropriate. The full text of the ONDCP Deputy Director for State and Local Affairs’ letter and the Department of Justice’s Associate Deputy Attorney General’s memo are presented in appendixes III and IV, respectively. We will provide copies of this report to appropriate departments and other interested congressional committees. 
In addition, we will send copies to the Attorney General of the United States and the Director of the Office of National Drug Control Policy. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. Major contributors to this report are listed in appendix V. If you or your staffs have any questions concerning this report, contact me at (202) 512-8777. During fiscal year 2003, a total of 744 CPOT investigations were conducted by OCDETF member law enforcement agencies. The majority of those investigations (497, or 67 percent) were multi-agency OCDETF investigations, involving participation from DEA, FBI, ICE, IRS, and other member agencies, while the remainder were conducted individually by DEA (191, or 26 percent) or FBI (56, or 8 percent). For fiscal year 2004, the majority of CPOT investigations continued to be multi-agency OCDETF investigations. For the first 7 months of fiscal year 2004, 72 percent (548 of 761) of CPOT investigations conducted by member law enforcement agencies were designated as OCDETF investigations. OCDETF officials attributed the fiscal year 2004 increase in CPOT investigations over fiscal year 2003 to OCDETF’s emphasis on identifying links between targeted domestic organizations and the CPOT list. As previously mentioned, OCDETF is composed of member agencies that worked together on the 497 CPOT investigations in fiscal year 2003. Member agencies either led investigations or supported other OCDETF member agencies in these investigations. The bar chart in figure 3 shows the number of drug investigations in which each OCDETF member agency participated. For example, DEA participated in 402 CPOT investigations, the highest level of participation by any member agency. FBI participated in 320 investigations, many of which it conducted jointly with DEA and other member agencies. 
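The fiscal year 2003 shares reported above can be reproduced with a short calculation; the investigation counts are those given in this section:

```python
# CPOT investigations conducted in fiscal year 2003, by conducting agency
# (counts as reported above; percentages are shares of the 744 total).
counts = {"multi-agency OCDETF": 497, "DEA alone": 191, "FBI alone": 56}
total = sum(counts.values())
assert total == 744  # the three categories account for all investigations

for agency, n in counts.items():
    print(f"{agency}: {n} of {total} ({n / total:.0%})")
```

Rounded to whole percentages, the shares come out to 67, 26, and 8 percent, matching the figures in the text.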
DEA and FBI are the only OCDETF member agencies that conducted separate CPOT investigations. Generally, these investigations were handled outside of OCDETF because they did not yet satisfy the criteria for OCDETF designation—that is, they were investigations conducted exclusively by foreign offices or investigations that had not yet developed to a sufficient level to be designated as OCDETF cases. For the first 7 months of fiscal year 2004, data showed that DEA separately conducted 23 percent (172 of 761) and FBI separately conducted 5 percent (41 of 761) of investigations linked to CPOTs, in addition to their participation in OCDETF investigations. These two agencies funded their separate CPOT investigations out of their own direct appropriations. These CPOT investigations can subsequently become eligible for OCDETF funding when OCDETF’s criteria are met. For example, besides being linked to the CPOT list, DEA and FBI investigations are to involve multiple law enforcement agencies, among other things, in order to qualify as OCDETF-designated CPOT investigations. Figure 4 shows the relationship among OCDETF, DEA, and FBI in their handling of CPOT investigations and shows that DEA and FBI conduct CPOT investigations both separately and collectively with other OCDETF member agencies. Figure 4 also shows the collaborative relationship between ONDCP and Justice. In addition to those named above, the following individuals contributed to this report: Frances Cook, Grace Coleman, David Dornisch, Michael H. Harmond, Weldon McPhail, and Ron Salo. In fiscal year 2002, the Attorney General called upon law enforcement to target the "most wanted" international drug traffickers responsible for supplying illegal drugs to America. 
In September 2002, law enforcement, working through the multi-agency Organized Crime Drug Enforcement Task Force (OCDETF) Program, developed a list of these drug traffickers, known as the Consolidated Priority Organization Target (CPOT) List, to aid federal law enforcement agencies in targeting their drug investigations. Also, the White House's Office of National Drug Control Policy (ONDCP) collaborated with law enforcement to encourage existing High Intensity Drug Trafficking Areas (HIDTA) to conduct CPOT investigations. According to ONDCP, the 28 HIDTAs across the nation are located in centers of illegal drug production, manufacturing, importation, or distribution. Beginning in fiscal year 2002, ONDCP distributed discretionary funds to supplement some HIDTAs' existing budgets for investigating CPOT organizations. Out of concern that a CPOT emphasis on international drug investigations would detract from the HIDTA program's regional emphasis, the Senate Committee on Appropriations directed GAO to examine whether investigations of CPOT organizations are consistent with the HIDTA program's mission and how ONDCP distributes its discretionary funds to HIDTAs for CPOT investigations. The mission of the HIDTA program is to enhance and coordinate U.S. drug control efforts among federal, state, and local law enforcement agencies to eliminate or reduce drug trafficking and its harmful consequences in HIDTAs. CPOT investigations were not inconsistent with this mission because HIDTAs' targeting of local drug traffickers linked with international organizations on the CPOT list was one possible strategy for achieving the program's goal of eliminating or reducing significant sources of drug trafficking in their regions. GAO found that in fiscal years 2002 through 2004, ONDCP distributed discretionary funds to 17 of the 28 HIDTAs for CPOT investigations. 
Some HIDTA officials said they did not receive CPOT funding for several reasons, including unclear guidance, insufficient information provided to the HIDTAs for funding applications, and local priorities that did not link with CPOT investigations. Reduced discretionary funding in fiscal year 2004 for CPOT investigations affected the number of HIDTAs that received this funding.
DOD revised its approach to BMD in Europe as part of the department’s comprehensive review of BMD strategy and policy, which culminated in the February 2010 Ballistic Missile Defense Review. In that report, DOD set out to match U.S. BMD strategies, policies, and capabilities to the requirements of current and future threats and to inform DOD planning, programming, budgeting, and oversight. Judging that the current and planned defenses against intercontinental ballistic missiles will protect the United States against such threats from North Korea and Iran for the foreseeable future, DOD is refocusing its resources to defend deployed forces and allies against regional threats. Each region will have a phased adaptive approach to BMD tailored to the threats and circumstances unique to that region, with a principal focus on Europe, East Asia, and the Middle East. DOD’s goal is to enable a flexible, scalable response to BMD threats around the world by incorporating new technologies quickly and cost-effectively and concentrating on the use of mobile and relocatable BMD assets instead of fixed assets. In addition, DOD expressed a commitment to testing new assets before fielding them to allow assessment under realistic operational conditions. Finally, DOD is emphasizing working with regional allies to strengthen BMD and its deterrent value. The European Phased Adaptive Approach to BMD is the first implementation of this revised strategy and policy. EPAA currently consists of four phases of increasing capability spanning through 2020. Table 1 summarizes DOD’s proposed time frames and capabilities for the four phases of EPAA. For a further description of the various BMD assets that may be part of EPAA, see appendix II. A number of stakeholders within DOD play a role in developing, building, fielding, and governing BMD. MDA is responsible for the acquisition of the elements that comprise the integrated Ballistic Missile Defense System (BMDS). 
MDA continues to be exempted from DOD’s traditional joint requirements determination, acquisition, and associated oversight processes and retains its expanded responsibility and authority to define BMD technical requirements, change goals and plans, and allocate resources. Although not required to build elements to meet specific operational requirements as it would be under traditional DOD processes, MDA is required to work closely with the combatant commands when developing BMD capabilities. DOD reported in the Ballistic Missile Defense Review that it would maintain its existing policy of developing, building, fielding, and governing BMD as it had prior to the EPAA announcement. Table 2 identifies some of the key DOD stakeholders that are involved in the implementation of EPAA. In previous reports on BMD, we have identified challenges associated with MDA’s BMD efforts and DOD’s broader approach to BMD planning, implementation, and oversight. For instance, we concluded in a February 2010 report that although MDA had shown progress in demonstrating increased performance, its cost estimates could not be thoroughly assessed and some planned capability could not be verified due to target shortfalls and modeling limitations. In addition, in September 2009, we reported that DOD had not identified its requirements for BMD elements and interceptors and had not fully established units to operate the elements before making them available for use. For additional GAO reports on BMD, see the Related GAO Products section. DOD has initiated multiple simultaneous efforts to implement EPAA, including considering options for the deployment of assets, requesting forces, preparing for testing, analyzing infrastructure needs, and gaining North Atlantic Treaty Organization (NATO) support for BMD in Europe. DOD manages its BMD efforts by individual program elements and considers EPAA a flexible approach, not a program. 
However, the department faces three key management challenges—lack of clear guidance, life-cycle cost estimates, and a fully integrated schedule—that may result in inefficient planning and execution, increased cost and performance risks, and limited oversight of EPAA. First, DOD has not yet established clear guidance to help direct and align its EPAA efforts. Without such guidance, DOD faces uncertainty in planning and implementing this revised approach. Second, DOD has not yet developed EPAA life-cycle cost estimates and has indicated that it is unlikely to do so because EPAA is considered a policy designed to maximize flexibility. As a result, DOD does not have a basis from which to assess EPAA’s affordability and cost-effectiveness and is missing a tool with which to monitor implementation progress. Finally, the EPAA phase schedule is not fully integrated with acquisition, infrastructure, and personnel activities. As a result, DOD does not have the information it needs to assess whether the EPAA schedule is realistic and achievable, identify potential problems, or analyze how changes will impact the execution of this effort, and therefore is exposed to increased schedule, performance, and cost risks. Without addressing these three management challenges, DOD will likely face difficulties in planning for and implementing EPAA, potentially resulting in significant cost increases. Since the September 2009 announcement of EPAA, stakeholders throughout DOD—including U.S. European Command (EUCOM), MDA, and the military services—as well as the State Department, have taken steps to implement this policy, including considering options for the deployment of assets, requesting forces, preparing for testing, analyzing infrastructure needs, and gaining NATO support for BMD in Europe. For example, EUCOM initiated EPAA planning efforts and submitted an official request for some of the BMD assets it determined are needed for Phase 1, including the personnel to operate them. 
EUCOM, with the assistance of its service components, has been developing an operation plan for EPAA. DOD officials told us that this plan, covering Phase 1, is expected to be approved in the spring of 2011. EUCOM officials told us that their efforts have been informed by the command’s close collaboration with MDA, which has provided it with information on the capabilities of BMD assets the command intends to employ in its operational plan. In order to facilitate the information exchange, MDA has located representatives at EUCOM headquarters. EUCOM has also been working with MDA to develop test designs for the BMD system that may be fielded in EUCOM’s area of responsibility. In particular, EUCOM designed notional EPAA architectures that will be used in testing. The results of these tests are intended to provide the command with greater visibility into the performance of the BMD system it will be responsible for employing. MDA has also taken a number of steps to implement EPAA. As we reported in December 2010, MDA has made progress in acquisition planning for EPAA, including integrating and aligning its test planning efforts with EPAA phases through its semiannual Ballistic Missile Defense System Integrated Master Test Plan. MDA has collaborated with the combatant commands and members of the testing community to develop an Integrated Master Test Plan to support planning and execution of all BMD testing for the phased adaptive approach. Additionally, according to MDA, its Global Deployment Program Office has been actively engaged in an effort to align the acquisition activities of EPAA with the EPAA efforts of other stakeholders, such as the State Department, host country embassy personnel, the Office of the Under Secretary of Defense for Policy, EUCOM, the Joint Staff, and the military services. Officials from the military services and EUCOM’s service components told us they are also pursuing activities to support EPAA planning, as the following examples illustrate. 
The Navy has established the Ballistic Missile Defense Enterprise, an effort aimed at coordinating all Navy BMD activities to support EPAA as well as other BMD missions. The Army Corps of Engineers is working with MDA and the Navy on the preliminary stages of a technical analysis related to Aegis Ashore site options. U.S. Naval Forces Europe is analyzing its Aegis BMD ship presence options and requirements as well as planning for Aegis Ashore. U.S. Army Europe is conducting resource planning for potential basing concepts and manning requirements of Army BMD assets that may be allocated for EPAA, such as the Terminal High-Altitude Area Defense (THAAD) element and the AN/TPY-2 radar. U.S. Air Forces in Europe is drafting a concept of operations that, when approved by the EUCOM Commander, will establish the command and control relationships for conducting BMD operations for EPAA. Similar efforts are in progress within NATO. The State Department, in coordination with DOD, has also made significant progress in achieving NATO support for BMD in Europe. NATO recently adopted the territorial missile defense mission—to protect its populations and territories in Europe against ballistic missile attack—but now must undertake the challenging task of reaching agreement on how to implement this new mission. Poland and Romania have agreed to host U.S. BMD assets, although the United States has not yet found a host nation for a critical sensor planned for deployment in 2011. Finally, NATO members may provide BMD assets to assist in the defense of Europe. However, the United States currently is the only NATO member with BMD assets designed to provide territorial defense. See appendix IV for more details of NATO support for BMD in Europe. DOD has initiated many efforts to implement EPAA, but the department has not yet established clear guidance to help direct and align its efforts. According to DOD, effective planning requires clear guidance on desired end states. 
In the context of BMD, this could include information such as the purpose and duration of the mission and areas to be defended, as well as priorities within a region and between regions. While senior DOD officials stated that the President’s EPAA announcement and the Ballistic Missile Defense Review provide sufficient guidance to begin planning and implementation, a recent DOD study recommended planning guidance be further refined. Further, key BMD stakeholders, including those from the Joint Staff, combatant commands, and military services believe that additional guidance is needed for EPAA. Senior DOD officials from the Office of the Under Secretary of Defense for Policy, the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics, the Joint Staff, and MDA told us it was their view that the President’s announcement and the Ballistic Missile Defense Review provide sufficient guidance to enable the Joint Staff, combatant commands, and services to begin planning and implementing EPAA. The officials also noted that some additional guidance would be forthcoming through the regular updating of DOD’s high-level policy and planning documents. According to these officials, EPAA is a policy framework for the evolutionary development and fielding of missile defenses in Europe to defend against ballistic missile threats. They further indicated that the EPAA framework does not establish or dictate a specific architecture or force structure requirement. Additionally, the officials stated that the Joint Staff and the combatant commands are responsible for translating the overarching policy into specific requirements to allow military forces to execute the policy. 
Moreover, the senior officials also stated that the specific requirements for EPAA, including architecture, would be developed by the combatant commands and Joint Staff in consultation with the Office of the Secretary of Defense using standard DOD planning processes and that any policy gaps that may emerge would be addressed as plans are iterated through the normal planning process. DOD examined the need for policy guidance in the Global Force Management Development Project, a study to clarify and more fully assess the scope and implications of the decision to adopt EPAA and the phased adaptive approach in general. This effort was led by the Joint Staff and included participation from U.S. Strategic Command, EUCOM, U.S. Pacific Command, U.S. Central Command, U.S. Northern Command, U.S. Joint Forces Command, the Office of the Under Secretary of Defense for Policy, and technical assistance from MDA. The study was tasked with developing the plan and facts to be used to allocate limited BMD assets among the combatant commands as regional situations and national strategies require. The classified study was unable to fully address this task but concluded, among other things, that DOD needed to refine its BMD planning guidance, identifying 14 BMD-related general planning guidance questions that DOD needed to answer. According to Joint Staff officials, the study’s findings were briefed to and endorsed by several senior DOD boards, including the Missile Defense Executive Board in May 2010. Officials from the Office of the Under Secretary for Defense for Policy told us that it takes time to fully develop all of the strategic planning and investment guidance necessary to implement a significant policy shift like EPAA. Further, the officials added that some of the guidance questions identified in the study could not be addressed immediately because they had to be sequenced with other events. 
They gave the example that some of the guidance would rely on decisions made by NATO, which has only recently adopted the territorial missile defense mission. Consistent with the study’s findings, officials from the Joint Staff, combatant commands, and services told us that DOD needed to provide more clarity on desired EPAA end states to ensure that they were appropriately executing their responsibilities. For example, Army officials told us that the Army’s primary concern with EPAA was the lack of clear guidance on end states and said that the Army could not be certain that it was appropriately preparing to support EPAA assets without knowing what assets would be deployed when, where, and for how long. In addition, the Navy created a new organization to help coordinate the service’s BMD efforts and also developed its own set of EPAA facts and assumptions so that it could support EPAA requirements. However, Navy officials told us that although they coordinate with other BMD stakeholders regularly, they did not know if everyone was operating under the same end-state assumptions, including assumptions about force allocation and deployment deadlines. Combatant command officials also told us that existing guidance did not provide clarity on desired end states, including prioritization of regions to be defended. By contrast, other BMD policy decisions, such as the 2002 decision to deploy BMD and the later decision to deploy an AN/TPY-2 radar to Israel, were based on clear and formal policy guidance, according to Joint Staff officials. The officials told us that the lack of clear guidance for EPAA was leading different organizations to make different assumptions about desired end states and that this was resulting in inefficient planning and execution. 
A reason that BMD stakeholders throughout DOD may be seeking further planning guidance is that there is a lack of clarity on both the relative priority of EPAA to other BMD missions around the world and the extent to which BMD assets will be deployed forward. Although the Ballistic Missile Defense Review presents the phased adaptive approach as pertaining to all geographic combatant commands, EPAA was a presidential policy decision, implying a certain priority for European BMD needs. However, this priority has not yet been formally codified through a presidential directive or memorandum. Additionally, statements by senior DOD officials have detailed potential EPAA plans that, if carried out, would consume a significant portion of DOD’s BMD assets, depending on the amount of physical presence required. For example, depending on the interpretation of existing guidance for EPAA, Aegis BMD ships could be tasked with maintaining a continuous physical forward presence; be held available only to surge into the theater in response to heightened threat situations; or provide a mixture of forward presence and surge capability. The Ballistic Missile Defense Review also discusses the need to have a strategic approach to regional BMD and tailor the requirements to the unique and varied needs of each region, including Europe. DOD is undertaking several studies related to regional BMD, led by the Joint Staff and U.S. Strategic Command, that should help to better define force allocation and quantity needs for both surge and forward-presence BMD forces. Additionally, senior officials from the Office of the Under Secretary of Defense for Policy, the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics, and MDA said that there is a draft presidential directive that will help clarify EPAA policy. 
However, DOD has not yet issued formal guidance clarifying the EPAA mission, including its relative priority among the regions identified for the phased adaptive approach. DOD officials told us that combatant commands responded to the EPAA announcement and the uncertainty about priorities with a surge of requests for BMD forces to ensure that their requirements would be met. Without establishing guidance to more fully align understanding throughout the department on what the desired end states are for EPAA, including its relative priority to other regional BMD architecture requirements, the department faces uncertainty in planning and implementing this revised approach. DOD has not established life-cycle cost estimates for EPAA and therefore is missing an important management tool for preparing budgets, monitoring progress and assessing long-term affordability of its revised approach to BMD in Europe. DOD has stated two main reasons for not establishing life-cycle cost estimates for EPAA. First, DOD officials told us that DOD does not intend to prepare separate life-cycle cost estimates for EPAA because DOD views it as an approach, not a program, and so funding is provided through the individual BMD elements that make up EPAA. However, in introducing the revised approach to BMD, the department emphasized that it would be fiscally sustainable and affordable. Additionally, in referring to EPAA in prepared testimony before Congress, the MDA Director stated that DOD was “committed to fully funding this program.” Although DOD reported that the acquisition cost estimates and annual BMD budget request for individual elements include EPAA costs, we found that such information does not include full life-cycle costs. Further, this budgeting method is fragmented and so does not provide decision makers with a transparent and holistic view of EPAA costs. 
Second, DOD has emphasized that the inherent flexibility of EPAA makes developing life-cycle cost estimates for the approach difficult. However, without life-cycle cost estimates DOD may not be able to determine whether its revised approach to BMD in Europe is fiscally sustainable and affordable. We have found that key principles for managing major investments such as EPAA include that an organization should understand the financial commitment involved and ensure appropriate transparency and accountability. Further, according to the GAO cost estimating guide, a credible cost estimate is required in order to assess a program’s affordability and cost-effectiveness and to serve as a basis for a budget. The guide identifies 12 steps necessary for developing credible cost estimates. Following these steps ensures that realistic cost estimates are developed and presented to management, enabling them to make informed decisions about whether the program is affordable within the portfolio plan. Providing decision makers with a program’s updated cost estimate helps them monitor the implementation of the program and ensure that adequate funding is available to execute the program according to plan. Finally, credible cost estimates serve as a basis for a program’s budget and validate that a program’s strategy has an adequate budget for its planned resources. Part of the challenge in determining EPAA life-cycle costs results from uncertainty about what elements and interceptors will be included in EPAA. According to the GAO cost estimating guide, the final accuracy of cost estimates depends on how well a program is defined. In order to develop credible estimates, an organization needs detailed technical, program, and schedule descriptions from which all life-cycle cost estimates can be derived. Some of these details would include system architecture, deployment details, operational concepts, personnel requirements, and logistics support. 
DOD’s phased schedule for EPAA comprises multiple elements and interceptors to provide ever-improving integrated BMD capability, but many aspects of the approach have not yet been determined. For example, DOD has thus far committed to using two Aegis Ashore facilities and at least one AN/TPY-2 radar. Additionally, each EPAA phase could have as many as three Aegis BMD ship patrol areas, but DOD has not yet committed to a specific number of ships or SM-3 interceptors for each phase. As we reported in December 2010, DOD also has not yet committed to the specific type or number of the other elements and interceptors that will be part of the EPAA phases. Figure 1 summarizes the current status of DOD’s BMD assets that may be part of EPAA. Despite the current lack of detail on the implementation of EPAA policy, best practices for cost estimating include methods for developing valid cost estimates when a program’s details are limited, which still provide markers for measuring progress and assessing affordability. The cost guide makes special mention of spiral development efforts that, like EPAA, do not have clearly defined final requirements. In such cases, valid cost estimates can be developed as long as they clearly state the requirements that have been included and account for those that have been excluded. The Congressional Budget Office and the Institute for Defense Analysis completed such analyses for the previous approach to BMD in Europe, and the Institute for Defense Analysis also completed a cost estimate for EPAA. As the types and quantities of elements and interceptors needed for EPAA become better defined over time, cost estimates should be updated to ensure that managers understand the impact of any changes. DOD has also emphasized that the inherent flexibility of EPAA makes developing life-cycle cost estimates for the approach difficult.
According to senior DOD officials, the department could develop a life-cycle cost estimate for the phased adaptive approach, but they were unsure of the relevance of characterizing unique costs for EPAA. The officials said that DOD places significant emphasis on flexibility in its new approach to regional BMD, calling EPAA flexible by nature. The officials also stated that DOD’s focus on using mobile and relocatable BMD assets for EPAA and in other regions means that the mix of elements and interceptors in each region could be adjusted to adapt to changes in threat. The result of this flexibility, according to the Ballistic Missile Defense Review, is that the actual life-cycle cost of the missile defense system is difficult to determine because there is no final configuration for the system. However, an organization can develop estimates for a range of possible scenarios. A cost estimating best practice in developing technical baselines is to define deployment details for various scenarios, such as peacetime, contingency, and war. By presenting a range of scenarios, decision makers can better understand the short-term and long-term cost implications of different options and better evaluate their choices. While we recognize that life-cycle cost estimates for the later phases will carry more uncertainty than those for the near-term phases, the flexibility EPAA needs to respond to changes in threat or technology over its four phases is bounded, and cost estimating practices are adaptive enough to allow for the development of valid cost estimates. Table 3 describes our assessment of DOD’s rationales for EPAA flexibility, factors limiting flexibility or the need for it, and their impact on DOD’s ability to develop life-cycle cost estimates for EPAA.
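The scenario-range practice described above can be illustrated with a simple cost roll-up. The following is a minimal sketch; the element names, quantities, and unit costs are purely hypothetical and are not DOD estimates:

```python
# Illustrative scenario-based life-cycle cost roll-up.
# All element names, quantities, and unit costs below are hypothetical.

# Assumed per-element life-cycle cost (acquisition plus operations, $M)
UNIT_COST = {"land_battery": 800.0, "radar": 450.0, "ship_patrol": 600.0}

# Hypothetical deployment quantities under three planning scenarios
SCENARIOS = {
    "peacetime":   {"land_battery": 2, "radar": 1, "ship_patrol": 1},
    "contingency": {"land_battery": 2, "radar": 2, "ship_patrol": 2},
    "war":         {"land_battery": 3, "radar": 2, "ship_patrol": 3},
}

def lifecycle_cost(mix):
    """Total life-cycle cost ($M) for a given mix of elements."""
    return sum(UNIT_COST[element] * qty for element, qty in mix.items())

estimates = {name: lifecycle_cost(mix) for name, mix in SCENARIOS.items()}
low, high = min(estimates.values()), max(estimates.values())
print(f"Life-cycle cost range: ${low:,.0f}M to ${high:,.0f}M")
```

Because each scenario varies only the deployed quantities, the same per-element cost assumptions yield a bounded range of estimates that decision makers can compare, even while the final configuration remains undefined.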
There may be occasions when DOD will need to adjust to quantitative and qualitative developments in the existing threat that are more rapid than projected, or to the emergence of new missile threats from an unexpected location. Good life-cycle cost estimates are equipped to deal with such unforeseen circumstances because they clearly list the facts and assumptions on which they are based. In such circumstances, a life-cycle cost estimate would provide additional information to decision makers in DOD and Congress as they evaluate their options. Until DOD develops EPAA life-cycle cost estimates—which could potentially be part of a larger phased adaptive approach life-cycle cost estimate—the department will not have an accurate basis from which to determine the financial sustainability and affordability of the revised approach to BMD in Europe and is missing a tool with which to monitor its implementation. DOD established the EPAA phase schedule without fully integrating it with key acquisition, infrastructure, and personnel activities and, as a result, the department does not have an important management tool with which to assess whether the EPAA schedule is realistic and achievable, identify potential problems, or analyze how changes will impact the execution of this effort. Consequently, the effort may be exposed to schedule, performance, and cost risks. Implementing EPAA will require the synchronization of numerous efforts, including acquisition, infrastructure, and personnel activities. For example, DOD must develop and produce the BMD elements and interceptors for EPAA and must be able to integrate them into a system. The performance of a fielded BMD architecture, including the size of the area defended, is dependent on several factors, including the types and numbers of elements and interceptors fielded, the extent to which fielded elements are linked together operationally, and the geographic location of the elements (see fig. 2).
Further, DOD must also have the appropriate infrastructure in place—such as needed power, water, roads, facilities, and security—in time to support not only the elements and interceptors it intends to field as part of EPAA but also the personnel necessary to operate and maintain them. DOD must also have these trained personnel available in time to carry out those duties. The department is working to implement EPAA, but EPAA timelines may not match the time needed to integrate and execute the necessary acquisition, infrastructure, and personnel activities. Our past work shows that a program’s success depends on the quality of its schedule. If it is well-integrated, a schedule clearly shows the relationships between program activities, activity resource requirements and durations, and any constraints that affect their start or completion. The schedule shows when major events are expected as well as the completion dates for all activities leading up to them, which can help determine whether the schedule is realistic and achievable. When fully laid out, a detailed schedule can be used to identify where problems exist or could arise. Moreover, as changes occur within a program, a well-integrated schedule will aid in analyzing how they affect the program. For these reasons, an integrated schedule is key in managing program performance and is necessary for determining what work remains and the expected cost to complete it. According to officials from MDA, the Navy, the Army, the Office of the Secretary of Defense, U.S. Naval Forces Europe, and EUCOM, a principal challenge for implementing EPAA is meeting its schedule. DOD established the EPAA phase schedule based on a top-level evaluation of the implementation activities that could impact or be impacted by that schedule and, as a result, DOD may face challenges executing it.
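The scheduling properties described above—linked activities, durations, and constraints that drive completion dates—can be sketched as a simple forward-pass calculation over a dependency network. The activities and durations below are hypothetical illustrations, not DOD’s actual EPAA schedule:

```python
# Minimal forward-pass schedule calculation over hypothetical activities.
# Durations are in months and are purely illustrative.
from functools import lru_cache

# activity -> (duration in months, list of predecessor activities)
ACTIVITIES = {
    "host_nation_agreement": (12, []),
    "site_design":           (9,  []),
    "construction":          (12, ["host_nation_agreement", "site_design"]),
    "element_production":    (18, []),
    "install_and_integrate": (6,  ["construction", "element_production"]),
    "train_personnel":       (10, ["host_nation_agreement"]),
    "operational":           (0,  ["install_and_integrate", "train_personnel"]),
}

@lru_cache(maxsize=None)
def earliest_finish(activity):
    """Earliest finish (months from start), honoring predecessor links."""
    duration, predecessors = ACTIVITIES[activity]
    start = max((earliest_finish(p) for p in predecessors), default=0)
    return start + duration

print(f"Earliest operational date: month {earliest_finish('operational')}")
```

Laying out even this level of detail makes slips visible: delaying any activity on the longest chain of dependencies pushes out the operational date, which is the kind of insight an integrated schedule provides and an unintegrated one does not.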
EPAA is a policy framework and not a fully developed architecture or program, according to senior DOD officials responsible for developing the policy. Further, the schedule for EPAA was largely based on aligning projected changes in the threat with the availability of new technology, including the various SM-3 interceptor variants. These officials said that they relied upon acquisition feasibility and affordability information for various options provided by MDA and that the Joint Staff represented service and combatant command concerns during the development of the phased schedule. However, they also stated that the military services and combatant commands began examining the specific implementation requirements of EPAA after the policy’s announcement. EPAA’s phases are not yet integrated with key acquisition activities and so are exposed to risk of schedule slips, decreased performance, and increased cost. As we reported in December 2010, EPAA policy calls for DOD to deliver BMD capabilities on a timeline that requires concurrency among technology, design, testing, and other development activities; this concurrency introduces risk of increased costs, schedule delays, or performance shortfalls that must be addressed. A sound acquisition has firm requirements, mature technologies, and a strategy that provides sufficient time for design activities before the decision is made to start development and demonstration or to transition to production. As we reported, it is questionable whether DOD’s approach allows sufficient time for these activities. Schedules for the individual elements are highly optimistic in technology development, testing, production, and integration, leaving little room for potential delays. Additionally, DOD has not formally or fully aligned acquisition programming to support EPAA or set acquisition decision points for each phase, including production decisions.
An integrated schedule defines major decision points at which to review demonstrated progress and follow-on plans. It establishes exit and entrance criteria to show that components are ready to move from one developmental step to the next and that each component fits within the context of the bigger system to which it contributes. While the individual BMD elements each have a schedule, DOD has not developed an integrated schedule for EPAA that aligns the necessary acquisition activities. As a result, decisions about production of individual elements, risks associated with individual elements and interceptors, overall BMD system interoperability and integration, and assessment of the integrated system do not appear to be fully linked to the phases. Additionally, the Missile Defense Executive Board, which is responsible for overseeing missile defense portfolio developments, has thus far focused program reviews solely at the element level, not the broader EPAA level. According to DOD, the department is developing an integrated acquisition schedule for EPAA. Without such a schedule, DOD acquisition managers, stakeholders, and Congress lack an integrated EPAA-level view of BMD development. Table 4 summarizes some development risks for the individual BMD assets as well as the integrated system that may be exacerbated by the EPAA schedule compression. Furthermore, the EPAA phase schedule is not yet integrated with key infrastructure activities and therefore is also exposed to risk of schedule slips, decreased performance, and increased cost. BMD assets, such as the AN/TPY-2 radar and Aegis Ashore, require infrastructure to support and secure the assets. Designing, funding, and building military infrastructure can take years. Officials from MDA, the Navy, EUCOM, U.S. Naval Forces Europe, and the Army Corps of Engineers stated that having the necessary infrastructure in place to support the scheduled 2015 operational date for the first Aegis Ashore could be challenging.
There were some early design questions about how relocatable Aegis Ashore was supposed to be, which had direct implications for infrastructure requirements. According to officials from MDA and the U.S. Army Corps of Engineers, initial design options included a modular construction option that allowed for placement or removal of Aegis Ashore from a site within 120 days. Infrastructure needs for the initial modular design option would have been minimal. However, DOD decided not to pursue the initial modular design because of technical challenges that could have degraded performance and driven up Aegis Ashore development and acquisition costs, as well as increased the costs of operating and sustaining the element. Moreover, there was disagreement among the officials to whom we spoke about the impact of pursuing a new design on infrastructure needs—ranging from no change to requiring significant additional infrastructure. Although DOD is beginning to narrow its design approach for Aegis Ashore, it is operating under a compressed schedule to meet the 2015 operational date for Phase 2. Construction, and therefore funding, for all of the necessary Phase 2 Aegis Ashore facilities and associated infrastructure needs to begin in fiscal year 2013, according to officials from the U.S. Army Corps of Engineers, Navy, and U.S. Naval Forces Europe. However, MDA reported to us, and a senior DOD official testified to Congress, that Aegis Ashore site construction will take approximately 1 year. According to officials from the Navy and U.S. Army Corps of Engineers, Aegis Ashore infrastructure costs remain unknown because the designs have not yet been finalized for the system itself or the supporting infrastructure. U.S. Army Corps of Engineers officials said that they are working closely with the Navy and MDA to reach basic agreement on the design of the infrastructure in March 2011, in time for MDA to budget for the needed facilities in fiscal year 2013.
However, Army Corps of Engineers officials said that the Romania Aegis Ashore site design and construction estimate will not be as mature as those of typical military construction projects, which may expose the Aegis Ashore construction site to increased risk of design modifications, increased costs, and possible delays. As we have previously reported, DOD underestimated its BMD support infrastructure requirements and military construction costs for the prior plan for BMD in Europe when it did not follow the traditional military construction requirements. Army Corps of Engineers officials noted that DOD is accepting this extra risk with Aegis Ashore because waiting for a more complete design for Aegis Ashore in Romania would result in missing the 2015 deadline. According to DOD, it is longstanding DOD policy to make best efforts to conclude a binding international agreement documenting the host nation’s permission for the presence of DOD personnel and equipment in its territory as well as adequate status protections for such personnel. According to the State Department, an agreement enters into force when the parties consent to be bound by the agreement, at which point the parties are legally obligated to comply with the agreement’s provisions. Depending on the form of the agreement and the parties’ domestic requirements, entry into force may require any number of events, including signature, ratification, exchange of notes, or some combination of these. According to DOD and State Department officials, negotiations and the ratification process for the Aegis Ashore facilities in Romania and Poland—to be completed as part of Phases 2 and 3, respectively—are in progress and, though the officials do not anticipate any significant delays, they also cannot predict when negotiations and ratification will be complete or when agreements will enter into force. For example, the U.S. government ran into unexpected delays in host nation agreement ratification when it was attempting to implement the previous approach to BMD in Europe.
According to DOD, its schedule assumption in 2007 was that both Poland and the Czech Republic would complete the necessary ratification of host nation agreements by the end of fiscal year 2008. However, as we previously reported, delays in the ratification of key host nation agreements presented challenges to DOD’s planning and implementation of its prior approach to BMD in Europe. In that report, we also noted that the ratification votes were delayed, in part, because of a desire on the part of both the Polish and Czech parliaments to wait for an indication from the current U.S. administration on its policy toward ballistic missile defenses in Europe. In the end, neither Poland nor the Czech Republic ratified the necessary agreements before September 2009, when the United States decided to take a new approach to BMD in Europe. Similar delays in host nation agreement ratification for Aegis Ashore could also impact EPAA and result in schedule slips, decreased performance, or increased cost. Additionally, the United States must reach agreement with nations to host other land-based BMD assets that may be part of EPAA. For example, DOD’s plans for EPAA Phase 1 include an AN/TPY-2 radar intended to provide early warning data to engage short- and medium-range ballistic missile threats and provide additional tracking information for homeland defense. According to a senior Joint Staff official, the AN/TPY-2 will significantly increase the capability of Aegis BMD, which is also intended to be part of Phase 1. However, the United States has not reached agreement with a country to host the AN/TPY-2. If such an agreement is not reached soon, there may not be enough time to construct the necessary facilities for the AN/TPY-2 and deploy it by the end of 2011, thereby diminishing DOD’s expected EPAA Phase 1 performance. The EPAA timeline is not yet integrated with key activities to ensure that personnel needs are met.
The military services are responsible for organizing and training personnel, a process that typically takes years once requirements are identified. DOD generally requires that major weapon systems be fielded with a full complement of organized and trained personnel. As we previously reported, DOD has in the past put BMD elements into operational use before first ensuring that the military services had created units and trained service members to operate them and, as a result, combatant commanders sometimes lacked certainty that the forces could operate the elements as expected. DOD concurred with our recommendation that it require, in the absence of an immediate threat or crisis, that operational units be established with the organizations, personnel, and training needed to perform all of their BMD responsibilities before first making elements available for operational use. DOD’s aggressive EPAA schedule runs the risk of deploying assets without the full complement of trained personnel needed to carry out the mission, which could lead to issues with operational performance. For example, Navy officials told us that they will likely have to extend sailors’ rotations beyond the standard deployment length to meet possible EPAA ship requirements for Phase 1, thus placing a strain on the force and possibly affecting performance. The Navy is already dealing with manning issues that may affect BMD asset capabilities. In 2010, separate reports by the Navy found Aegis radar manpower and performance in decline. The reports stressed that the Navy’s Aegis crews are already overextended and that the Navy lacks sufficient numbers of qualified people to meet its radar maintenance requirements. Additional requirements for Aegis presence because of EPAA could contribute further to this problem. Reducing EPAA deployments to address these concerns would result in a decrease in expected capability.
Moreover, DOD has yet to make key decisions that will affect its personnel needs and so does not yet know how these needs will affect the EPAA schedule. For example, Navy officials told us that they lack some crucial information, such as the required Aegis ship presence for the early phases of EPAA or the design of Aegis Ashore for later phases. This hinders their ability to fully plan and develop the necessary organizations, personnel, and training requirements. Navy officials said that the Navy expects to keep training requirements for the personnel operating the Aegis Ashore weapon system very similar to the training needed for the Aegis weapon system on the ship, thus simplifying training requirements. However, Navy officials said that some support infrastructure jobs unique to Aegis Ashore are difficult to assess, and training for these will have to be developed as Aegis Ashore designs mature. The Navy has not yet been able to establish training requirements for maintaining the land-based vertical launch system that is part of Aegis Ashore, for instance, because the design has not been finalized. Further, Navy officials told us that the personnel required for Aegis Ashore could differ significantly depending on whether it is required to operate at full readiness at all times or at some lower level of readiness. A requirement for maintaining high readiness could increase personnel costs and challenge the service’s ability to provide sufficient personnel. Also, Army officials told us that they need more guidance on what Army systems will be part of EPAA and when these systems will need to be operational. DOD is working to clarify many of its EPAA needs, and doing so will help inform personnel needs and allow the services to prepare the necessary organizations and training for personnel.
We have already mentioned several of these efforts, such as EUCOM’s operational plan expected to be completed in spring 2011; the plan by the Navy, MDA, and Army Corps of Engineers to reach agreement on Aegis Ashore facilities needs in March 2011; and the U.S. Strategic Command-led force allocation study that will inform DOD’s decisions on force distribution. However, service processes to ensure that the full complement of trained personnel is in place will take time. Without an integrated schedule, DOD is missing a management tool with which to assess the effects of emerging personnel needs on the execution of the phased adaptive approach in Europe. DOD has not yet established key performance metrics that would provide the combatant commands with needed visibility into the operational capabilities and limitations of the BMD system they intend to employ, creating potential challenges for EUCOM as it integrates BMD into its operational plans. DOD has already incorporated some combatant commands’ testing needs into BMD testing; however, as of January 2011, the combatant commands’ more detailed, operationally relevant, quantifiable metrics had not yet been incorporated into DOD’s BMD testing plans. The lack of such metrics inhibits EUCOM’s understanding of the operational capabilities and limitations of the integrated BMD system it would have to employ. As a result, the combatant commands will lack key information they need to plan for the phased adaptive approach and so may face challenges in integrating BMD into operational plans. The combatant commands recognize this issue and are currently attempting to establish these metrics; however, the metrics have yet to be finalized and implemented.
Following the establishment of MDA in 2002, initial BMD system designs did not formally consider combatant command requirements because of MDA’s exemption from DOD’s requirements process; however, DOD has since taken multiple steps to increase combatant commands’ visibility into BMD operational performance. According to U.S. Strategic Command, MDA initially achieved the rapid deployment of BMD capabilities because it was unconstrained by operational requirements. Moreover, its testing did not focus on verification of operational BMD system performance against combatant command requirements. The BMD development and assessment process presented challenges for the combatant commands because MDA’s criteria for declaring a BMD element technically capable of performing some tasks did not always allow the combatant commands to thoroughly assess how the element could be operationally employed. For example, after DOD fielded the AN/TPY-2 radar in Japan in 2006, the combatant commands realized they did not have a good enough understanding of the radar’s operational capabilities and limitations to fully employ it. In response to these problems, U.S. Strategic Command, in its role as warfighter advocate for missile defense, began efforts to incorporate combatant command needs into BMD testing and evaluation in order to assess the operational utility of the elements being fielded. In 2008, U.S. Strategic Command published the Force Preparation Campaign Plan, which laid out a framework designed to help manage risk to the combatant commands’ operations by identifying the information combatant commands need about BMD operational capabilities and limitations. For instance, the plan describes the need for designing BMD tests around combatant command operational plans and testing against validated scenarios and threats, since integrated BMD system-level performance is heavily threat-, environment-, and scenario-dependent. U.S.
Strategic Command stressed that combatant commands need this information to develop flexible operational plans and assess BMD capabilities for supporting a command’s missions. MDA has also taken steps to revise its testing program to incorporate combatant command needs, but testing continues to be driven by collection of data points needed to verify the models and simulations used to characterize BMD performance. MDA has integrated many combatant command testing needs into the Integrated Master Test Plan. For instance, MDA has added three Operational Test periods, each aligned with the first three phases of the phased adaptive approach, which, according to U.S. Strategic Command officials, allow the combatant commands to use the BMD system configuration unique to the particular phase for training and operational system evaluation. These ground tests are based on combatant command-developed architectures and the relevant validated threats. EUCOM has been involved in the test design process, including providing input regarding where BMD assets should be located for EPAA. According to EUCOM officials, the test designs were then vetted through EUCOM intelligence and operations experts. Officials also said that the results of the tests will be used by the command to inform its EPAA planning. Although combatant commands are increasingly involved in BMD testing, they have expressed the need for additional metrics that can be used to assess the durability (how long it can defend) and effectiveness (how well it can defend) of the BMD system, which are important for planning the phased adaptive approach. For instance, one of MDA’s metrics for effectiveness is based on a “one-on-one” engagement between a given element or group of elements and a single threat missile. According to DOD officials, it therefore has limited applicability to a more realistic operational scenario where combatant commanders employ an integrated BMD system against multiple threat missiles. 
The combatant commands have concluded that they need to understand BMD system effectiveness and durability in quantitative terms so that, as they prepare their operational plans, they understand BMD’s contribution to the overall mission and appropriately balance it with other options. BMD is only one part of the defensive capabilities; in combat operations, it alone cannot achieve or maintain effective defense against an adversary ballistic missile attack. DOD planning doctrine emphasizes that integrated and interoperable military forces improve the ability not only to defend against a ballistic missile attack with defensive counterair, such as BMD, but also to ensure that offensive counterair can strike potential ballistic missile threats. As more ballistic missile defense assets are deployed into the EUCOM area of responsibility, creating a more complex BMD system, insight into the capabilities and limitations of the system and its overall contribution to EUCOM’s operational plans will become more important. The balance between offensive and defensive options, and therefore the need for a clear understanding of the operational capabilities of the BMD system, is further complicated for EPAA because it requires coordination between two geographic combatant commands—EUCOM and U.S. Central Command—given where the threats may originate. A threat originating from the Middle East, which is primarily U.S. Central Command’s area of responsibility, could be directed at Europe, which is in EUCOM’s area of responsibility. Therefore, these two commands must work together to balance BMD with other options. Without metrics to credibly quantify BMD system performance, EUCOM and other combatant commands will not be able to thoroughly analyze performance gaps. Moreover, without a full understanding of their BMD system’s capabilities and limitations, they will be limited in their ability to develop comprehensive plans that integrate defensive and offensive options.
The combatant commands, led by U.S. Strategic Command, created a process in 2006 to provide them with additional understanding of the operational utility of the BMD system, but this process does not provide the specific performance information the combatant commands seek. Specifically, this BMD assessment process was initially intended to enhance visibility into BMD element capabilities by using subjective assessment criteria expressed in terms of yes or no judgments rather than quantified performance parameters. For example, the effectiveness criteria for the AN/TPY-2 radar include whether that sensor possesses the ability to detect, classify, track, and discriminate ballistic missile threats targeting U.S. defended areas. Thus, rather than assessing the extent to which a capability can perform a certain mission-essential function, the assessment focuses on whether or not a BMD component can perform a certain task. When the combatant commands first implemented this process, they concluded that they would need to later introduce quantifiable mission-essential performance goals that would enable more complete operational assessments of BMD system capability in relation to their operational needs. To develop these quantifiable mission-essential performance goals, the combatant commands, led by U.S. Strategic Command, are currently attempting to introduce quantifiable operational performance metrics into the testing program through an effort called “Assess-to.” The combatant commands are defining metrics to measure BMD system effectiveness (how well it can defend) and durability (how long it can defend) against threats projected by the intelligence and operational communities. More specifically, as defined in a draft Assess-to criteria document, the metric used to measure the effectiveness of a BMD system is expressed mathematically as the ratio of threats defeated to total threats launched.
As such, this metric is designed to allow assessment of BMD system effectiveness against multiple ballistic missile threats. Durability, on the other hand, is defined as the length of time that an established BMD system can provide and sustain defensive capability at a specific level of protection against projected threats. U.S. Strategic Command officials agree that developing Assess-to criteria would help to quantify BMD system capabilities and limitations and thereby provide better data to the combatant commands as they develop their operational plans. The combatant commands have articulated the need for BMD system effectiveness and durability metrics since 2008 and have developed a draft Assess-to document that describes them, but two main barriers have prevented DOD from adopting Assess-to. First, various DOD officials stated that MDA is reluctant to have Assess-to metrics established due to concerns that these types of metrics could effectively turn into requirements to which MDA would be held accountable. As stated previously, MDA is exempt from formal acquisition requirements, and the BMD elements it developed were not built to operational requirements. U.S. Strategic Command officials and documents describing Assess-to are sensitive to this concern and characterize Assess-to criteria in terms of communicating testing needs to MDA as well as goals to “build towards” rather than strict requirements. Second, current limitations in system-level modeling may limit DOD’s ability to test against the identified metrics. Assess-to metrics are geared towards system-level assessment, and currently ground tests—the primary venue for such assessments—rely on models and simulations, many of which continue to lack operational realism.
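The two draft Assess-to metrics described above lend themselves to simple quantitative expression: effectiveness as a ratio of threats defeated to threats launched, and durability as a length of time at a given protection level. The following minimal sketch uses a hypothetical engagement timeline (not actual test data), and the durability calculation is one possible operationalization of the draft definition:

```python
# Illustrative computation of the two draft Assess-to metrics.
# The engagement timeline below is hypothetical test data.

# Each entry: (minutes since start of attack, True if threat was defeated)
engagements = [(0, True), (3, True), (7, False), (12, True), (20, True)]

def effectiveness(engagements):
    """Ratio of threats defeated to total threats launched."""
    defeated = sum(1 for _, hit in engagements if hit)
    return defeated / len(engagements)

def durability(engagements, required_level):
    """Minutes the system sustains at least `required_level` cumulative
    effectiveness against the projected threat stream (one possible
    interpretation of the draft durability definition)."""
    defeated = 0
    sustained_until = 0
    for count, (minute, hit) in enumerate(engagements, start=1):
        defeated += hit
        if defeated / count < required_level:
            break
        sustained_until = minute
    return sustained_until

print(effectiveness(engagements))   # 4 of 5 threats defeated -> 0.8
print(durability(engagements, 0.75))
```

Unlike a yes/no judgment, metrics of this form let planners ask how well and for how long a defended area holds against a multi-missile raid, which is the kind of system-level question the combatant commands have said they need answered.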
Although MDA is working to validate models and simulations, they currently have technical limitations associated with their ability to represent system-wide operationally realistic scenarios. However, MDA officials told us that, while there are challenges associated with coming to agreement on how to quantify BMD effectiveness and durability, MDA believes that it is possible to do so. While various DOD officials told us that MDA and U.S. Strategic Command are collaborating to develop solutions to these issues, until quantifiable operational metrics for BMD system-level assessment are in place, the combatant commands will lack key information they need to plan for the phased adaptive approach and so may face operational risks should a conflict arise. DOD’s revised approach to BMD in Europe reflects the Administration’s desire to focus on threats currently facing the United States and allies while maintaining the flexibility to adapt the approach as threats change and new missile defense technologies become available. Since the September 2009 announcement of EPAA, DOD has taken steps to implement this policy, including considering options for the deployment of assets, requesting forces, preparing for testing, analyzing infrastructure needs, and gaining NATO support for BMD in Europe. However, this approach creates significant planning and implementation challenges that—if left unaddressed—could result in significant management issues and unforeseen costs. First, as a result of the lack of guidance on EPAA’s desired end states, including its priority compared to other BMD missions, the department faces uncertainty in planning and implementing its revised approach, particularly in how it will allocate limited assets among multiple geographic regions. Second, without cost estimates for the life cycle of EPAA, DOD will be unable to judge whether it is meeting its goal that EPAA be fiscally sustainable and affordable. 
The department will also have difficulty in monitoring the implementation of the program and ensuring that adequate funding is available to execute the program according to plan if it does not develop life-cycle cost estimates. Third, DOD does not have an EPAA schedule that integrates key acquisition, infrastructure, and personnel activities. As a result, the department does not have the information it needs to assess whether the EPAA schedule is realistic and achievable, identify potential problems, or analyze how changes will impact the execution of this effort, and therefore is exposed to increased schedule, performance, and cost risks. Finally, without incorporating operationally quantifiable metrics—such as how long the system can defend (durability) and how well the system can defend (effectiveness)—into its test program, DOD will not be able to fully understand the capabilities and limitations of the BMD system and EUCOM will not have the most relevant performance data it needs to thoroughly assess the extent to which BMD capabilities support its mission objectives and judge how to best plan for and employ BMD assets. Unless the department addresses these challenges, DOD will likely face implementation risks that ultimately may increase the cost for this approach in Europe and potentially beyond as it expands this BMD approach to other regions of the world. We recommend that the Secretary of Defense take the following four actions: Direct the Under Secretary of Defense for Policy and Chairman of the Joint Chiefs of Staff to provide guidance on EPAA that describes desired EPAA end states in response to concerns raised by key stakeholders. 
Direct the Missile Defense Executive Board to oversee and coordinate the development of (1) life-cycle cost estimates that would provide for the management and oversight of EPAA and allow the department to assess whether its plans for EPAA are affordable and determine if corrective actions are needed, and (2) an integrated EPAA schedule, including acquisition, infrastructure, and personnel activities, that would help identify EPAA implementation risks that need to be considered. Direct U.S. Strategic Command, in coordination with the Missile Defense Agency, to adopt BMD operational performance metrics for durability and effectiveness and incorporate these metrics into the BMD test programs. In written comments on a draft of this report, DOD concurred with two of our recommendations and partially concurred with two others. The department's comments are reprinted in appendix V. DOD and the State Department also provided technical comments, which we have incorporated as appropriate. DOD partially concurred with our recommendation to provide guidance on EPAA that describes desired end states in response to concerns raised by key stakeholders. In its comments, DOD stated that it recognizes the need to provide policy guidance on the decision to pursue the EPAA. The department also noted that it has taken steps to provide guidance in the 2012 Guidance for the Employment of the Force and that this would provide detailed guidance to the Joint Staff, combatant commanders, and other DOD components on end states, strategic assumptions, and contingency planning, including for EPAA. However, since this guidance has not yet been approved by the Secretary of Defense, we cannot determine whether the concerns raised by key stakeholders will be addressed. Additionally, since EPAA is a flexible approach, DOD will need to continue to refine its guidance over time.
DOD partially concurred with our recommendation that the Missile Defense Executive Board oversee and coordinate the development of life-cycle cost estimates that would provide for the management and oversight of EPAA and allow the department to assess whether its plans for EPAA are affordable and determine if corrective actions are needed. In its comments, DOD stated that EPAA is an approach, not an acquisition program, and that it is designed to be flexible and match resources to the combatant commander's requirements. The department believes a more effective approach is to prepare BMDS program element-specific life-cycle cost estimates and use them to inform the management of ongoing acquisition programs and senior-level oversight of the phased adaptive approach as BMDS systems are applied to the defense of Europe. We recognize that life-cycle cost estimates for individual elements will provide decision makers with information on DOD's BMD efforts; however, we believe that DOD should also develop life-cycle cost estimates for its overall EPAA effort and that doing so will not impede flexibility. Without cost estimates for the life cycle of EPAA, DOD will be unable to judge whether EPAA is affordable and sustainable. The department will also have difficulty in monitoring the implementation of EPAA and ensuring that adequate funding is available to execute the program according to plan. In its response to our third recommendation, DOD concurred that the Missile Defense Executive Board oversee and coordinate the development of an integrated EPAA schedule to include acquisition, infrastructure, and personnel activities that would help identify EPAA implementation risks that need to be considered.
DOD stated that MDA incorporates the anticipated phased adaptive approach requirements into the broader BMDS acquisition program and uses an integrated BMDS schedule for the emerging EPAA requirements, ensuring they are included in appropriate detail and timing within the BMD element-level schedules. DOD further indicated that MDA has a strict process to manage and integrate the acquisition of the discrete BMDS elements that make up the capability to be delivered in each of the EPAA phases. While the department has an integrated BMDS acquisition schedule composed of element-level acquisition schedules, we found that the schedules for the individual elements are highly optimistic. Additionally, DOD has not developed an integrated schedule specifically for EPAA so that EPAA-related acquisition activities as well as EPAA-related infrastructure and personnel activities can be synchronized directly within that schedule. As a result, we continue to believe that the department does not have an important management tool with which to assess whether the EPAA schedule is realistic and achievable, identify potential problems, or analyze how changes will impact the execution of this effort. DOD concurred with our recommendation to adopt BMD operational performance metrics for durability and effectiveness and incorporate these metrics into the BMD test programs. In its comments, DOD stated that it recognizes the inherent value of measurable BMDS performance metrics and that, once provided with the warfighter's operationally defined metrics, DOD will crosswalk these metrics to the BMD system specification values assessed to be achievable and determine whether the specifications meet the operational requirements. Taking such actions would meet the intent of our recommendation. We are sending copies of this report to the Secretary of Defense; the Secretary of State; the Director, Missile Defense Agency; the Chairman, Joint Chiefs of Staff; the Commander, U.S.
Strategic Command; and the Chiefs of Staff and Secretaries of the Army, Navy, and Air Force. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3489 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI. During our review of the Department of Defense’s (DOD) plans for implementing the European Phased Adaptive Approach (EPAA), we reviewed relevant documentation and met with representatives from numerous agencies and offices. To assess the extent to which DOD has provided guidance for the force structure requirements, identified costs, and established an integrated schedule for EPAA we reviewed relevant documentation and spoke with cognizant DOD, State Department, and North Atlantic Treaty Organization (NATO) officials. The documents we reviewed relating to guidance for force structure requirements included the 2010 Ballistic Missile Defense Review, the President’s announcement from September 2009, and testimony from senior DOD officials. We also reviewed U.S. Strategic Command’s 2010 Military Utility Assessment and 2009 Prioritized Capabilities List. We spoke to senior-level officials from the Office of the Secretary of Defense, the Missile Defense Agency (MDA), and the Joint Staff about the presence or absence of a firm architecture for EPAA, any guidance that would be provided to the services, and how force structure for EPAA would be determined. Officials from U.S. Strategic Command, U.S. European Command, and U.S. Northern Command informed us about the typical processes for determining ballistic missile defense (BMD) force structure. 
We spoke to service representatives from the Army and Navy, including the Army Space and Missile Defense Command and the Naval Air and Missile Defense Command, about the kind of guidance they will need to prepare cost and force structure estimates for EPAA. We also reviewed intelligence documents and threat assessments and met with intelligence officials from the Office of the Director of National Intelligence, the Defense Intelligence Agency, and the National Air and Space Intelligence Center to become familiar with the threats that EPAA is intended to defeat and the type of force structure that might be required to accomplish this mission. To determine the extent to which DOD has identified the costs of EPAA, we reviewed the budget requests for some of the elements DOD stated would be part of EPAA and also met with representatives from the Office of the Secretary of Defense (Cost Assessment and Program Evaluation). In evaluating whether DOD has an integrated schedule that considers the factors that may impact EPAA, we relied on policy documents such as the 2010 Ballistic Missile Defense Review and the statements made by the President and the Secretary of Defense about the timelines for EPAA. We reviewed MDA’s Integrated Master Test Plan and the President’s budget requests and justifications for BMD elements. We also met with service representatives to discuss the kinds of schedules they typically follow when preparing infrastructure, training personnel, and preparing force structure to be fielded. For example, the Army Corps of Engineers provided information related to the efforts involved with constructing facilities in foreign countries and the types of challenges they face with such construction. Further, State Department officials provided us with information about the activities and schedule involved in establishing government-to-government agreements for hosting U.S. BMD assets. 
We also spoke with NATO representatives about that organization’s schedule for adopting the territorial missile defense mission and the process of making assets interoperable with U.S. missile defense assets. We also relied on our recent work dealing with the acquisition risks related to the EPAA schedule, contained in GAO-11-179R. To assess the extent to which the combatant commands are involved with testing for EPAA-related assets and understand the capabilities and limitations of the BMD system, we reviewed the Integrated Master Test Plan as well as U.S. Strategic Command’s 2010 Military Utility Assessment, and the Force Preparation Campaign Plan. We also spoke to officials at U.S. Northern Command and U.S. European Command about their understanding and confidence in the BMD system as a whole and the individual assets that comprise it. Officials from these same commands provided information about efforts to establish “Assess-to” criteria for durability and effectiveness of the BMD system. We met with officials from the office of the Director, Operational Test and Evaluation and the Ballistic Missile Defense System Operational Test Agency to discuss the status of models and simulations for the BMD system and elements. To understand DOD’s and the State Department’s plans for cooperation and coordination with NATO, friends, and allies in implementing EPAA, we conducted site visits to numerous installations both in the U.S. and in Europe. We met with State Department officials to discuss their ongoing efforts to negotiate agreements with countries that may host U.S. BMD assets and received updates on the progress of negotiations. We interviewed officials from the Office of the Under Secretary of Defense for Policy to discuss DOD’s role in negotiating these agreements. We also met with MDA officials to discuss the efforts to make EPAA interoperable with the Active Layered Theater Ballistic Missile Defense system of NATO. 
We also attended the Nimble Titan 2010 wargame in Suffolk, Va., where we talked to representatives of foreign governments and militaries and learned about the efforts already under way that may affect collaboration and coordination among allies, as well as points of conflict that could hinder cooperation. In Europe, officials with the U.S. mission to NATO informed us of the process whereby NATO would decide whether or not to adopt the territorial BMD mission, the likelihood of such an adoption, and the next steps following adoption of the mission. We also met with European representatives from U.S. Naval Forces Europe and U.S. Air Forces in Europe to discuss their perspective on the efforts and challenges of cooperating with NATO and foreign allies on BMD. We conducted this performance audit from December 2009 to January 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Aegis Ballistic Missile Defense (Aegis BMD): A system that (1) provides a forward-deployed capability to search, detect, and track ballistic missiles of all ranges and transmit track data to the BMDS and (2) employs its own sensors and interceptors or exploits off-board sensors to protect deployed forces, large regions, and population centers. The element is based on a modification to existing Navy Aegis ships to provide these capabilities. The interceptors include the Standard Missile-3 (SM-3), designed to defend against short- to intermediate-range ballistic missile threats in the midcourse and ascent phases, and a modified Standard Missile-2 (SM-2) designed to defend against short-range threats in the terminal phase.
Command and Control, Battle Management, and Communications (C2BMC): A networked computer and communications element developed by MDA to integrate the BMDS by providing deliberate planning, situational awareness, sensor management, and battle management capabilities.

Army Navy/Transportable Radar Surveillance - Model 2 (AN/TPY-2) (Forward-Based Mode): A transportable, land-based radar, similar in design to the THAAD radar, which provides advance warning of ballistic missile launches to the BMDS from forward-based locations.

Terminal High Altitude Area Defense (THAAD): The THAAD element employs the THAAD interceptor and the AN/TPY-2 (THAAD Mode) radar to engage ballistic targets in the late midcourse and terminal phases of their trajectory. THAAD can act as a surveillance sensor, providing sensor data to cue other elements of the BMDS.

PATRIOT Advanced Capability-3 (PAC-3): PAC-3 provides simultaneous air and missile defense capabilities as the Lower Tier element in defense of U.S. deployed forces and allies against short-range ballistic missiles.

Aegis Ashore: A land-based element designed by MDA to provide the capability to detect, track, and intercept threats. Aegis Ashore will leverage the Aegis BMD capability and deploy it at shore-based sites in Europe starting in 2015. DOD intends for it to employ the SM-3 for exoatmospheric defense against short- to medium- and some intermediate-range ballistic missile threats in the later stages of flight. Use of the SM-3 at shore-based sites will broaden the BMDS use of the SM-3 from its current sea-based applications, and DOD plans for Aegis Ashore to employ the SM-3 IIB in Phase 4 against intercontinental ballistic missiles.

An unmanned aerial vehicle-based sensor in development is designed to acquire and track large ballistic missile raid sizes. The sensor is also intended to provide tracking data of high enough quality to be used for launch-on-remote and early intercept engagements.

Precision Tracking Space System (PTSS): A space-based sensor system, in early development, designed to provide end-to-end intercept-quality tracking of ballistic missile threats.
The size of the area defended depends on the capabilities and number of the BMD elements deployed. In this notional case, the defended areas of two BMD-capable ships are additive. Integrating BMD elements into a system can increase their capability, including expanding the defended area. In this notional case, the defended area of the same two BMD-capable ships is vastly expanded when integrated with a sensor. The geographic location of the BMD elements can impact their performance. In this notional case, the defended area of the same integrated elements from option 2 is vastly expanded by changing the location of the sensor. Since the President's announcement of EPAA in September 2009, the U.S. has made significant progress in advancing cooperative efforts with NATO allies on BMD in Europe. Increasing international cooperation on BMD is a major focus of the Administration's new approach to BMD. According to the Ballistic Missile Defense Review, a benefit of EPAA is that it offers increased opportunities for allied participation and burden sharing. The U.S. intends to make EPAA its national contribution to a future NATO BMD capability and is therefore not asking NATO for financial support for EPAA assets. However, the U.S. is seeking allied participation and burden sharing for EPAA that may be demonstrated in various ways. According to DOD and the State Department, burden sharing may come in the form of support for EPAA, including adoption of a NATO territorial BMD mission; expansion of NATO's command and control system for territorial missile defense; bilateral agreements for hosting U.S. BMD assets; and contributions of allied BMD assets toward an expanded NATO BMD system capability. NATO's adoption of the territorial BMD mission at the Lisbon Summit in November 2010 fulfilled a major U.S. goal. NATO's prior BMD mission was limited to the protection of deployed troops and so was focused on defending smaller areas.
The shift to a territorial defense mission means that NATO’s BMD efforts will now focus on protecting much larger geographic areas, including population centers and countries. Additionally, DOD and State Department officials noted that the agreement at Lisbon will help facilitate cooperation with NATO allies on hosting U.S. BMD assets and provides justification for allies to pursue additional BMD efforts. NATO allies had expressed their support for EPAA prior to the Lisbon Summit. At the December 2009 NATO Foreign Ministers Meeting in Brussels, NATO welcomed the U.S. adoption of EPAA and declared that this approach would further strengthen European missile defense work in NATO. Further, the NATO Secretary General stated in October 2010 that building a missile defense for Europe was important, because missiles are increasingly posing a threat to European populations, territory, and deployed forces. Although the political endorsement at Lisbon was a significant accomplishment, the U.S. and its NATO allies must now overcome the difficult task of reaching consensus on how to carry out this new BMD mission, including prioritizing what areas to defend and establishing command and control relationships. According to DOD, State Department, and NATO officials, reaching agreement on these issues will be a challenge facing NATO’s new territorial missile defense mission. DOD and State Department officials told us that reaching such an agreement on a bilateral basis can be extremely challenging and time-consuming and that reaching consensus with all 28 NATO member nations is therefore expected to be even more challenging and time-consuming. The U.S. and its NATO allies have already taken steps to address the political challenges inherent in multilateral BMD operations by beginning to explore and outline potential command and control relationships. One venue in which the U.S. and its allies have been examining BMD command and control challenges is the biennial U.S. 
Strategic Command-led wargame called Nimble Titan. In 2010, this wargame involved notional ballistic missile attack scenarios occurring a decade in the future against fictional adversaries. Nimble Titan 2010 participants came from around the world, including representatives from many NATO member nations, such as Denmark, France, Germany, the Netherlands, and the United Kingdom, and observers from Belgium, Italy, Romania, Turkey, NATO, and Russia. One of the outcomes of the Nimble Titan 2010 wargame was the development of a document that described notional command and control relationships and established a framework for a coalition BMD concept of operations. Additionally, the U.S. has participated in a Dutch-led BMD exercise that, according to EUCOM officials, is also helping them to understand and overcome command and control challenges. EUCOM officials also told us that their command has begun drafting a concept of operations as well. However, they emphasized that NATO agreement on a final command and control concept of operations would remain a challenge and require significant effort. At Lisbon, NATO also agreed to expand its missile defense command, control, and communications program to incorporate the territorial missile defense mission, thereby fulfilling another burden sharing goal established by the U.S. The NATO system, called Active Layered Theater Ballistic Missile Defense (ALTBMD), is currently designed to link allies' missile defense assets together to protect deployed forces. Prior to the Lisbon Summit, NATO commissioned technical studies that concluded it was feasible to expand ALTBMD capabilities to include the territorial missile defense mission. As a result of the agreement reached at Lisbon, NATO plans to modify ALTBMD to be the command and control backbone into which allied BMD assets will link and through which NATO will conduct territorial BMD planning, tasking, and engagement coordination, and share situation assessments.
MDA and ALTBMD program officials estimated that an expanded ALTBMD for territorial defense would be operational and interoperable with the U.S. command and control system, C2BMC, by 2018. NATO and DOD officials stated that they do not see major technical challenges in meeting the 2018 operational target date for the territorial missile defense mission and interoperability with C2BMC. However, GAO did not assess the technical feasibility, cost, and schedule of ALTBMD, including interoperability with C2BMC. According to NATO, expanding ALTBMD capabilities to include the territorial missile defense mission would cost less than €200 million, or around $260 million, over 10 years, to be paid for through NATO common funding. The Secretary of Defense and NATO Secretary General stated that, as such, expansion of ALTBMD to include the territorial missile defense mission is not a significant financial burden to the alliance. Section 223(a) of the Ike Skelton National Defense Authorization Act for Fiscal Year 2011, Pub. L. No. 111-383 (2011), restricts the obligation or expenditure of funds for fiscal year 2011 and beyond for site activation, construction, or deployment of missile defense interceptors on European land as part of the phased adaptive approach to missile defense in Europe until certain conditions are met, including host nation signing and ratification of basing agreements and status of forces agreements authorizing deployment of such interceptors. Section 223(c) allows the Secretary of Defense to waive the restrictions seven days after the Secretary submits to the congressional defense committees written certification that the waiver is in the urgent national security interests of the United States. The supplemental status of forces agreements supplement the multilateral NATO Status of Forces Agreement, originally signed on June 19, 1951. A revised agreement establishing an Aegis Ashore facility in Poland is now awaiting Polish parliamentary ratification. The U.S.
has not yet reached agreement with a nation to host the AN/TPY-2 radar, which is a significant component of the first phase of EPAA and scheduled to be in place by the 2011 time frame. Although State Department officials expressed confidence that the U.S. could reach agreement with the yet-to-be-determined host country for the AN/TPY-2 in 2011, they also acknowledged that the U.S. does not have control over how long it will take to reach bilateral agreements with foreign countries or how long it will take foreign countries to bring those agreements into force. Additionally, since the U.S. has not yet identified where other potential EPAA BMD assets will be based, it is unknown what kind of bilateral agreements will be necessary with future BMD asset host countries. One way in which NATO allies can share the burden of providing territorial missile defense of NATO is by contributing their national BMD assets; however, the U.S. is thus far the only NATO member nation developing BMD assets designed to provide territorial defense. BMD capabilities currently envisioned for a NATO territorial defense mission include point defenses using assets such as Patriot and area defenses such as THAAD and Aegis BMD. BMD assets that provide point defenses are designed to protect a relatively small area, such as an airport or port, primarily against short-range ballistic missiles, whereas area defense BMD assets are designed to protect much larger swaths of territory, usually against medium-range or greater ballistic missiles. Territorial defense is thereby provided much more efficiently by area defenses than by point defenses. For example, in a 1999 report to Congress, DOD reported that the same territorial area could be protected by either 6 THAAD batteries or more than 100 Patriot Advanced Capability-3 (PAC-3) batteries. The report concluded that the Patriot option was impractical for territorial defense.
Further, a senior DOD official testified that territorial defense of Europe cannot be done using point defenses and requires area defenses. Several NATO member nations have BMD point defense assets and, should they choose to contribute them to the NATO mission, these could be used to defend strategic assets primarily against short-range ballistic missiles. Additionally, several NATO allies could contribute sensors to the BMD mission that, if compatible and appropriately interoperable, could provide early warning and tracking data that enhance the capability of area defense assets. However, the U.S. remains the only NATO member nation with BMD assets designed to provide the area defense needed for the NATO territorial BMD mission. Although NATO has adopted the territorial defense mission, the current fiscal situation of many NATO allies makes it less likely that they will start expensive new BMD development programs for area defense. Many NATO countries are trying to cut government spending due to current instability in the European economy, which could cause decreases in defense expenditures. In a June 2010 speech, the NATO Secretary General recognized the major defense cuts being made across NATO nations due to the current fiscal climate and asked allies not to make drastic defense budget cuts that would compromise NATO's collective security missions. The Secretary of State and Secretary of Defense have also expressed their concern about defense budget cuts in NATO nations and the potential impact on NATO. Additionally, NATO and DOD officials stated that European countries are not likely to begin developing new area defense BMD programs in the near future.
In addition to the contact named above, Marie Mak, Assistant Director; Nicolaas Cornelisse, Analyst-in-Charge; David Best; Cristina Chaplain; Laurie Choi; Tana Davis; Gregory Marchand; Wiktor Niewiadomski; Karen Richey; Matthew Spiers; Amie Steele; Alyssa Weir; Erik Wilkins-McKee; Gwyneth Woolwine; and Edwin Yuen made key contributions to this report. Missile Defense: European Phased Adaptive Approach Acquisitions Face Synchronization, Transparency, and Accountability Challenges. GAO-11-179R. Washington, D.C.: December 21, 2010. Defense Acquisitions: Missile Defense Program Instability Affects Reliability of Earned Value Management Data. GAO-10-676. Washington, D.C.: July 14, 2010. Defense Acquisitions: Assessments of Selected Weapon Programs. GAO-10-388SP. Washington, D.C.: March 30, 2010. Missile Defense: DOD Needs to More Fully Assess Requirements and Establish Operational Units before Fielding New Capabilities. GAO-09-856. Washington, D.C.: September 16, 2009. Ballistic Missile Defense: Actions Needed to Improve Planning and Information on Construction and Support Costs for Proposed European Sites. GAO-09-771. Washington, D.C.: August 6, 2009. Defense Management: Key Challenges Should be Addressed When Considering Changes to Missile Defense Agency's Roles and Missions. GAO-09-466T. Washington, D.C.: March 26, 2009. Defense Acquisitions: Production and Fielding of Missile Defense Components Continue with Less Testing and Validation Than Planned. GAO-09-338. Washington, D.C.: March 13, 2009. Missile Defense: Actions Needed to Improve Planning and Cost Estimates for Long-Term Support of Ballistic Missile Defense. GAO-08-1068. Washington, D.C.: September 25, 2008. Ballistic Missile Defense: Actions Needed to Improve the Process for Identifying and Addressing Combatant Command Priorities. GAO-08-740. Washington, D.C.: July 31, 2008. Defense Acquisitions: Progress Made in Fielding Missile Defense, but Program Is Short of Meeting Goals. GAO-08-448.
Washington, D.C.: March 14, 2008. Defense Acquisitions: Missile Defense Agency’s Flexibility Reduces Transparency of Program Cost. GAO-07-799T. Washington, D.C.: April 30, 2007. Missile Defense: Actions Needed to Improve Information for Supporting Future Key Decisions for Boost and Ascent Phase Elements. GAO-07-430. Washington, D.C.: April 17, 2007. Defense Acquisitions: Missile Defense Needs a Better Balance between Flexibility and Accountability. GAO-07-727T. Washington, D.C.: April 11, 2007. Defense Acquisitions: Missile Defense Acquisition Strategy Generates Results but Delivers Less at a Higher Cost. GAO-07-387. Washington, D.C.: March 15, 2007. Defense Management: Actions Needed to Improve Operational Planning and Visibility of Costs for Ballistic Missile Defense. GAO-06-473. Washington, D.C.: May 31, 2006. Defense Acquisitions: Missile Defense Agency Fields Initial Capability but Falls Short of Original Goals. GAO-06-327. Washington, D.C.: March 15, 2006. Defense Acquisitions: Actions Needed to Ensure Adequate Funding for Operation and Sustainment of the Ballistic Missile Defense System. GAO-05-817. Washington, D.C.: September 6, 2005. Military Transformation: Actions Needed by DOD to More Clearly Identify New Triad Spending and Develop a Long-term Investment Approach. GAO-05-962R. Washington, D.C.: August 4, 2005. Military Transformation: Actions Needed by DOD to More Clearly Identify New Triad Spending and Develop a Long-term Investment Approach. GAO-05-540. Washington, D.C.: June 30, 2005. Defense Acquisitions: Status of Ballistic Missile Defense Program in 2004. GAO-05-243. Washington, D.C.: March 31, 2005. Future Years Defense Program: Actions Needed to Improve Transparency of DOD’s Projected Resource Needs. GAO-04-514. Washington, D.C.: May 7, 2004. Missile Defense: Actions Are Needed to Enhance Testing and Accountability. GAO-04-409. Washington, D.C.: April 23, 2004. 
Missile Defense: Actions Being Taken to Address Testing Recommendations, but Updated Assessment Needed. GAO-04-254. Washington, D.C.: February 26, 2004.
Missile Defense: Additional Knowledge Needed in Developing System for Intercepting Long-Range Missiles. GAO-03-600. Washington, D.C.: August 21, 2003.
Missile Defense: Alternate Approaches to Space Tracking and Surveillance System Need to Be Considered. GAO-03-597. Washington, D.C.: May 23, 2003.
Missile Defense: Knowledge-Based Practices Are Being Adopted, but Risks Remain. GAO-03-441. Washington, D.C.: April 30, 2003.
Missile Defense: Knowledge-Based Decision Making Needed to Reduce Risks in Developing Airborne Laser. GAO-02-631. Washington, D.C.: July 12, 2002.
Missile Defense: Review of Results and Limitations of an Early National Missile Defense Flight Test. GAO-02-124. Washington, D.C.: February 28, 2002.
Missile Defense: Cost Increases Call for Analysis of How Many New Patriot Missiles to Buy. GAO/NSIAD-00-153. Washington, D.C.: June 29, 2000.
Missile Defense: Schedule for Navy Theater Wide Program Should Be Revised to Reduce Risk. GAO/NSIAD-00-121. Washington, D.C.: May 31, 2000.

In September 2009, the President announced a revised approach for ballistic missile defense (BMD) in Europe. The European Phased Adaptive Approach (EPAA) is designed to defend against existing and near-term ballistic missile threats and build up defenses over four phases as threats mature and new BMD technologies become available. Although the approach will include capabilities such as radars and land- and sea-based BMD assets, the Department of Defense (DOD) has not yet established EPAA life-cycle costs. EPAA is DOD's first implementation of its new, regional approach to BMD. GAO was asked to evaluate DOD's plans for implementing EPAA. GAO reviewed the extent to which: (1) DOD has developed guidance and addressed management of cost and schedule for EPAA, and (2) DOD planning for EPAA is informed by operational performance data.
GAO reviewed key legislation, policy and guidance, and initial plans for implementation and asset allocation. DOD has initiated multiple simultaneous efforts to implement EPAA but faces three key management challenges--the lack of clear guidance, life-cycle cost estimates, and a fully integrated schedule--which may result in inefficient planning and execution, limited oversight, and increased cost and performance risks. Since the September 2009 announcement of EPAA, stakeholders throughout DOD--including U.S. European Command, the Missile Defense Agency, and the military services--as well as the State Department, have taken steps to implement this policy, including considering options for the deployment of assets, requesting forces, preparing for testing, and analyzing infrastructure needs. However, effective planning requires clear guidance regarding desired end states, and key BMD stakeholders, including the combatant commands and military services, believe that such guidance is not yet in place for EPAA. Further, key principles for preparing cost estimates state that complete and credible estimates are important to support preparation of budget submissions over the short term as well as to assess long-term affordability. DOD has not developed EPAA life-cycle cost estimates because it considers EPAA an adaptive approach that will change over time. However, best practices for cost estimating include methods for developing valid cost estimates even with such uncertainties. These estimates could serve as a basis for DOD to assess its goal of fielding affordable and cost-effective ballistic missile defenses as well as to determine whether corrective actions are needed. Finally, the EPAA phase schedule is not fully integrated with the acquisition, infrastructure, and personnel activities that will need to be synchronized. As a result, DOD is at risk of incurring schedule slips, decreased performance, and increased cost as it implements the phases of EPAA.
DOD also faces planning challenges for EPAA because DOD has not yet established key operational performance metrics that would provide the combatant commands with needed visibility into the operational capabilities and limitations of the BMD system they intend to employ. DOD is incorporating some combatant commands' requirements into BMD testing, in part, by having U.S. European Command participate in the test design process. However, the system's desired performance is not yet defined using operationally relevant quantifiable metrics, such as how long and how well it can defend. The combatant commands are attempting to define operational performance metrics to enable credible assessment of operational performance gaps. However, these metrics have yet to be finalized and implemented. Without a more complete understanding of BMD operational capabilities and limitations, the combatant commands face potential risk in EPAA operational planning. GAO recommends that DOD provide guidance on EPAA end states; develop EPAA life-cycle cost estimates; and integrate its phase schedule with acquisition, infrastructure, and personnel activities. GAO also recommends that DOD adopt operational performance metrics and include them in the BMD test program. DOD generally concurred with GAO's recommendations.
The Marine Corps Logistics Command (MCLC) (Programs and Resources Department), in conjunction with the Office of the Marine Corps Deputy Commandant (Programs and Resources), is responsible for developing DMAG's budget. The budget is then submitted to the Office of the Assistant Secretary of the Navy (Financial Management and Comptroller) for review and inclusion in the Navy Working Capital Fund budget submission to Congress. DMAG relies on sales revenue rather than direct appropriations to finance its continuing operations. DMAG is intended to (1) generate sufficient resources to cover the full costs of its operations and (2) operate on a break-even basis over time—that is, neither make a gain nor incur a loss. Customers, such as the Marine Corps, use appropriated funds (primarily operation and maintenance and, to a lesser extent, procurement appropriations) to finance orders placed with DMAG. DMAG repairs, overhauls, and modifies all types of ground combat and combat support equipment, including such major end items as the HMMWV, Medium Tactical Vehicle Replacement, Assault Amphibious Vehicle, and the Light Armored Vehicle (LAV). DOD uses the term "carryover" to refer to the reported dollar value of work that has been ordered and funded (obligated) by customers but not completed by working capital fund activities by the end of the fiscal year. As such, carryover consists of both the unfinished portion of work started but not completed and work that has been accepted but not yet begun. Both DOD and congressional defense committees have agreed that some carryover is appropriate at the end of the fiscal year in order for working capital funds to operate efficiently and effectively.
For example, if customers do not receive new appropriations at the beginning of the fiscal year, carryover is necessary to ensure that working capital fund activities (1) have enough work to continue operations in the new fiscal year and (2) retain the appropriate number of personnel with sufficient skill sets to perform depot maintenance work. Too little carryover could result in some personnel not having work to perform at the beginning of the fiscal year. On the other hand, too much carryover could result in an activity group receiving funds from customers in one fiscal year but not performing the work until well into the next fiscal year. By limiting the amount of carryover, DOD can use its resources in the most efficient and effective manner and minimize the backlog of work and “banking” of related funding for work and programs to be performed in subsequent years. DOD’s carryover policy is provided in DOD Financial Management Regulation 7000.14-R, volume 2B, chapter 9. Under the policy, the allowable amount of carryover each year is based on the amount of new orders received in a given year and the outlay rate of the customers’ appropriations financing the work. For example, DMAG received about $462 million in new orders funded with the Marine Corps operation and maintenance appropriation—one of several appropriations funding orders DMAG received in fiscal year 2010. The DOD outlay rate for this appropriation was 57.5 percent. Therefore, the amount of funds DMAG was allowed to carry over into fiscal year 2011 was $196 million ($462 million multiplied by 42.5 percent, which represents 1 minus the 57.5 percent outlay rate for the operation and maintenance, Marine Corps appropriation). The DOD carryover policy provides that the work on the fiscal year 2010 orders is expected to be completed by the end of fiscal year 2011. 
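The outlay-rate method described above reduces to a one-line calculation. The sketch below reproduces the report's fiscal year 2010 figures; the function name and the use of millions of dollars are our own conventions for illustration, not DOD's.

```python
def allowable_carryover(new_orders, outlay_rate):
    """DOD outlay-rate method: allowable carryover equals new orders
    multiplied by the share of the financing appropriation NOT expected
    to outlay in the first year (1 minus the outlay rate)."""
    return new_orders * (1 - outlay_rate)

# FY2010: ~$462 million in orders financed with the operation and
# maintenance, Marine Corps appropriation; DOD outlay rate of 57.5 percent
print(round(allowable_carryover(462, 0.575)))  # 196 ($ millions)
```

Reported actual carryover, net of the exclusions the regulation allows, above this amount counts as exceeding the allowable carryover.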
According to the DOD regulation, this carryover metric allows for an analytical-based approach that holds working capital fund activities to the same standard as general fund execution and allows for meaningful budget execution analysis. In accordance with the DOD Financial Management Regulation, (1) nonfederal orders, (2) non-DOD orders, (3) foreign military sales, (4) work related to base realignment and closure, and (5) work-in-progress are to be excluded from the carryover calculation. The reported actual carryover (net of exclusions) is then compared to the amount of allowable carryover calculated using the above-described outlay-rate method to determine whether the actual carryover amount is over or under the allowable carryover amount. Our analysis of DMAG reports showed that from fiscal year 2004 through fiscal year 2011, reported actual carryover exceeded the allowable amounts in 6 of the 8 years. During the most recent 6-year period, the reported amounts of actual carryover exceeded the allowable amounts each year, ranging from a high of $59 million in fiscal year 2007 to a low of $7 million in fiscal year 2011. Our analysis also showed that the extent to which DMAG carryover exceeded the allowable amounts has declined each year since fiscal year 2007. Table 1 shows the reported DMAG actual carryover, allowable carryover, and the amount over (or under) the allowable carryover for fiscal years 2004 through 2011. According to MCLC (Programs and Resources Department) officials, DMAG has implemented actions to reduce carryover, and the actions taken have contributed to recent declines in the carryover amounts that exceeded the allowable amounts. Specifically, these officials cited the following four actions: Beginning in the second quarter of each fiscal year, MCLC evaluates new orders for their impact on the amount of workload that will carry over to the next fiscal year and that potentially could exceed the allowable amount.
Considering the impact that new orders have on carryover, in 2008 MCLC established criteria for the acceptance of new orders, including (1) determining whether the customer order supports OIF/OEF workload and whether there are viable alternatives to the scope of the order and/or the source of repair that the customer could pursue, and (2) determining whether there are alternatives that DMAG can use to avoid carrying over workload into the new fiscal year (i.e., increasing overtime, augmenting personnel, and/or subcontracting portions of the workload). Beginning in 2008, DMAG formed a working group to engage its customers and the Defense Finance and Accounting Service to reduce the time needed to close completed orders. In addition, DMAG closed out orders with small remaining balances, which reduced the unobligated balances on the orders. For example, DMAG earned an additional $2.3 million in revenue in fiscal year 2011 due to an August 2011 change to the Defense Industrial Financial Management System to automatically bill to revenue the residual dollar amounts left on customer orders. This reduced the customers' outstanding unliquidated obligation balances to zero—thus eliminating $2.3 million in fiscal year 2011 that otherwise would have carried over to the next fiscal year. Twice annually, DMAG formally meets with its major customers to validate the current and upcoming fiscal year workload requirements and modify plans at the MCLC maintenance centers to address the current bona fide needs of the orders. Beginning in the 2007-2008 time frame, MCLC began developing mitigating strategies for controlling or reducing carryover due to the elevated carryover levels (i.e., schedule realignments or alternate sources of repair or supply). MCLC's two maintenance centers have implemented production and workflow efficiencies, including new concepts of operations intended to streamline operations, that are expected to reduce repair cycle times for damaged weapon systems.
For example, efforts to improve the efficiency of production processes for a variant of the HMMWV at the Barstow maintenance center reduced repair cycle time by 45 days, from 86 days in fiscal year 2010 to 41 days in fiscal year 2011. Reduced repair cycle times allow the centers to perform more work in the same period of time, generating more revenue and thus reducing carryover. DMAG budget estimates for carryover were consistently less than allowable amounts each year from fiscal year 2004 through fiscal year 2011. In contrast, for the most recent 6 years, DMAG's actual reported amount of carryover exceeded budgeted carryover amounts by at least $50 million each year. Our analysis showed that the actual amounts of reported carryover exceeded budgeted amounts primarily because the Marine Corps underestimated DMAG's new orders received from customers. Reliable budget information on carryover is critical because decision makers use this information when reviewing DMAG's budgets. Table 2 summarizes the dollar amounts of budgeted and actual DMAG carryover that were over or under the allowable dollar amounts and the difference, as shown in DMAG budgets for fiscal years 2004 through 2011. Our analysis of DMAG budget documents showed that for fiscal years 2004 through 2011, the Marine Corps budgeted DMAG's revenue (work to be performed) to be more than the budgeted dollar value of new orders to be received each year. Specifically, during the 8-year period, the DMAG budgets showed revenue would be approximately $2.5 billion. This revenue estimate was almost $400 million more than the budgeted $2.1 billion for expected new orders. As a result, planned carryover would remain relatively stable and under $100 million. Figure 1 shows the DMAG budgeted new orders, revenue, and carryover amounts for fiscal years 2004 through 2011.
Although the Marine Corps budgeted for DMAG's revenue to exceed new orders and for carryover to remain relatively stable as shown in figure 1, subsequent DMAG reporting showed that for fiscal years 2004 through 2011, it actually received more new orders than revenue, as shown in figure 2. In fact, for 5 of the 8 years, reported actual new orders received exceeded actual revenue. For the 8-year period, actual reported revenue was about $4.1 billion, or $156 million less than the $4.3 billion in actual reported new orders received. As a result, reported actual carryover increased from a low of $168 million in fiscal year 2004 to a high of $326 million in fiscal year 2008 during the 8-year period. Figure 2 shows the actual new orders, revenue, and carryover amounts shown in DMAG budgets for fiscal years 2004 through 2011. Our further analysis of DMAG budget documents showed that the Marine Corps significantly underestimated the amount of new orders to be received from DMAG's customers. As shown in figure 1, from fiscal years 2004 through 2011, DMAG budgeted to receive about $2.1 billion in new orders, but as shown in figure 2, subsequent DMAG records showed that it actually received about $4.3 billion in new orders. As a result, DMAG's budget underestimated new orders received from customers by a cumulative total of about $2.2 billion over the 8-year period we reviewed. Furthermore, DMAG records showed that actual new orders received from customers exceeded budgeted orders by percentages ranging from 51 percent in fiscal year 2010 to 175 percent in fiscal year 2006 over the 8-year period. Table 3 shows a comparison between budgeted and actual new orders for fiscal years 2004 through 2011 based on DMAG budget documentation. According to MCLC (Programs and Resources Department) officials, the Marine Corps formulates its budgets for new orders based on documented customer requirements—that is, requirements for which the customers provide DMAG with letters of intent.
Letters of intent document the dollar amount of work the customers intend to provide to DMAG for various weapon systems and equipment repair and overhaul. These estimated amounts are used by the Marine Corps for forecasting DMAG workload as well as for budgeting. In September 2009, the DMAG budget analyst from the Office of the Under Secretary of Defense (Comptroller) stated that DMAG needed more reliable information on budgeted new orders. In information exchanges between the office's budget analyst and the Marine Corps, the analyst suggested that the Marine Corps increase fiscal year 2011 budgeted new orders to approximate the fiscal year 2009 levels, assuming new orders were still expected to be near the fiscal year 2009 level, even if DMAG did not have letters of intent to support this workload. However, the Marine Corps did not increase the budgeted new order amount because the workload was not documented in letters of intent. As shown in table 3, DMAG's actual new orders for fiscal year 2011 exceeded budgeted new orders by $314 million, or 106 percent. In developing the DMAG budgets for fiscal years 2004 through 2011, the Marine Corps did not consider recent years' trends and consistently underestimated (1) the amounts of carryover that would exceed the allowable amounts and (2) the amounts of new orders to be received from customers. Specifically, the Marine Corps did not compare budgeted to actual data and consider these data in determining whether adjustments should be made to the estimated amounts of DMAG carryover and order data in its future budgets. Until the Marine Corps considers all future work (including work for which DMAG has not yet received letters of intent) in developing its budgets, its (1) budgeted amounts of carryover that are over or under the allowable amounts and (2) budgeted new orders will continue to be of limited value for managerial decision making.
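The scale of the budget underestimate can be expressed as a percentage of budgeted orders. A minimal sketch using the cumulative figures reported above (the single-year extremes of 51 and 175 percent follow the same arithmetic; the function name and rounding are our own conventions):

```python
def percent_over_budget(actual, budgeted):
    """Percentage by which actual new orders exceeded budgeted new orders."""
    return (actual - budgeted) / budgeted * 100

# Cumulative FY2004-2011: ~$4.3 billion actual vs. ~$2.1 billion budgeted
# ($ billions; rounded report figures, so the result is approximate)
print(round(percent_over_budget(4.3, 2.1)))  # about 105 percent
```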
Our analysis of documents and interviews with Marine Corps officials disclosed that DMAG carryover significantly increased from $49 million in fiscal year 2002 to $271 million in fiscal year 2005. We determined this increase occurred primarily because new orders from DMAG customers more than tripled, from $188 million in fiscal year 2002 to $583 million in fiscal year 2005. Available Marine Corps documentation, including DMAG's budgets, showed that new orders increased to meet higher depot maintenance requirements in support of OIF/OEF operations. Since fiscal year 2005, available records show that carryover has remained at about the fiscal year 2005 level—averaging 6.4 months of workload. To assess the extent of growth in carryover due to OIF/OEF, we analyzed data for the periods before and during OIF/OEF operations. Figure 3 depicts DMAG actual new orders, revenue, carryover, and months of carryover beginning in fiscal year 1998 for a 14-year period based on available Marine Corps documentation. To illustrate the impact that changes in new orders and revenue had on carryover during the 14-year period, table 4 summarizes the data into three segments aligned with workload supporting peacetime operations (fiscal years 1998 through 2002), workload supporting the initial years of OIF/OEF (fiscal years 2003 through 2005), and workload supporting continuing elevated military operations under OIF/OEF (fiscal years 2006 through 2011). For fiscal years 1998 through 2002, DMAG was operating at levels to support peacetime workload. Specifically, during the 5-year period, the amount of workload received (new orders) from DMAG customers ($983 million) roughly equaled the amount of work performed (revenue) by DMAG ($1,022 million). DMAG carryover remained under $75 million and represented on average about 3.7 months of workload.
Beginning in fiscal year 2003, depot maintenance workload increased in response to the buildup in deployed weapon systems and equipment and the associated increase in equipment wear and tear in support of OIF/OEF. While revenue generated from depot maintenance operations more than doubled, from $212 million in fiscal year 2002 to $480 million in fiscal year 2005, reported new orders more than tripled, from $188 million in fiscal year 2002 to $583 million in fiscal year 2005. Over the period, reported new orders exceeded revenue by about $223 million, and carryover increased from $49 million at the end of fiscal year 2002 to $271 million at the end of fiscal year 2005—representing about 6.6 months of workload. The maintenance centers, according to MCLC (Programs and Resources Department) officials, experienced personnel and parts shortages during the early years of the buildup in OIF/OEF military operations overseas because (1) in some cases, the centers did not have sufficient numbers of personnel with the required skill sets needed to address the rapidly increasing workload and (2) the DOD supply system and supporting private sector commercial production did not keep pace with the maintenance centers' increased spare parts and raw material requirements, such as the steel needed to satisfy an emerging requirement to armor critical warfighting equipment. MCLC (Programs and Resources Department) officials informed us that by fiscal year 2007, the maintenance centers had resolved much of their personnel and parts shortages that resulted from the increased new orders supporting OIF/OEF.
To resolve personnel and supply shortages, these officials told us that the centers employed multiple strategies, such as (1) working with local colleges to obtain skilled employees and implementing personnel strategies that gave DMAG the flexibility to expand or contract its workforce within 48 to 72 hours using a combination of full-time, temporary, and contractor personnel and (2) working with the Defense Logistics Agency to eliminate or significantly reduce spare parts shortages. Our review confirmed that personnel and parts shortages, including raw materials issues, are no longer a major contributing factor to increased carryover amounts. However, our analysis showed that, since fiscal year 2005, DMAG had not further reduced the carryover resulting from the increases in new orders in fiscal years 2003 through 2005. As shown in table 4, DMAG carryover has averaged $296 million, representing about 6.4 months of workload, since fiscal year 2005. Further, for fiscal years 2006 through 2011, the amount of revenue generated by the centers' operations ($3,337 million) has roughly equaled the new orders accepted by the centers ($3,356 million)—a difference over the 6-year period of only $19 million. Our analysis of the 60 orders (and related amendments) with the largest amounts of carryover for fiscal years 2010 and 2011 (the most recent data available) identified three primary reasons for carryover: (1) unanticipated increases in quantities or workload requirements on customer orders, (2) starting work on new orders later in the fiscal year because the centers had not yet completed work on other existing orders from the current and prior fiscal years, and (3) accepting amendments to existing orders in the last quarter of the fiscal year that increased order quantities or the scope of work.
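The months-of-workload figures cited above divide year-end carryover by average monthly revenue. A minimal sketch using the fiscal year 2006 through 2011 averages reported in this section (the function name and units are our own conventions):

```python
def months_of_workload(carryover, annual_revenue):
    """Carryover expressed as months of work at the current revenue rate:
    carryover divided by average monthly revenue."""
    return carryover / (annual_revenue / 12)

# FY2006-2011: average carryover ~$296 million; $3,337 million in revenue
# generated over the 6-year period ($ millions)
avg_annual_revenue = 3337 / 6
print(round(months_of_workload(296, avg_annual_revenue), 1))  # about 6.4 months
```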
Table 5 provides summary information on the primary reasons for DMAG carryover based on our review of 60 large-dollar carryover orders for the most recent 2-year period—30 for each year. Accurately forecasting the scope of work on orders is essential for ensuring that the maintenance centers operate efficiently and complete work on orders as scheduled. However, we found that for 45 of the 60 orders for fiscal years 2010 and 2011 that we reviewed, customers increased quantities or added unanticipated workload requirements throughout the fiscal year, which delayed completing work on existing orders. MCLC (Programs and Resources Department) officials cited unplanned workload requirements as a primary driver of carryover. These officials stated that (1) while the maintenance centers do not drive workload requirements, they must respond to the customers' bona fide needs for the repair of the warfighters' equipment to support emergent requirements in the field, and (2) since OIF/OEF operations began, DMAG has accepted all orders (and related amendments) associated with those operations for which viable alternative sources of repair could not be found, even though accepting such orders may contribute to "over the allowable" carryover. For example, the DMAG fiscal year 2010 budget cited the acceptance of unplanned workload by the maintenance centers to repair war-ravaged equipment and weapon systems returning from overseas contingency operations as the reason the maintenance centers exceeded the allowable carryover in fiscal year 2008. Below are two examples in which unplanned workload requirements affected carryover. In November 2009, MCLC accepted an order totaling $2.1 million that was financed with fiscal year 2010 operation and maintenance, Marine Corps appropriated funds to perform depot maintenance work on five Light Armored Vehicles (LAV) at the Barstow maintenance center.
The order was amended three times in fiscal year 2010 to increase quantities to 28 vehicles, increase funding to $11.7 million, and extend the work completion date into fiscal year 2011. The center carried over $7.9 million of the $11.7 million order principally because many of the vehicles had excessive corrosion in the hull floor plate and side board areas that was much greater than anticipated and required additional welding and plate replacement. Barstow officials said that the additional welding, plate replacement, and extensive corrosion repair extended the repair cycle time beyond the timelines originally anticipated to complete the work and caused work delays on other variants of LAVs concurrently being worked on at the center. The center completed work on the order in September 2011. In December 2010, MCLC accepted an order totaling $1.4 million that was financed with fiscal year 2011 operation and maintenance, Marine Corps appropriated funds to perform depot maintenance work (inspect and repair only as necessary) on 18 rough terrain forklifts at the Albany maintenance center. The order was amended nine times in fiscal year 2011 to increase the quantity to 118 forklifts, increase the total amount of the order to $9.4 million, and extend the work completion date into fiscal year 2012. According to center officials, the vehicles required more repair work than was initially planned in terms of cost and repair cycle time, which necessitated a change in the scope of the work. This change was reflected in one of the amendments showing that the statement of work changed from an "inspect and repair only as necessary" requirement to a "rebuild" requirement, which required a more comprehensive type and scope of depot maintenance work to be performed and additional funds to finance the work. Because of the change to increase workload requirements, the center carried over $4.4 million of the $9.4 million into fiscal year 2012.
Completing work on prior-year orders and beginning work on new orders for the current year early in the fiscal year is critical to maintaining low carryover balances. Our review of 60 orders and amendments for fiscal years 2010 and 2011 found that, for 27 orders reviewed, work was delayed on new orders because the maintenance centers had not yet completed work on other current- and prior-year orders. Below are two examples of orders that we reviewed. In November 2009, MCLC accepted an order totaling $2.4 million that was financed with fiscal year 2010 funds appropriated for Marine Corps operation and maintenance to perform depot maintenance work on 27 High Mobility Multi-purpose Wheeled Vehicles (HMMWV) at the Albany maintenance center. The order was amended four times from March 2010 to September 2010 to increase quantities to 45 HMMWVs, increase funding to $4.7 million, and extend the work completion date into fiscal year 2011. The center carried over almost the entire amount of the $4.7 million order because the center was working on other fiscal year 2010 HMMWVs. Due to customer requirements, as well as capability, utilization, and production constraints, the center could not begin work on this order until the center completed work on the other orders. The center did not begin work on this order until September 17, 2010—10 months after the initial order was accepted, and did not complete work on the order until August 2011—near the end of fiscal year 2011. In November 2010, MCLC accepted an order totaling $1.5 million that was financed with fiscal year 2011 funds appropriated for Marine Corps operation and maintenance to perform depot maintenance work on three LAVs at the Barstow maintenance center. 
Because the Barstow maintenance center was still performing depot maintenance work on a fiscal year 2010 LAV order for the same vehicle configuration, the Marine Corps Supply Management Center (SMC) (the customer) issued an amendment in February 2011 to reduce quantities to zero and deobligate the entire amount on the fiscal year 2011 order. Our analysis of order documentation showed that the customer deobligated the funds because the Barstow maintenance center did not require the funds at that time, as the center was still performing work on the fiscal year 2010 LAV order. Beginning in March 2011, MCLC accepted amendments to increase quantities on the fiscal year 2011 order to 20, increase funding to $10.5 million, and change the scope of the work to include the replacement of missing communications equipment. The Barstow maintenance center began performing depot maintenance work on this fiscal year 2011 order in April 2011—7 months into the fiscal year and 5 months after the order was originally placed. Because the Barstow center was working the fiscal year 2010 order, work was delayed on the fiscal year 2011 LAV order, causing the center to carry over $3.8 million into fiscal year 2012. The timing of the receipt and acceptance of orders from customers affects the amount of carryover at year-end. Our examination of 60 orders for fiscal years 2010 and 2011 determined that, in 25 cases we reviewed, amendments to orders accepted in the last quarter of the fiscal year contributed to carryover. These amendments either increased order quantities or expanded the scope of work on existing orders with the maintenance centers. The DOD Financial Management Regulation provides that depots cannot perform work until they receive and accept orders from customers.
According to MCLC (Programs and Resources Department) officials, since some of these orders or amendments to these orders were planned and funded in the fourth quarter, DMAG could not start work until it received and accepted amended orders as specified in the DOD regulation. In June 2006, we reported that orders received late in the fiscal year increase the amount of carryover. Further, we reported that the most frequent reason DOD activity groups accepted orders at the end of the fiscal year was because funds were provided to the customers late in the fiscal year to finance existing requirements. DOD customers stated that it is common for military services to provide funds to them late in the fiscal year after the military services review their programs to identify funds that will not be obligated by year-end. When these funds are identified, the military services realign the funds to programs that can use them. These funds are then used to finance orders placed with working capital fund activities at year-end. Our discussion with SMC officials—the largest DMAG customer—confirmed that the information reported in June 2006 was also applicable for fiscal years 2010 and 2011. The officials stated that there is a significant advantage to using unobligated funds at fiscal year-end that are set to expire on September 30 because the maintenance centers can perform more work to satisfy customers' unfunded requirements. Two examples we identified of amended orders received late in the fiscal year that increased carryover are presented below. GAO, Defense Working Capital Fund: Military Services Did Not Calculate and Report Carryover Amounts Correctly, GAO-06-530 (Washington, D.C.: June 27, 2006).
In fiscal year 2010, MCLC accepted an order and amendments to the order totaling $31.2 million that were financed with fiscal year 2010 operation and maintenance, Marine Corps appropriated funds to inspect and repair only as necessary 389 HMMWVs at the Albany maintenance center. The order was amended eight times in fiscal year 2010 to increase quantities from 25 vehicles to 389 vehicles, increase funding from $2.0 million to $31.2 million, and extend the work completion date into fiscal year 2011. During the fiscal year, the Albany center completed about 90 percent of the work on the order through the seventh amendment. However, on September 30, 2010—the last day of the fiscal year—MCLC received and accepted an amendment to the order increasing funding by $4.9 million and the quantity by 61 vehicles. The funding cited on this order was set to expire that same day. The entire amount of this amendment carried over into fiscal year 2011. According to an SMC official, the amendment was issued to the Albany maintenance center on September 30, 2010, because funding on a HMMWV order with the Barstow maintenance center was identified by the center as excess to requirements 2 days before the end of the fiscal year. The funding was set to expire at the end of the fiscal year and, if not obligated on another order by September 30, 2010, could not be used for new requirements. As a result, the funding from the Barstow maintenance center that was to expire on September 30, 2010, was applied to the Albany order. The center carried over $7.6 million into fiscal year 2011 on this order and completed work on the order in March 2011.

In planning the fiscal year 2011 workload, SMC expected to issue orders to MCLC for the Albany and Barstow maintenance centers to perform depot maintenance on 179 Logistics Vehicle Systems' front power units.
In order to repair the vehicles, MCLC issued a commercial subcontract to the original equipment manufacturer to purchase cabs and other operating material and supplies needed to support work at the Albany and Barstow maintenance centers for the 179 vehicles. Subsequent to the subcontract being awarded, the Marine Corps maintenance strategy for the Logistics Vehicle Systems changed, reducing the need for the cabs purchased on the contract. As a result, SMC (the customer) decided to decrease the scope of work from a planned 179 vehicles to 47 vehicles. Through May 2011, MCLC accepted orders with amendments totaling $8.9 million to support work on the 47 vehicles. Because costs associated with terminating the contract with the original equipment manufacturer for the excess cabs and other operating material and supplies would be nearly as much as having the contractor complete the order, the Marine Corps decided to have the contractor complete the order and transfer ownership of the excess cabs and other operating material and supplies to MCLC headquarters. In June 2011, MCLC informed SMC that $9.8 million, in addition to the $8.9 million provided earlier, would be needed to pay for the cost of the contracted cabs and other operating material and supplies ordered from the contractor and to complete work on the remaining 47 vehicles. On September 15 and 29, 2011, SMC issued amendments to fund the additional $9.8 million using fiscal year 2011 Marine Corps operation and maintenance funds that became available when other programs identified excess funds that would expire on September 30, 2011. Because funding was provided late in the fiscal year, the centers carried over $9.4 million of the $18.7 million in orders on this program into fiscal year 2012.
The orders are to be closed in fiscal year 2012 when deliveries from the contractor are complete and ownership of the excess cabs and other operating material and supplies is transferred to MCLC headquarters.

Reliable carryover information is essential for Congress and DOD to effectively perform their oversight responsibilities, including reviewing and making well-informed decisions on the Marine Corps' DMAG budget. However, our review found that for the 8-year period from 2004 through 2011, the Marine Corps budgets show it underestimated the amount of new orders to be received from DMAG customers. As a result, while the Marine Corps budgets showed that DMAG's carryover would be under the allowable amount, subsequent Marine Corps financial records showed the actual reported carryover exceeded the allowable amount for the most recent 6 years. The carryover information can be a management tool for (1) controlling the amount of work that can carry over from one fiscal year to the next and (2) identifying problems in other areas, such as developing budgets on the amount of new orders for depot maintenance work. However, because the Marine Corps has underestimated the amount of new orders in its budgets, current reported data on carryover are of limited utility for decision-making purposes.

We recommend that the Secretary of Defense direct the Secretary of the Navy and the Commandant of the Marine Corps to take the following two actions to improve the budgeting and management of Marine Corps' DMAG carryover: Augment DMAG budget development and review procedures to require a comparison of recent years' trends in budgeted carryover amounts that are over or under the allowable amounts to the actual carryover amounts, identify any differences and the reasons for them, and make any appropriate adjustments to budget estimates on carryover.
Augment DMAG budget development and review procedures to require a comparison of recent years' trends in budgeted orders to actual orders, identify any differences and the reasons for them (including work for which DMAG had not received letters of intent), and make any appropriate adjustments to budget estimates on new orders to be received from customers.

DOD provided written comments on a draft of this report. In its comments, DOD concurred with both of our recommendations and cited actions planned to address them. Specifically, the Office of the Under Secretary of Defense (Comptroller) stated that it will evaluate whether actual year-end carryover trends were included as a factor for the carryover estimates in the fiscal year 2014 budget. It stated that a review of the fiscal year 2012 actual carryover against the budgeted carryover will be part of the budget analysis, including an evaluation of deviations from the allowable amount of carryover. Further, the Office of the Under Secretary of Defense (Comptroller) stated it will evaluate whether customer order trends were included as a factor in developing customer order estimates in the fiscal year 2014 budget. It stated that a review of the fiscal year 2012 actual customer orders against the fiscal year 2012 budgeted customer orders will be part of the analysis. While DOD concurred with the two recommendations in our draft report and commented on its plans to perform additional carryover and customer order trend analyses as part of the fiscal year 2014 budget process, DOD's comments did not clearly indicate that it will augment its budgeting process to incorporate carryover and customer order trend analyses beyond the fiscal year 2014 budget process, as we recommended.
In discussing DOD’s comments with Office of the Under Secretary of Defense (Comptroller) officials, they clarified that, in accord with the intent of our recommendations, they intended to take the cited corrective actions to improve the DMAG budgeting for carryover and customer orders for all fiscal years beginning in fiscal year 2014. Specifically, they stated that DOD intended to ascertain how prior year DMAG carryover and new order trends were incorporated into the carryover estimates and use the data in evaluating Marine Corps DMAG budgets in not only the fiscal year 2014 budget, but in all future budgets. We are sending copies of this report to the appropriate congressional committees. We are also sending copies to the Secretary of Defense; the Secretary of the Navy; and the Commandant of the Marine Corps. The report also is available at no charge on the GAO Web site at http://www.gao.gov. Should you or your staff have any questions concerning this report, please contact me at (202) 512-9869 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. To determine if (1) the depot maintenance activity group (DMAG) reported actual carryover exceeded the allowable amount of carryover from fiscal years 2004 through 2011 and any actions the Marine Corps is taking to reduce carryover, and (2) budget information on DMAG carryover from fiscal years 2004 through 2011 approximated reported actual results, we obtained and analyzed DMAG reports that contained information on budgeted and reported actual carryover and the allowable amount of carryover for fiscal years 2004 through 2011. We analyzed carryover since fiscal year 2004 because prior to fiscal year 2004, DOD had a different policy for determining the allowable amount of carryover. 
We met with responsible officials from Navy and Marine Corps headquarters and the Marine Corps Logistics Command (MCLC) (Programs and Resources Department) to determine the reasons for significant variances between (1) reported actual carryover and the allowable amount or (2) budgeted and reported actual carryover. We also met with these officials to discuss the actions the Marine Corps has taken and is taking to reduce the amount of carryover. To determine if there was growth in DMAG carryover during the period of Operation Iraqi Freedom/Operation Enduring Freedom (OIF/OEF) and the reasons for any such growth, we analyzed reported order, revenue, and carryover amounts and months of carryover from fiscal years 1998 through 2011 to determine the extent to which OIF/OEF affected DMAG workload and carryover. We selected the period from fiscal years 1998 through 2011 to highlight any changes in reported actual carryover information from the period before and during OIF/OEF. We reviewed and analyzed Marine Corps documentation, including the DMAG budgets, to determine the reasons for the growth in carryover. Further, we met with responsible officials from MCLC (Programs and Resources Department) to discuss reasons for variances between reported actual carryover from one year to the next. To determine the reasons for fiscal years 2010 and 2011 carryover, we met with responsible officials from Navy and Marine Corps headquarters and MCLC (Programs and Resources Department) to identify contributing factors that led to the carryover. We also performed walkthroughs of the Albany and Barstow maintenance centers' operations and discussed with officials reasons for workload carrying over from one fiscal year to the next for fiscal years 2010 and 2011.
Further, in order to more fully understand the reasons for carryover at the maintenance centers, we obtained and analyzed the 60 orders (30 orders for fiscal year 2010 and 30 orders for fiscal year 2011) that had the largest dollar amounts of carryover. Carryover amounts associated with these orders represented 49 percent and 68 percent of DMAG's total carryover for fiscal years 2010 and 2011, respectively. We selected these 60 orders because they were the largest and most recent orders at the time of our audit. We reviewed each order and its amendments and discussed the information in these documents with officials from MCLC (Programs and Resources Department) and the Albany and Barstow maintenance centers to determine the reasons for the carryover. We summarized and categorized the results. Financial information in this report was obtained from official Navy and Marine Corps budget documents and accounting reports. To assess the reliability of the data, we (1) reviewed and analyzed the factors used in calculating carryover for the completeness of the elements included in the calculation, (2) interviewed Navy and Marine Corps officials knowledgeable about the carryover data, (3) reviewed GAO reports on depot maintenance activities, and (4) reviewed fiscal years 2010 and 2011 customer orders submitted to DMAG to determine whether they were adequately supported by documentation. In reviewing these orders, we obtained the status of the carryover at the end of the fiscal year. On the basis of the procedures performed, we concluded that these data were sufficiently reliable for the purposes of this report.
We performed our work at the Headquarters of the Office of the Under Secretary of Defense (Comptroller), the Office of the Assistant Secretary of the Navy (Financial Management and Comptroller), and the Marine Corps Deputy Commandant (Programs and Resources), Washington, D.C.; the Marine Corps Logistics Command, Albany, Georgia; the maintenance center at Albany, Georgia; and the maintenance center at Barstow, California. We conducted this performance audit from July 2011 through June 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Greg Pugnetti, Assistant Director; Steve Donahue; Keith McDaniel; and Hal Santarelli made key contributions to this report.

The Marine Corps DMAG repairs and overhauls weapon systems and support equipment to battle-ready condition for deployed and soon-to-be deployed units. To the extent that DMAG does not complete work at year-end, the work and related funding will be carried over into the next fiscal year. Carryover is the reported dollar value of work that has been ordered and funded by customers but not completed by DMAG at the end of the fiscal year. GAO was asked to determine (1) if DMAG's actual carryover exceeded the allowable amount and actions the Marine Corps is taking to reduce carryover; (2) if budget information on DMAG carryover approximated actual results; (3) if there was growth in carryover during the period of OIF/OEF and the reasons for any such growth; and (4) reasons for recent years' carryover.
To address these objectives, GAO (1) reviewed relevant carryover guidance, (2) obtained and analyzed reported carryover and related data for DMAG against requirements, and (3) interviewed DOD, Navy, and Marine Corps officials.

GAO's analysis of Marine Corps depot maintenance activity group (DMAG) reports showed that from fiscal years 2004 through 2011, reported actual carryover exceeded the allowable amounts in the most recent 6 years of the 8-year period, ranging from $59 million in fiscal year 2007 to $7 million in fiscal year 2011. GAO's analysis also showed that the amounts of carryover exceeding the allowable amounts have declined in each of the past 4 years. These reductions could be attributed to DMAG actions, including implementing production efficiencies that reduced the time required to repair weapon systems. In contrast, DMAG's budgeted carryover amounts were less than the allowable amounts for all 8 years GAO reviewed. In the most recent 6 years, DMAG's reported actual carryover amounts exceeded budgeted carryover by at least $50 million. GAO's analysis showed this occurred because the Marine Corps underestimated DMAG's new orders every year during this 6-year period, by a low of 51 percent to a high of 175 percent. The reported dollar value of DMAG carryover significantly increased during the initial years of Operation Iraqi Freedom/Operation Enduring Freedom (OIF/OEF), from $49 million in fiscal year 2002 to $271 million in fiscal year 2005. This increase could be primarily attributed to new orders from customers more than tripling over this period. GAO's analysis found that the increase in new orders was the result of higher depot maintenance requirements supporting OIF/OEF. Since fiscal year 2005, reported actual carryover amounts have remained relatively stable, averaging $296 million or 6.4 months of work.
GAO identified three factors that were key to DMAG's carryover in fiscal years 2010 and 2011: (1) experiencing unanticipated increases in its workload requirements, (2) starting work on new orders later in the fiscal year because DMAG was already performing work on other orders, and (3) accepting amendments on existing orders in the last quarter of the fiscal year that increased the scope of work. GAO recommends that DOD improve the budgeting and management of DMAG carryover by comparing budgeted to actual information on carryover and orders and making adjustments to budget estimates as appropriate. DOD concurred with GAO's recommendations and cited related actions planned.
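The carryover measures discussed above follow a simple accounting relationship: year-end carryover equals beginning carryover plus new orders accepted minus revenue earned (work completed), and "months of carryover" divides that balance by average monthly revenue. The sketch below is illustrative only; the $296 million carryover and 6.4 months figures come from this report, while the beginning carryover, new orders, and annual revenue values are assumptions chosen so the arithmetic reproduces that ratio.

```python
# Illustrative sketch of the carryover relationships discussed above.
# Only the $296 million average carryover and 6.4-month figures come from
# the report; the other dollar amounts below are assumed values.

def ending_carryover(beginning_carryover, new_orders, revenue_earned):
    """Year-end carryover: funded work accepted but not yet completed."""
    return beginning_carryover + new_orders - revenue_earned

def months_of_carryover(carryover, annual_revenue):
    """Carryover expressed as months of work at the average monthly pace."""
    return carryover / (annual_revenue / 12)

# Hypothetical year (all figures in $ millions): $100 starting carryover,
# $751 in new orders accepted, $555 of work completed (revenue earned).
end = ending_carryover(100, 751, 555)   # 296
months = months_of_carryover(end, 555)  # about 6.4 months of work
print(end, round(months, 1))
```

Under these assumed inputs the sketch reproduces the reported relationship: a $296 million balance at a $555 million annual pace of work is about 6.4 months of carryover.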
Since Hurricane Katrina, the population of the greater New Orleans area has decreased, and the health care delivery system for the low-income and uninsured population in the area has begun to change from one that was largely hospital-based to a more community-based system of primary care. Since the disruption to the health care system caused by the hurricane, several federal agencies have awarded grants that facilitate access to primary care. The estimated population of the greater New Orleans area decreased from 999,349 in July 2005 to 807,032 in July 2008, a level of about 81 percent of the population before Hurricane Katrina. Most of the decrease in population was in Orleans and St. Bernard parishes. (See table 1.) Before Hurricane Katrina, most health care for the low-income and uninsured population in the greater New Orleans area was provided in emergency rooms and outpatient clinics at Charity and University hospitals, which together were known as the Medical Center of Louisiana at New Orleans (MCLNO). MCLNO is part of Louisiana State University’s (LSU) statewide system of public hospitals. About half of MCLNO’s patients were uninsured, and about one-third were covered by Medicaid. As a result of damage from Hurricane Katrina and the subsequent flooding, Charity and University hospitals were closed. In November 2006, LSU reopened University Hospital, under its new, temporary name, Interim LSU Public Hospital. Charity Hospital remained closed as of June 2009. In addition to the hospital outpatient clinics, other types of clinics provided primary care, including mental health care, for the low-income and uninsured population before Hurricane Katrina. These included health centers participating in HRSA’s Health Center Program. Under Section 330 of the Public Health Service Act, HRSA provides grants to health centers nationwide to increase access to primary care. 
HRSA uses a competitive process to award grants, including New Access Point grants for new grantees or for existing grantees to establish additional sites. Existing grantees may also compete for Expanded Medical Capacity grants to increase service capacity, such as by expanding operating hours, or Service Expansion grants to add or expand services, such as mental health, oral health, and pharmacy services. All health center grantees are Federally Qualified Health Centers (FQHC), which enjoy certain federal benefits such as enhanced Medicare and Medicaid payment rates. However, not all FQHCs receive Health Center Program grants, and those that do not are sometimes referred to as having an FQHC Look-Alike designation. Four health center grantees served the greater New Orleans area at the time HHS awarded the PCASG in July 2007. In 2007, Louisiana enacted the Health Care Reform Act of 2007, which directed LDHH to develop and implement a new health care delivery system for the state’s Medicaid recipients and low-income uninsured citizens. LDHH proposed short-term and long-term recommendations, which included changes to the Louisiana Children’s Health Insurance Program (LaCHIP) in 2008 to expand coverage to more children. LDHH also submitted a demonstration waiver application to CMS for its Medicaid program to expand coverage and create a coordinated system of care. In response to Hurricane Katrina, several federal agencies provided grants that assist with the restoration of primary care in the greater New Orleans area. (See fig. 1.) FEMA provided CCP funds to Louisiana for certain mental health services. ACF provided supplemental SSBG funds for primary health care services, among other things. In addition, CMS provided Professional Workforce Supply Grant funds to reduce health care provider shortages and PCASG funds to restore access to primary care. 
The CCP provided funds for crisis counseling services—including stress reduction and coping education, community outreach, individual and group crisis counseling, and referral for other services—to Louisiana. The state subsequently distributed $29 million of these funds in the greater New Orleans area. The CCP was designed to meet the short-term mental health needs of people affected by disasters. State officials told us that, generally, the CCP allows a person to have three to five counseling visits but does not provide for a traditional mental health diagnostic assessment and cannot be used for traditional mental health or substance abuse services. CCP grantees may, however, provide information to families and individuals about available mental health and substance abuse services. Additional assistance may be available to certain families through the Louisiana CCP’s Specialized Crisis Counseling Services. ACF administers SSBG funding to assist states in delivering social services, which generally do not include health care services. In 2006, however, Congress appropriated emergency SSBG supplemental funding that could be spent on, among other things, health care services. From this appropriation, ACF awarded more than $220 million to Louisiana. The Louisiana Department of Social Services (LDSS) served as the state-level administrator and collaborated with LDHH and the Office of the Governor to develop a spending plan that dedicated about $168 million of this amount for resuming and restoring health care services. LDHH received $101.7 million, which it divided into two service categories. First, LDHH designated $80 million specifically for mental health care, including substance abuse and developmental disability services, to meet the emerging mental health crisis. Second, LDHH designated $21.7 million for primary care, which could include mental health care, to restore and resume services to meet the health care needs of people affected by the hurricanes. 
The primary care funds were intended to target the southernmost parishes and regions that had experienced a devastating blow to their primary care infrastructure. Each local parish could develop a proposal for restoring services its population needed and for responding to the challenges it faced in rebuilding its basic health care system. LDSS awarded the remaining health care services funds directly to LSU Health Sciences Center and Tulane University Health Sciences Center. Louisiana has until September 30, 2009, to spend these funds, which are distributed as reimbursements after services are delivered. LDHH distributed the mental health funds to various offices in the department and to the state’s four regional human services districts, which then contracted with various individuals and organizations to provide some of the services. A state official told us that the mental health funds were available statewide in part because many people from the greater New Orleans area who needed mental health services following the hurricanes were dispersed throughout the state. The $50 million Professional Workforce Supply Grant was awarded by the Secretary of HHS in March 2007. The purpose of the grant was to reduce shortages in the professional health care workforce following Hurricane Katrina, and CMS gave Louisiana flexibility to design its program within broad federal guidelines. LDHH, which administers the grant, used the funds to create and fund the Greater New Orleans Health Service Corps, which recruits individual health care providers for health care organizations by offering incentive payments to the individuals. Incentive amounts are based on an individual’s medical specialty and range from $10,000 to $110,000. To be eligible, a health care provider must, among other things, agree to serve Medicare, Medicaid, and uninsured patients; have a sliding fee scale; and provide services in a federally designated health professional shortage area (HPSA). 
Health care providers are also expected to enter into an agreement with LaCHIP to provide services to children enrolled in that program, if appropriate. Financial incentive payments can be given to health care providers who remain in their qualifying job or to newly hired health care providers; individuals may receive only one financial incentive payment. In July 2007, CMS awarded the PCASG to LDHH, which selected LPHI as the local partner responsible for administering the grant program. The PCASG was established by HHS under the authority of the Deficit Reduction Act of 2005, which allowed HHS to allocate funds to restore access to health care in communities affected by Hurricane Katrina, and to provide funds for other services, such as those provided by Medicaid and the State Children's Health Insurance Program. The greater New Orleans area was targeted to receive PCASG funds because of the unique impact Hurricane Katrina and its resulting floods had on the area. LDHH and LPHI determined that 25 organizations met the PCASG requirements that CMS established, and they were all awarded funding. The 25 organizations varied in size and other characteristics. For example, some recipients are affiliated with an institution such as a university or state or local government, and some are grantees of HRSA's Health Center Program. (For more information on the characteristics of the PCASG fund recipients, see app. II.) In addition to primary care services—medical, mental health, and dental care services—PCASG fund recipients could use grant funds to provide specialty care, such as cardiology and podiatry services, and ancillary services, including supporting services such as translation, health education, transportation, and outreach.
After being awarded PCASG funding, outpatient provider organizations had to meet several CMS requirements, including creating referral relationships with local specialists and hospitals, establishing a quality assurance or improvement program, and providing a long-term sustainability plan. LPHI is responsible for distributing funds to PCASG fund recipients, including an initial disbursement and five supplemental disbursements. These are lump sum payments and are not reimbursement for individual services provided. The 25 recipients received initial disbursements totaling $17 million. The supplemental disbursements are to be made over the grant period. CMS requires that more of the funds be disbursed during the early part of the grant period and that funding decline over the 3 years to ensure that recipients do not rely primarily on PCASG funds for their continued operation and sustainability. LDHH and CMS provide oversight of the PCASG program. LDHH oversees the work performed by LPHI, conducts site visits at PCASG fund recipients, reviews budgets for LPHI and recipients, reviews and approves payments to recipients, and determines whether to approve recipients’ requests to renovate sites. CMS visits recipients to observe their operations and reviews reports from LDHH and LPHI in collaboration with officials from other HHS agencies. Although the PCASG does not include a requirement for a program evaluation, a private foundation is scheduled to evaluate the PCASG program, and CMS officials plan to review and approve this evaluation before it is published. PCASG fund recipients reported that they used PCASG funds to hire or retain health care providers and other staff, add primary care services, and open new sites. Recipients also said that the PCASG funds have helped them improve service delivery and access to care. Most of the PCASG fund recipients that responded to our survey reported they used PCASG funds to hire health care providers or other staff. 
Twenty of the 23 responding recipients reported using PCASG funds to hire health care providers. (See fig. 2.) Sixteen recipients hired mental health care providers, including mental health counselors and psychiatrists. One recipient reported that by hiring one psychiatrist, it could significantly increase clients’ access to services by cutting down a clinic’s waiting list and by providing clients with a “same-day” psychiatric consultation or evaluation. Fourteen of the recipients responded they used PCASG funds to hire medical care providers. One recipient reported that it hired 23 medical care providers, some of whom were staffed at its new sites. Eighteen of the 23 PCASG fund recipients that responded to our survey reported they used PCASG funds to hire other staff, such as a medical director and a medical office assistant, in addition to hiring health care providers. Some recipients reported that the ability to hire providers enabled them to expand the hours some of their sites were open. PCASG fund recipients responded that in addition to hiring health care providers and other staff, they also used PCASG funds to retain health care providers and other staff. Of the 23 recipients that responded to our survey, 17 reported they used PCASG funds to retain health care providers, and 15 of these reported that they also used grant funds to retain other staff. For example, one recipient reported that PCASG funds were used to stabilize positions that were previously supported by disaster relief funds and donated services. Nineteen of the 23 PCASG fund recipients that responded to our survey reported using PCASG funds to add or expand medical, mental health, or dental care services, and more than half of these added or expanded more than one type of service. (See table 2.) PCASG fund recipients also reported using grant funds to add or expand specialty care services or to add ancillary services. Eight recipients added or expanded specialty care services. 
For example, one of these recipients reported that it added podiatry services. The ancillary services that recipients used grant funds to add included health education, transportation, and outreach activities. One recipient reported that it used PCASG funds to create a television commercial announcing that a clinic was open and that psychiatric services were available there, including free care for those who qualified financially. Almost all of the PCASG fund recipients that responded to our survey reported they used PCASG funds for their physical space. Fifteen recipients used the funds to open new sites or relocate sites. One of these recipients reported that it relocated to a larger site, which allowed providers to have additional examination rooms. Ten recipients reported using grant funds to renovate existing sites. Some of these recipients made renovations—such as expanding a waiting room, adding a registration window, and adding patient restrooms—to accommodate more patients. PCASG fund recipients that responded to our survey reported that certain program requirements have had a positive effect on their delivery of primary care services. Almost three-quarters of responding recipients reported that the requirement to develop a network of local specialists and hospitals for patient referrals has had a positive effect. Similarly, over two-thirds of the responding recipients reported that the requirement to establish a quality assurance and improvement program, which must include developing clinical guidelines or evidence-based standards of care, has had a positive effect on the provision of primary care within their organization. Various PCASG fund recipients have stated that PCASG funds helped them improve access to health care services for residents of the greater New Orleans area.
One recipient reported to LPHI that PCASG funds allowed it to expand its services beyond residents in its shelter and housing programs to include community residents who were not homeless but previously lacked access to health care services. Representatives of other recipients have publicly stated that their organization improved access to health care by expanding services in medically underserved neighborhoods or to people who were uninsured or underinsured. In addition, representatives of local organizations told us that the PCASG provided an opportunity to rebuild the health care system and shift the provision of primary care from hospitals to community-based primary care clinics. PCASG fund recipients also used other federal hurricane relief funds to provide services. They used SSBG supplemental funds designated by Louisiana for primary care to pay for staff salaries and equipment, and they used SSBG supplemental funds designated for mental health care to provide a range of mental health services. PCASG fund recipients also benefited from the Professional Workforce Supply Grant, which provided incentives for health care providers, and one used funds from the CCP to provide counseling services. Nearly half of PCASG fund recipients received SSBG supplemental funds designated for primary care and used them to pay staff salaries, purchase medical equipment, and support operations. According to LDHH data, 11 PCASG fund recipients expended $12.9 million of the $21.7 million in SSBG supplemental funds awarded to Louisiana and designated by the state for primary care, as of August 2008. After a competitive process in 2006, LDHH distributed SSBG supplemental funds ranging from $209,000 to over $2.6 million each to individual recipients. (See table 3.) Officials from PCASG fund recipient organizations that received these funds told us they had used SSBG supplemental funds to pay salaries, purchase supplies and medical equipment, and support their operations. 
For example, one recipient used SSBG supplemental funds to hire new medical and support staff and, as a result, expanded its services for mammography, cardiology, and mental health. It also used SSBG supplemental funds to remodel the associated examination rooms and lobby and to purchase operating services, such as accounting services and insurance. In addition to distributing SSBG supplemental funds to LDHH for primary care, LDSS distributed SSBG supplemental funds directly to one PCASG recipient to support, in part, primary health care services. Specifically, LSU Health Sciences Center New Orleans—which also received SSBG supplemental funds for primary care from LDHH—used $173,000 of the $33.5 million it received directly from LDSS to pay for staff salaries and benefits at its PCASG sites. The two PCASG fund recipients that received SSBG supplemental funds designated for mental health care used them to provide crisis intervention, substance abuse, and other mental health services. LDHH distributed almost $12 million of the $80 million in SSBG supplemental funds designated for mental health care to the two PCASG fund recipients that are state regional human services districts—$4.3 million to Metropolitan Human Services District (MHSD) and $7.6 million to Jefferson Parish Human Services Authority (JPHSA). MHSD and JPHSA in turn distributed most of these funds through contracts to other organizations and providers. They also retained a portion of these funds to spend on the direct provision of mental health care services or other expenses that were necessary for the restoration of these services, such as minor repairs or replacement of equipment and supplies. MHSD obligated $3.3 million under 30 contracts and retained $1 million for direct expenses; JPHSA obligated $4.3 million under 80 contracts and retained nearly $3.4 million. Except for just over $88,000 of JPHSA’s funds, all $12 million had been expended as of March 3, 2009. 
LDHH identified five mental health care service categories for the use of the SSBG supplemental funds. (See table 4.) Through March 3, 2009, the largest portion of funds that MHSD expended was for the category “substance abuse treatment and prevention.” The largest portion of funds that JPHSA expended was for the category “immediate intervention—crisis response,” with the second largest portion expended for the category “behavioral health services for children and adolescents.” MHSD officials told us they used the SSBG supplemental funds to help maintain staff and relocate them to community-based mental health centers, where clients could be assessed and treated for mental health and addiction problems. In addition, MHSD placed an addiction counselor in a school-based health center to provide early intervention and treatment for substance abuse. MHSD officials also reported that they used funds to support crisis and addiction counseling for adults and children in churches, grief counseling for children in elementary schools, a summer camp that included mental health counseling, and community outreach services. JPHSA officials told us they used SSBG supplemental funds to provide services such as assertive community treatment, crisis intervention teams, mobile crisis services, suicide prevention services, group and individual therapy, and psychiatric evaluation. For example, JPHSA expanded its assertive community treatment program, in which services are provided at home or in community-based locations and include help with medication administration and monitoring. JPHSA officials reported that this program focused on patients who had a history of noncompliance with mental health treatment and were generally considered to be the persons most in need of mental health services.
JPHSA also used the funds to support a program of community-based services for patients who were no longer in need of inpatient services or who were in crisis but not in need of an inpatient psychiatric hospital stay. Patients were given 24-hour care and supervision and attended group and individual counseling designed to provide crisis resolution skills and coping strategies; they were also linked to community-based resources such as community mental health clinics and supportive or independent housing. This program also served to alleviate the burden on inpatient psychiatric hospitals. As of August 2008, 17 of the 25 PCASG fund recipients had retained or hired a health care provider who had received a Professional Workforce Supply Grant incentive payment to continue or begin working in the greater New Orleans area. Among the health care providers working for PCASG fund recipients, 69 received incentives that totaled $4.5 million. (See table 5.) The number of those health care providers who were employed by individual PCASG fund recipients ranged from 1 or 2 at 7 recipient organizations to 10 at 2 recipient organizations. These one-time, lump sum incentive payments, which could be used for purposes such as student loan repayment or relocation expenses, ranged from $10,000 to $110,000 each; the largest percentages of incentive payments and of funds went to primary care providers. In a 2008 survey conducted by LDHH, 88 percent of all incentive recipients reported that the availability of an incentive payment affected their decision to remain or practice in the greater New Orleans area. Three-quarters of recipients of incentive payments were existing employees who were retained, while one-quarter were newly hired. This pattern is consistent with the incentive payments that were made overall, regardless of employing organization. In addition, no PCASG fund recipient hired more than two new staff who had received an incentive payment. 
In discussing these payments, a state official commented that retaining an existing employee is generally easier than hiring a new one. One PCASG fund recipient provided counseling services with CCP funds. In 2005, immediately following Hurricane Katrina, the Louisiana Office of Mental Health contracted with Catholic Charities Archdiocese of New Orleans to be the sole CCP service provider in the four area parishes. This recipient expended $7.9 million of the $29 million in CCP funds awarded to Louisiana. In addition to providing counseling services, Catholic Charities’ counselors provided information about available services such as primary care; mental health services; substance abuse treatment; and food, clothing, and housing assistance. Catholic Charities terminated its CCP role in May 2007, and the Louisiana Office of Mental Health assumed that role. PCASG fund recipients face significant challenges in hiring and retaining staff, as well as in referring patients outside of their organizations, and these challenges have grown since Hurricane Katrina. Recipients are taking actions to address the challenge of sustainability, but it is too early to know whether they will be successful. Although most of the 23 PCASG fund recipients that responded to our survey hired or retained staff with grant funds, most have continued to face significant challenges in hiring and retaining staff. Hiring has been especially challenging. For example, 11 of the 23 recipients reported the hiring of health care providers to be a great challenge, and 9 reported it was a moderate challenge. (For detailed information on recipients’ responses to the questions in our Web-based survey regarding challenges, see fig. 3.) Among those that reported hiring providers was a great or moderate challenge, over three-quarters responded that this challenge had grown since Hurricane Katrina.
In discussing challenges, officials from one recipient organization told us that after Hurricane Katrina they had greater difficulty hiring licensed nurses than before the hurricane. They also told us that most of the nurses who were available to be hired were recruited by hospitals, where the pay was higher. Moreover, officials we interviewed from several recipient organizations said that the problems with housing, schools, and overall community infrastructure that developed after Hurricane Katrina made it difficult to attract health care providers and other staff. An additional indication of limited availability of primary care providers in the area is HRSA’s designation of all of Orleans, Plaquemines, and St. Bernard parishes and much of Jefferson Parish as health professional shortage areas (HPSA) for primary care. While some portions of the greater New Orleans area had this HPSA designation before Hurricane Katrina, additional portions of the area received that designation after the hurricane. Retention of staff has also been a challenge for the PCASG fund recipients. (See fig. 3.) For example, 16 of the 23 recipients reported that retaining health care providers was a great or moderate challenge. Among those that reported retaining health care providers was a great or moderate challenge, about three-quarters also reported that this challenge had grown since Hurricane Katrina. Retaining other staff has also been a challenge, with 14 of the 23 recipients reporting it to be a great or moderate challenge. About two-thirds of those reporting that retaining other staff was a moderate or great challenge also said this challenge had grown since Hurricane Katrina. The PCASG fund recipients that primarily provide mental health services in particular faced challenges both in hiring providers and in retaining providers. Six of the seven that responded to our survey reported that both hiring and retaining providers were either a great or moderate challenge.
Six recipients reported that hiring was a great challenge, and five of these reported that the challenge was greater than before Hurricane Katrina. Three recipients reported that retention was a great challenge, and two of these also reported that the challenge had grown since Hurricane Katrina. An indication of more limited availability of mental health care providers is HRSA’s designation of the four parishes of the greater New Orleans area as HPSAs for mental health in late 2005 and early 2006; before Hurricane Katrina, none of the parishes had this designation for mental health. Officials we interviewed from one recipient with multiple sites told us that while the Greater New Orleans Health Service Corps, which was funded through the Professional Workforce Supply Grant, had been helpful for recruiting and retaining physicians, it had not helped fill the need for social workers. Furthermore, officials we interviewed from two recipients with multiple sites told us that some staff had experienced depression and trauma themselves and found it difficult to work in mental health settings. Beyond challenges in hiring and retaining their own providers and other staff, PCASG fund recipients that responded to our survey reported significant challenges in referring their patients to other organizations for mental health, dental, and specialty care services. (See fig. 3.) Specifically, 14 of the 23 recipients reported that the availability of mental health providers willing to accept referrals was a great or moderate challenge, and over two-thirds of those reporting that level of challenge responded that this challenge had grown since Hurricane Katrina. 
In addition, 10 of the 16 recipients that indicated that the question on dental service referrals was applicable to them reported that the availability of dentists willing to accept referrals was a great or moderate challenge, and about two-thirds of those reporting that level of challenge also reported that this challenge was greater than before Hurricane Katrina. An additional indication of limited availability of dental care is that HRSA has designated all of Orleans, St. Bernard, and Plaquemines parishes and part of Jefferson Parish as HPSAs for dental care; before Katrina, only part of Orleans Parish and part of Jefferson Parish had this designation. Finally, 13 of the 20 recipients that indicated that the question on specialty care referrals was applicable to them reported that the availability of providers willing to accept referrals for specialty care was a great or moderate challenge, and two-thirds of those reported that this challenge had grown since Hurricane Katrina. An additional challenge that the PCASG fund recipients face is to be sustainable after PCASG funds are no longer available. All 23 recipients that responded to our survey reported that they had taken or planned to take at least one type of action to increase their ability to be sustainable—that is, to be able to serve patients regardless of their ability to pay after PCASG funds are no longer available. For example, all responding recipients reported that they had taken action—such as screening patients for eligibility—to facilitate their ability to receive reimbursement for services they provided to Medicaid or LaCHIP beneficiaries. Furthermore, 16 recipients reported that they were billing private insurance, with an additional 5 recipients reporting they planned to do so.
However, obtaining reimbursement for all patients who are insured may not be sufficient to ensure a recipient’s sustainability, because at about half of the PCASG fund recipients, over 50 percent of the patients are uninsured. Many PCASG fund recipients reported that they intended to use Health Center Program funding or FQHC Look-Alike designation—which allows for enhanced Medicare and Medicaid payment rates—as one of their sustainability strategies. Four recipients were participating in the Health Center Program at the time they received the initial disbursement of PCASG funds. One of these recipients had received a Health Center New Access Point grant to open an additional site after Hurricane Katrina and had also received an Expanded Medical Capacity grant to increase service capacity, which it used in part to hire additional staff and buy equipment. Another of these recipients received a New Access Point grant to open an additional site after receiving PCASG funds. Beyond these four recipients, one additional recipient received an FQHC Look-Alike designation in July 2008 and a New Access Point grant in March 2009. Of the remaining 18 recipients that responded to our survey, 6 said they planned to apply for both a Health Center Program grant and an FQHC Look-Alike designation. In addition, 1 planned to apply for a grant only and another planned to apply for an FQHC Look-Alike designation only. Although many recipients indicated that they intended to use Health Center Program funding as a sustainability strategy, they may not all be successful in obtaining a grant. For example, in fiscal year 2008 only about 16 percent of all applications for New Access Point grants resulted in grant awards. About three-quarters of PCASG fund recipients reported that as one of their sustainability strategies they had applied or planned to apply for additional federal funding, such as Ryan White HIV/AIDS Program grants, or for state funding. 
In addition, a few reported that they had applied or planned to apply for private grants, such as from foundations. Although PCASG fund recipients have completed or planned actions to increase their ability to be sustainable, it is too early to know whether their various sustainability strategies will be successful. One factor that may affect the degree of challenge in achieving sustainability is whether a recipient is part of a larger institution, such as a university or government body, that could potentially provide additional funds after PCASG funds are no longer available. Similarly, sustainability may be a less difficult challenge for organizations that are already grantees of HRSA’s Health Center Program. HHS reviewed a draft of this report and provided technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Secretary of Health and Human Services and other interested parties. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. We focused our review on the 25 outpatient provider organizations that in September 2007 received funding through the Primary Care Access and Stabilization Grant (PCASG), which the Department of Health and Human Services (HHS) awarded to the Louisiana Department of Health and Hospitals (LDHH). The PCASG funds were targeted to the greater New Orleans area—specifically, Jefferson, Orleans, Plaquemines, and St. 
Bernard parishes—because of the impact Hurricane Katrina had on this area. In this report we examine (1) how PCASG fund recipients used the PCASG funds to support the provision of primary care services in the greater New Orleans area, (2) how PCASG fund recipients used and benefited from other federal hurricane relief funds that support the restoration of primary care services in the greater New Orleans area, and (3) challenges the PCASG fund recipients continued to face in providing primary care services, and recipients’ plans for sustaining services after PCASG funds are no longer available. In conducting our work, we reviewed relevant literature. We also interviewed officials at various agencies within HHS, including the Administration for Children and Families, Centers for Medicare & Medicaid Services (CMS), Health Resources and Services Administration (HRSA), and Substance Abuse and Mental Health Services Administration. To determine how the PCASG fund recipients used PCASG funds to support the provision of primary care services in the greater New Orleans area, we conducted site visits and developed and implemented a Web-based survey. We also reviewed the recipients’ grant applications and interviewed officials at LDHH and the Louisiana Public Health Institute (LPHI) about how the recipients used PCASG funds. LPHI administers the PCASG program and distributes the grant funds as the local partner of LDHH. We conducted site visits at 8 of the 25 PCASG fund recipients during April 2008. During these visits we collected documents and interviewed PCASG fund recipient, state, and local officials. To identify the locations for our site visits, we chose a selective sample of the recipients to include at least 1 from each of the area’s four parishes.
We also selected recipients so that our sample would include some that offered mental health care services and some that offered dental care services, and we included 2 recipients that were grant recipients of HRSA’s Health Center Program. We developed a Web-based survey that focused on how PCASG fund recipients used PCASG funds, the challenges they continued to face, and their plans for sustainability. To develop our survey questions, we analyzed our interviews with officials from PCASG fund recipients, CMS, and state and local agencies; reviewed the recipients’ applications for funding; and reviewed the PCASG Notice of Award. In addition, before we disseminated the survey to the 25 recipients, the content of the survey questions was peer-reviewed by LPHI because of its expertise on the grant program. We received responses from 23 of the 25 recipients, a response rate of 92 percent. To assess the reliability of the survey data, we performed quality checks, such as reviewing survey data for inconsistencies and completeness, and, when necessary, we followed up with survey respondents by telephone to resolve any inconsistencies and obtain missing information. Based on these efforts, we determined that the survey data were sufficiently reliable for the purposes of this report. To answer our question on how the PCASG fund recipients used and benefited from other federal funds for hurricane relief, we reviewed and analyzed data collected by LDHH on expenditures related to the supplemental Social Services Block Grant (SSBG). Where possible, we used documents from and interviews with state and PCASG fund recipient officials to identify SSBG supplemental funds expended at PCASG sites. In addition, we reviewed and analyzed data gathered by LDHH related to the incentive payments made under the Professional Workforce Supply Grant and expenditures under the Crisis Counseling Assistance and Training Program (CCP).
For the incentive payments made using the Professional Workforce Supply Grant, LDHH provided us with information about health care providers working at PCASG sites. LDHH used the employment address, rather than recipient name, to identify which providers to include, and it provided data about the amount of payment, payment type (retention or hiring), and provider type (for example, internist or nurse). For the CCP, we obtained data from LDHH on program expenditures at PCASG sites. We also interviewed officials from LDHH and PCASG fund recipients about the implementation of these programs in the greater New Orleans area. To assess the reliability of the data we received from LDHH related to the SSBG, Professional Workforce Supply Grant, and CCP, we performed checks of internal consistency and verified information with state and local officials where possible. Based on these efforts, we determined that the data were sufficiently reliable for the purposes of this report. To answer our questions on challenges PCASG fund recipients continued to face in providing primary care services and how PCASG fund recipients plan to sustain primary care services after funds are no longer available, we used information collected from our Web-based survey. We also analyzed interviews we conducted with 10 recipients, including the 8 we visited, and with officials from federal, state, and local agencies. In addition, to determine how recipients planned to sustain primary care services, we reviewed sustainability plans that the recipients included in their applications for PCASG funding. We also analyzed information provided by HRSA on Health Center Program grants awarded to PCASG fund recipients and on overall program grants awarded in fiscal years 2007 and 2008. To provide additional information on the PCASG fund recipients, we used data collected by LPHI about the recipients.
We analyzed data that LPHI provided to us on each PCASG fund recipient for the period September 21, 2007, through March 20, 2008, regarding (1) patients and encounters, and (2) types of services that recipients offered. We obtained these data for this period because at the time of our request, this was the only period for which LPHI had completed its data accuracy and reliability checks on the patient and encounter data. We requested that LPHI summarize for us at the recipient level both the number of patients and the number of encounters, by age and insurance status. To assess the reliability of data we received from LPHI on patient and encounter data and on types of services offered, we did the following: (1) reviewed relevant documentation, (2) discussed with knowledgeable agency officials the data and the processes they used to establish the accuracy and reliability of the data provided, and (3) where possible, compared data to published sources. Based on these activities, we determined that these data were sufficiently reliable for the purposes of our report. We conducted our work from February 2008 through June 2009 in accordance with all sections of GAO’s Quality Assurance Framework that are relevant to our objectives. The framework requires that we plan and perform the engagement to obtain sufficient and appropriate evidence to meet our stated objectives and to discuss any limitations in our work. We believe that the information and data obtained, and the analysis conducted, provide a reasonable basis for any findings and conclusions in this product. In July 2007, HHS awarded the $100 million PCASG to LDHH, which in turn provided funds to 25 outpatient provider organizations in the greater New Orleans area in September 2007. CMS is responsible for administering the program at the federal level. LPHI is LDHH’s local partner for administering the grant program. 
The 25 organizations that are PCASG fund recipients vary in size and in the geographical area they serve. (See table 6.) Furthermore, some recipients are affiliated with an institution such as a university or state or local government, and some receive funding from the Health Center Program of HHS’s HRSA. In addition to the person named above, Helene F. Toiv, Assistant Director; Martha R. W. Kelly; Carolyn Feis Korman; Deitra Lee; Roseanne Price; Dan Ries; Jennifer Whitworth; Rasanjali Wickrema; and Malissa Winograd made key contributions to this report. Hurricane Katrina: Barriers to Mental Health Services for Children Persist in Greater New Orleans, Although Federal Grants Are Helping to Address Them. GAO-09-563. Washington, D.C.: July 13, 2009. Disaster Assistance: Greater Coordination and an Evaluation of Programs’ Outcomes Could Improve Disaster Case Management. GAO-09-561. Washington, D.C.: July 8, 2009. Catastrophic Disasters: Federal Efforts Help States Prepare for and Respond to Psychological Consequences, but FEMA’s Crisis Counseling Program Needs Improvements. GAO-08-22. Washington, D.C.: February 29, 2008. Hurricane Katrina: Allocation and Use of $2 Billion for Medicaid and Other Health Care Needs. GAO-07-67. Washington, D.C.: February 28, 2007. Hurricane Katrina: Status of Hospital Inpatient and Emergency Departments in the Greater New Orleans Area. GAO-06-1003. Washington, D.C.: September 29, 2006. Hurricane Katrina: Status of the Health Care System in New Orleans and Difficult Decisions Related to Efforts to Rebuild It Approximately 6 Months after Hurricane Katrina. GAO-06-576R. Washington, D.C.: March 28, 2006. Hurricane Katrina: GAO’s Preliminary Observations Regarding Preparedness, Response, and Recovery. GAO-06-442T. Washington, D.C.: March 8, 2006. Mental Health Services: Effectiveness of Insurance Coverage and Federal Programs for Children Who Have Experienced Trauma Largely Unknown. GAO-02-813. Washington, D.C.: August 22, 2002.
The greater New Orleans area—Jefferson, Orleans, Plaquemines, and St. Bernard parishes—continues to face challenges in restoring health care services disrupted by Hurricane Katrina. In 2007, the Department of Health and Human Services (HHS) awarded the $100 million Primary Care Access and Stabilization Grant (PCASG) to Louisiana to help restore primary care services to the low-income population. Louisiana gave PCASG funds to 25 outpatient provider organizations in the greater New Orleans area. GAO was asked to study how the federal government can effectively leverage governmental resources to help area residents gain access to primary care services. This report examines (1) how PCASG fund recipients used the PCASG funds to support primary care services in greater New Orleans, (2) how PCASG fund recipients used and benefited from other federal hurricane relief funds that support the restoration of primary care services in the area, and (3) challenges PCASG fund recipients continued to face in providing primary care, and their plans for sustaining services after PCASG funds are no longer available. PCASG fund recipients reported that they used the PCASG funds to hire or retain health care providers and other staff, add primary care services, and open new sites. For example, 20 of the 23 recipients that responded to the GAO survey reported using PCASG funds to hire health care providers, and 17 reported using PCASG funds to retain health care providers. In addition, most of the recipients reported that they used PCASG funds to add primary care services and to add or renovate sites. Recipients also reported that the grant requirements and funding helped them improve service delivery and expand access to care in underserved neighborhoods. Other federal hurricane relief funds helped PCASG fund recipients pay staff, purchase equipment, and expand mental health services to help restore primary care.
Eleven recipients received HHS Social Services Block Grant (SSBG) supplemental funds designated by Louisiana for primary care, and two received SSBG supplemental funds designated by Louisiana specifically for mental health care. The funds designated for primary care were used to pay staff and purchase equipment, and the funds designated for mental health care were used to provide a range of services for adults and children, including crisis intervention and substance abuse prevention and treatment. About two-thirds of the PCASG fund recipients benefited from the Professional Workforce Supply Grant incentives. These recipients hired or retained 69 health care providers who received incentives totaling over $4 million to work in the greater New Orleans area. In addition, one PCASG fund recipient expended $7.9 million it received from Louisiana to provide services through the federal Crisis Counseling Assistance and Training Program.

PCASG fund recipients continue to face multiple challenges and have various plans for sustainability. Recipients face significant challenges in hiring and retaining staff, as well as in referring patients outside of their organizations, and these challenges have grown since Hurricane Katrina. For example, 20 of 23 recipients that responded to the GAO survey reported hiring was a great or moderate challenge, and among these 20 recipients over three-quarters reported that this challenge had grown since Hurricane Katrina. Six of the 7 recipients that primarily provide mental health services reported that both hiring and retention of providers were great or moderate challenges. Many PCASG fund recipients also reported challenges in referring patients outside their organization for mental health, dental, and specialty care services. Although all PCASG fund recipients have completed or planned actions to increase their ability to be sustainable, it is too early to know whether their various sustainability strategies will be successful.
The U.S. aerospace industry contributes to the nation's economic health and national security. The industry's wide-ranging activities—including aircraft manufacturing and commercial aviation—make it a major contributor to U.S. economic growth. DOT and the FAA (an administration of DOT) each play a policy and regulatory role in aviation, with DOT involved in consumer and economic issues, such as licensing airlines and reviewing applications for antitrust immunity between airlines, and FAA overseeing the safety of civil aviation. To inform these efforts, various administrations and Congresses have periodically established committees or commissions composed of external stakeholders to provide recommendations for DOT and FAA, as well as other agencies involved in aviation, to consider in their implementation of aviation policy. For example, in 1996, President Clinton established the White House Commission on Aviation Safety and Security, which provided 57 recommendations in the areas of safety, air traffic control, security, and accident response. In 2001, Congress established the Commission on the Future of the United States Aerospace Industry to study issues associated with the future of this industry in the global economy and to recommend potential actions by the federal government to support the maintenance of a robust aerospace industry in the 21st century. In contrast to these two efforts, the FAAC was established as a federal advisory committee, subject to the Federal Advisory Committee Act (FACA). Federal advisory committees exist throughout the executive branch of the federal government, providing input and advice to agencies in a variety of ways, such as preparing reports and developing recommendations.
GAO has noted that advisory committees can be effective tools for agencies to gather input on topics of interest by informing agency leaders about issues of importance to the agencies' missions, consolidating input from multiple sources, and providing input at a relatively low cost. While an advisory group's input or recommendations may form the basis for a federal agency's decisions or policies, other factors may play a role in determining what action an agency ultimately takes. Because such groups are by design advisory, agencies are not required to implement their advice or recommendations. On April 16, 2010, Secretary Ray LaHood chartered the FAAC for a one-year term and asked the DOT Assistant Secretary for Aviation and International Affairs to lead the effort. The FAAC was directed to provide information, advice, and recommendations to the Secretary on ensuring the competitiveness of the U.S. aviation industry and its capability to address the evolving transportation needs, challenges, and opportunities of the global economy. The FAAC was composed of a cross-section of aviation stakeholders. During its first meeting, the Secretary asked the FAAC to develop consensus-based recommendations that could be acted upon immediately or in the near future, with tangible results. He also stated that the FAAC should remain cognizant of the tools that DOT and FAA could use to implement the recommendations, such as the federal rulemaking process, proposing legislation to Congress, and recommending compliance measures for industry. The FAAC established subcommittees to develop recommendations in the five areas specified in the FAAC charter: environment, financing, competitiveness and viability, labor and workforce, and safety.
The subcommittees met multiple times over the course of 2010, and the full FAAC briefed the Secretary on its 23 recommendations on December 15, 2010, with a final report outlining the recommendations and their underlying rationale released on April 11, 2011. The Secretary publicly emphasized that the FAAC recommendations would not "sit on a shelf," and DOT established a process to implement the recommendations. DOT officials stated that the Office of Aviation and International Affairs has led DOT's work on addressing the FAAC recommendations and has provided periodic updates to FAAC members. Each recommendation was assigned to an "owner"—DOT or FAA staff—some of whom were already conducting work relevant to the recommendation. DOT officials also noted that some of the recommendations with multiple subactions have more than one owner. DOT staff collaborated with recommendation owners to create a "smart sheet" for addressing each FAAC recommendation. This document provided information on interim goals, or actions that must occur to reach the final goal; beneficiaries and allies of the recommendation; and potential challenges. The owners were not required to update these documents as time progressed. DOT officials stated that for about a year after the FAAC report was released, DOT's Office of Aviation and International Affairs had regular status meetings with the recommendation owners to ascertain their progress, and recommendation owners then reported progress periodically to DOT officials. DOT periodically updates a website on the status of the FAAC recommendations. During the course of our review, we noted that some of the recommendations had not been updated for over one year; however, DOT updated the status of all of the recommendations on its website in June 2013. DOT and FAA have taken actions on the 10 FAAC recommendations we reviewed.
DOT and FAA officials noted that 3 of the 10 recommendations we reviewed—sustainable alternative fuels, global competitiveness, and a harmonized approach to carbon dioxide emission reductions—continue to be addressed as part of long-term efforts. For example, DOT officials noted that addressing the global competitiveness recommendation is tied to long-term policy efforts with no timetable for conclusion, such as negotiating agreements with other countries to reduce the barriers for U.S. carriers interested in serving markets in those countries. While officials stated that DOT and FAA have addressed the other seven recommendations, they highlighted ongoing work on issues related to some of these recommendations. For example, FAA officials noted they have addressed the recommendation to accelerate investment and installation of NextGen equipment on aircraft because they are working to develop a program to provide financial incentives and operational benefits to operators that install NextGen equipment early. However, the officials added that they are still working to determine what the program will entail, including soliciting input from aircraft operators and potential private partners to determine how to establish an incentive program that operators want to participate in. They also noted that the appropriations requirements for a credit program have not yet been met. For additional details on actions DOT and FAA have taken on the recommendations, see section 1 of this report. FAAC members acknowledged DOT and FAA efforts to address the 10 recommendations selected for our work; however, a majority of the FAAC subcommittee members believe more work remains to fully address 9 of the 10 recommendations. See table 2 for a summary of the recommendations as well as their status according to DOT and the FAAC subcommittee members we interviewed. 
Similar to DOT and FAA officials, FAAC members stated that some recommendations may not be fully addressed due to the recommendation’s being linked to ongoing or long-term efforts. For example, FAAC members noted that DOT and FAA efforts to address the sustainable alternative fuels recommendation require collaborating across many agencies, such as the Department of Energy (DOE), the United States Department of Agriculture (USDA), and the Environmental Protection Agency (EPA), and a continued, long-term federal focus. In some cases, FAAC members stated that DOT and FAA should continue their current efforts to address the recommendations; but some FAAC members felt that DOT and FAA should take additional actions to address certain recommendations. For example, five of the six finance FAAC subcommittee members stated that the recommendation to fund accelerated NextGen equipage of aircraft was not addressed, and two suggested that the department take additional steps to collaborate with industry to design an incentive program and develop a stronger business case for airlines to invest in equipping early. Conversely, five of the seven subcommittee members believe that the department’s actions addressed the recommendation on science, technology, engineering, and mathematics (STEM) education. For additional details on FAAC members’ perspectives on the DOT and FAA actions to address the recommendations, see section 1 of this report. While DOT is not required to monitor or report on the status of the FAAC recommendations, 13 of the 17 FAAC members generally agreed that the department should continue monitoring and reporting on the recommendations’ status, with some adding that this ensures continued focus on their implementation. On the other hand, two FAAC members stated that DOT should continue reporting only on certain recommendations, such as recommendations that would be implemented over the long term or recommendations it has not addressed. 
Another FAAC member stated that DOT and FAA should ensure that efforts to address the recommendations become ingrained in the daily work of agency staff, rather than DOT's and FAA's viewing them as a separate effort. The remaining FAAC member stated that DOT should reassess whether it makes sense to continue or close out the effort, adding that as time passes, the recommendations may become irrelevant or unlikely to be addressed. DOT officials told us that the department will continue work on the ongoing recommendations and is determining the extent to which it will continue reporting on their status. However, DOT has not yet established a time frame for when it might make such a decision. Resource constraints and the need to collaborate with multiple stakeholders were cited most frequently by DOT and FAA officials, as well as by FAAC members, as implementation challenges. Specifically, DOT and FAA officials or FAAC members identified resource constraints, such as limited funding for programs or staff, as a challenge in implementing 6 of the 10 FAAC recommendations that we reviewed. These 6 include the recommendations pertaining to sustainable fuels, carbon dioxide emission reductions, reviewing eligibility criteria for the Airport Improvement and Passenger Facility Charge Programs, STEM education, predictive safety risk-discovery capability, and prioritizing rulemaking. Agency officials and committee members also explained how constrained resources might affect progress in addressing these recommendations. For example, when discussing the recommendation that DOT and FAA establish a harmonized approach for aviation carbon dioxide emission reductions, FAA officials told us that resource constraints could hamper their ability to conduct necessary research and participate in international forums designed to foster discussion on harmonized regulatory approaches.
However, DOT and FAA officials also outlined their methods for operating within these constraints. For example, while noting that funding and limited resources pose challenges to maintaining long-term STEM efforts, officials stated that the agencies work to leverage resources through stakeholder partnerships. In addition, DOT and FAA officials or FAAC members identified challenges in collaborating with and gaining the consensus of a number of stakeholders for four recommendations: those related to sustainable fuels, carbon dioxide emission reductions, global competitiveness, and STEM education. These recommendations range in the type of involvement needed both within and outside DOT and FAA. For example, six of the seven FAAC labor and workforce subcommittee members we interviewed recognized that DOT's ongoing efforts on STEM issues require that DOT maintain the participation of outside groups, such as other agencies, industry, and other stakeholders. However, each recommendation also had unique challenges. For example, with respect to the recommendation that the agencies ensure that safety data submitted by air carriers and other stakeholders are protected from public disclosure, FAAC members noted that despite legal protections provided by recent legislation, industry concerns remain regarding the disclosure of safety data during legal proceedings. FAA officials have recognized that such concerns could limit the implementation of FAA efforts to promote safety management systems, which depend upon the open sharing of safety information among aviation stakeholders. In addition, DOT and FAA officials noted that in some cases they took actions to address the recommendation, but factors beyond DOT's control, such as the need for legislative action, affected DOT's ability to fully implement the recommendation.
For example, while the agency supported a provision that would have extended the alternative minimum tax exemption for all private activity bonds, including airport private activity bonds—as recommended by the FAAC—this provision did not become law. Agency officials view this recommendation as addressed since any further action would require legislative action, and there is not currently a legislative vehicle for this provision. Section 1 of this report includes the 10 recommendations in our review, detailed discussions of DOT's and FAA's completed or planned actions to address each of the 10 FAAC recommendations, FAAC members' assessment of DOT and FAA progress, and challenges to implementing each recommendation. We provided DOT with a draft of this report for its review and comment. DOT responded by email and provided technical clarifications, which we incorporated into the report as appropriate. We are sending copies of this report to appropriate congressional committees, the Secretary of Transportation, and interested parties. This report will also be available at no charge on the GAO website at http://www.gao.gov. Should you or your staff have questions concerning this report, please contact me at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. This section presents the 10 FAAC recommendations we reviewed and includes detail on DOT and FAA actions to address the recommendations, FAAC members' assessment of DOT and FAA progress on the recommendations, and challenges in implementing each recommendation. Also, we numbered these recommendations for reporting purposes, but these numbers do not align with the numbers assigned to these recommendations in the FAAC report.
environmental and cost analysis; funding certification and qualification testing to inform efforts by the standard-setting organization ASTM International; and enabling government and aviation industry coordination. FAA officials told us FAA does not have authority or funding to provide financial incentives for development and deployment of alternative aviation fuels. However, other federal agencies—such as the Internal Revenue Service, USDA, and DOE—have administered programs that provide numerous incentives, including subsidies through tax credits, to produce feedstocks and to refine and use biofuels. In its report, the FAAC stated that the aviation industry has unique fuel requirements and is well-positioned to be a national and international leader in the use of sustainable renewable alternative fuels. The FAAC recommended that DOT exercise strong national leadership to promote and display U.S. aviation as a first user of sustainable alternative fuels and provide increased support for FAA's work on alternative fuels. The FAAC discussed four specific areas in supporting its recommendation. Table 4 provides a summary of the FAAC recommendation and underlying rationale, and DOT's and FAA's actions to address it, as of June 2013.

FAAC recommendation—alternative fuels: Ensure accepted environmental criteria for alternative fuels, domestically and internationally. The DOT/FAA should develop and execute a plan, working with government, industry, and other relevant domestic stakeholders, to develop and confirm environmental criteria, including associated life-cycle analysis protocols, for aviation alternative fuels. The DOT/FAA should also work to facilitate international acceptance of these criteria so the benefits of alternative aviation fuels can be available domestically and internationally.

DOT/FAA actions: As of May 2013, FAA had six ongoing efforts—some of which are through one of its Centers of Excellence and the John A. Volpe National Transportation Systems Center—that focus on environmental impacts, economic impacts, feasibility, and sustainability of alternative jet fuels, among other things. FAA also collaborates with other federal agencies—including DOD, EPA, and DOE—on domestic efforts to develop life-cycle analysis protocols. CAAFI stakeholders are currently examining how CO2 emissions would differ under different policies and regulatory assumptions, including those under EPA's RFS2 program and the European Union's renewable energy directive. FAA follows international efforts and shares best practices with the United Nations' International Civil Aviation Organization (ICAO) and other countries. For example, FAA has formal agreements to cooperate with Australia, Brazil, Germany, and Spain on issues related to alternative aviation fuels.

FAA's goals—to replace a targeted volume of petroleum jet fuel with alternative jet fuel by 2018 and to achieve carbon-neutral growth by 2020 (baseline year of 2005)—are several years or more away. Because of the long-term nature of this effort, officials told us they plan to continue FAA's research and collaboration with other agencies, industry groups, and other countries.

All five of the FAAC environmental subcommittee members we interviewed expressed support for FAA's actions to address this recommendation, adding that fully addressing this recommendation will require a long-term, ongoing effort and collaboration with a number of parties. For example, some subcommittee members identified additional actions that they think FAA or the federal government should take in response to this recommendation. Two subcommittee members noted the need for an increased federal focus on supporting the deployment and commercial viability of alternative fuels through tax incentives, public-private partnerships, direct support, or other means to encourage industry to invest its capital.
One subcommittee member stated that while DOT or FAA may not have the statutory authority to provide incentives to accelerate development and deployment, they should explore their authority to support additional deployment activities. In our discussions with FAA officials on this issue, they stated that FAA has not sought direct authority for these activities because they believe that it is a wiser use of federal resources to use existing federal support mechanisms that are funded by USDA and DOE. Another subcommittee member stated that more overall federal funding is needed for research on conversion technology and getting through the fuel approval process, and that the research projects that receive federal funding should be focused specifically on producing the data needed for fuel approval, rather than addressing broader issues. In addition, this subcommittee member stated that FAA should increase its outreach and collaboration with other groups such as the Sustainable Aviation Fuel Users Group, the Sustainable Aviation Biofuels for Brazil effort, and the Midwest Aviation Sustainable Biofuels Initiative. According to FAA officials, the agency is engaged in the latter two initiatives.

Resource constraints. FAA officials and two of the subcommittee members stated that resources, specifically funding for the programs and staff time to work on the projects, pose a challenge in addressing this recommendation. Subcommittee members raised concerns about budget uncertainty affecting the agencies' abilities to support long-term efforts in this area.

Scalable, affordable, and sustainable fuel supply. FAA officials and two subcommittee members also pointed out the challenges related to scalability—that is, being able to produce sufficient quantities of alternative aviation fuel at a reasonable price. FAA officials noted that building a sustainable supply chain would be a challenge.
However, officials said that the production activities by USDA, DOE, and DOD could help establish the market, and the programs at DOE, EPA, and USDA should help lower costs and make commercialization possible. FAA officials added that more time, continued funding, and political support will help them achieve their goals.

Uncertainties inherent in new fuels technologies. According to FAA officials, part of the underlying challenge of developing a sustainable supply chain is the limited financial investment by private industry due, in part, to risk associated with uncertainties. Such uncertainties include the ability of alternative fuels to compete with petroleum and the effects of possible future regulatory changes, such as the sustainability of tax credits, grants, or loan guarantees that support the development and commercialization of alternative jet fuel, factors which are not under FAA's authority.

Collaboration with many stakeholders. Three subcommittee members identified challenges related to the need to collaborate with many stakeholders on this recommendation. For example, a subcommittee member also noted the need to collaborate with non-aviation stakeholders, such as agriculture-financing representatives, while two other members highlighted that work on this recommendation is a multi-agency process.

We recently began work in this area examining the progress made in developing alternative jet fuels in the United States, as well as the key challenges that exist and federal efforts that should be taken to address those challenges. We plan to report the results of our work in 2014.

Biofuels: Potential Effects and Challenges of Required Increases in Production and Use. GAO-09-446. Washington, D.C.: August 25, 2009.

Aviation and Climate Change: Aircraft Emissions Expected to Grow, but Technological and Operational Improvements and Government Policies Can Help Control Emissions. GAO-09-554. Washington, D.C.: June 8, 2009.
Aviation and the Environment: NextGen and Research and Development Are Keys to Reducing Emissions and Their Impact on Health and Climate. GAO-08-706T. Washington, D.C.: May 6, 2008.

As previously noted, commercial aviation's contribution to greenhouse gas emissions is reported to be relatively small but is forecasted to grow, including carbon dioxide (CO2) emissions. The 1997 Kyoto Protocol, an international agreement to minimize the adverse effects of climate change, stated that greenhouse gases from aviation fuels should be limited or reduced, and that such efforts should be conducted through ICAO. In 2010, the ICAO Assembly agreed upon the following goals: achieve a global annual average fuel efficiency improvement rate of 2 percent until 2020 and pursue an aspirational global fuel efficiency improvement rate of 2 percent per year from 2021 to 2050; and achieve global carbon-neutral growth from 2020 onward. A table in the report provides a summary of the FAAC recommendation on carbon dioxide emission reductions and DOT's and FAA's actions to address it, as of June 2013.

FAA officials stated that addressing this recommendation is a long-term effort and noted their ongoing efforts in this area. For example, FAA officials noted that continued domestic and international coordination is required to address this recommendation. FAA officials said that implementing the agency's Reduction Plan will continue to require sustained collaboration with partner agencies such as the National Aeronautics and Space Administration and the aviation industry. FAA officials said they would advocate that countries' reduction plans be updated on a triennial basis and would plan to update the U.S. Reduction Plan in 2015 and triennially afterward, if requested by future ICAO agreements. Officials also told us that they will participate in discussions on market-based measures at the upcoming ICAO Assembly meeting in September 2013.
All five of the FAAC environmental subcommittee members told us that the recommendation was not fully addressed, noting that work in this area was ongoing and that FAA should continue its involvement in ICAO's ongoing efforts. FAA officials said that they have adopted this goal because it is consistent with the U.S. government's commitment made during the 2009 round of U.N. negotiations on climate change (the Copenhagen Accord) and reflects the position that the United States, Mexico, and Canada jointly took prior to the ICAO Assembly meeting in 2010.

Resource constraints. FAA officials and a FAAC member told us that sustained federal financial support will be necessary to achieve the technical and operational improvements that are expected to result in emissions reductions consistent with ICAO and FAA goals. For example, resource constraints at the agency could hamper efforts to implement NextGen within the expected time frames; conduct necessary research and development of new technologies, including sustainable alternative jet fuels; and participate in the work and discussions at ICAO and other forums.

NextGen Air Transportation System: FAA Has Made Some Progress in Midterm Implementation, but Ongoing Challenges Limit Expected Benefits. GAO-13-264. Washington, D.C.: April 8, 2013.

Aviation and Climate Change: Aircraft Emissions Expected to Grow, but Technological and Operational Improvements and Government Policies Can Help Control Emissions. GAO-09-554. Washington, D.C.: June 8, 2009.

International Climate Change Programs: Lessons Learned from the European Union's Emissions Trading Scheme and the Kyoto Protocol's Clean Development Mechanism. GAO-09-151. Washington, D.C.: November 18, 2008.

Aviation and the Environment: NextGen and Research and Development Are Keys to Reducing Emissions and Their Impact on Health and Climate. GAO-08-706T. Washington, D.C.: May 6, 2008.
Aviation and the Environment: Strategic Framework Needed to Address Challenges Posed by Aircraft Emissions. GAO-03-252. Washington, D.C.: February 28, 2003.

Municipal bond proceeds are a significant funding source for airports' capital development. Municipal bonds for airports are generally classified as private activity bonds, since the bond proceeds are used for private business purposes. The private activity bonds for airports are tax-exempt (also known as "qualified" private activity bonds). However, qualified private activity bonds are subject to restrictions that do not apply to governmental bonds. Among these restrictions, the interest income from qualified private activity bonds is included in income when calculating the alternative minimum tax (AMT), whereas the interest on governmental bonds is not. A bondholder whose total income reached the level subject to the AMT would have to pay tax on the interest earned from airport qualified private activity bonds. The American Recovery and Reinvestment Act of 2009 exempted private activity bonds from the AMT in 2009 and 2010, and allowed for the refinancing of some private activity bonds into non-AMT debt. Table 6 provides a summary of the FAAC recommendation to support extending the AMT exemption and DOT's and FAA's actions to address it, as of June 2013.

FAA officials told us that any further action on extending the AMT exemption to private activity bonds would require legislative action, and there is not currently a legislative vehicle for this provision. As a result, DOT does not plan to take any additional actions and considers the recommendation to be closed. One subcommittee member did not think that an AMT exemption should be included in legislation without a cost-benefit analysis.

Mixed perspectives on the appropriateness of the AMT exemption. FAA may face challenges encouraging congressional action on this recommendation due to mixed perspectives on the appropriateness of tax exemptions for bond financing.
Our prior work has shown the importance of determining the economic efficiency of applying preferential tax treatment to selected investments. High-level analyses of the AMT exemption have shown it leads to a loss of federal revenue, but data specific to airports are limited. However, FAA and industry stakeholders counter that exempting airport private activity bonds has led to significant savings. FAA conducted an analysis of the financial impact of the AMT exemption in the American Recovery and Reinvestment Act of 2009, and stated that the exemption resulted in significant savings for airports, increased capital investment, and increased employment. Our tax expenditure guide provides useful resources for evaluating the efficiency of applying preferential tax treatment to selected investments, and could assist Congress as it considers these provisions.

Legislative change. As previously noted, any further action on extending the AMT exemption to private activity bonds will require legislative action. One FAAC subcommittee member noted that airport stakeholders may face challenges in lobbying for such an exemption given their limited financial resources and lack of natural alliances with other industries that previously received an exemption from the AMT.

Tax Expenditures: Background and Evaluation Criteria and Questions. GAO-13-167SP. Washington, D.C.: November 29, 2012.

Tax Policy: Tax-Exempt Status of Certain Bonds Merits Reconsideration, and Apparent Noncompliance with Issuance Cost Limitations Should Be Addressed. GAO-08-364. Washington, D.C.: February 15, 2008.

Airport Finance: Observations on Planned Airport Development Costs and Funding Levels and the Administration's Proposed Changes in the Airport Improvement Program. GAO-07-885. Washington, D.C.: June 29, 2007.

FAA is transforming the nation's ground-based air-traffic control system to an air-traffic management system using satellite-based navigation and other technology.
This transformation is referred to as NextGen. NextGen is intended to enhance airspace safety, reduce delays, save fuel, and reduce carbon dioxide emissions and other adverse environmental impacts. While some operational improvements can be made with existing aircraft equipment, realizing more significant benefits of NextGen necessitates additional investment by airlines in new technologies to establish a critical mass of properly equipped aircraft. However, GAO and others have noted that a variety of disincentives may deter operators from investing early in NextGen equipment. For example, we have reported that aircraft operators may be hesitant to make investments in equipment if they do not have confidence that FAA will deliver the systems, procedures, and capabilities to realize the benefits from their investments. In addition, the FAAC report identified challenges for FAA to overcome in encouraging operators to equip early, including: (1) prior instances in which operators equipped aircraft but received little or no benefit because the FAA did not implement quickly enough the necessary procedures or approvals to enable operators to derive benefits from the equipment; and (2) the business case may be weak for individual operators to purchase and install equipment early, with costs far exceeding expected direct benefits to users. The FAAC noted that accelerated deployment of NextGen could lead to capacity, efficiency, environmental, and safety benefits. It also emphasized the need to overcome challenges in encouraging operators to equip early by providing some form of public financing to incentivize equipage. One FAAC finance subcommittee member formally dissented to this recommendation and questioned the need for public financing of airline equipment, citing a lack of evidence that the benefits gained from equipment investments would not be sufficient to encourage industry adoption without government subsidies. 
Table 7 provides a summary of the FAAC recommendation to accelerate investment and installation of NextGen equipment on aircraft and FAA’s actions to address it, as of June 2013. FAA officials stated FAA has addressed this recommendation, with the help of the authority granted in the FAA Modernization and Reform Act. Officials noted that they are still working to determine what the program will entail, including soliciting input from aircraft operators and potential private partners to determine how to establish an incentive program that operators want to participate in. In addition, the appropriations requirements for a loan guarantee program have not yet been met. FAA officials also noted that they are considering how to administer the loan guarantee program and FAA may need to issue a contract for an external group to fulfill this role; however, FAA currently does not have funding to do so. FAA officials noted that while they have been focused on possibly establishing a loan guarantee program given the mandate to maximize private investment and a lack of funding for a different type of program, they have solicited input from industry on ideas for other incentive programs that would maximize private investment and have not received any specific suggestions. Uncertainty about NextGen implementation remains a challenge in encouraging operators to equip, with two subcommittee members noting that sequestration has introduced an additional level of uncertainty into NextGen implementation time frames.
In April 2013, we noted that stakeholders, including RTCA and the NextGen Advisory Committee, have stressed the need for additional information to understand the potential direct costs, benefits, and return on investments that might be realized from technological and equipage investments. While FAA’s NextGen plans include some examples of benefits, RTCA reported in 2011 that available FAA plans do not include sufficient information for airlines making investment decisions such as forecast benefits by either location or usage, or the proportion of the local fleet that is currently equipped. We noted that without greater certainty on when and where NextGen improvements are planned, airlines and others are unlikely to invest in the equipment, staffing, and training needed to help achieve the full benefits of NextGen implementation. We recommended FAA assure that NextGen planning documents provide stakeholders information on how and when operational improvements are expected to achieve NextGen goals and targets. In June 2013, DOT concurred with our recommendation and stated that it has efforts underway to better integrate various NextGen plans to provide more of this type of information. NextGen Air Transportation System: FAA Has Made Some Progress in Midterm Implementation, but Ongoing Challenges Limit Expected Benefits. GAO-13-264. Washington, D.C.: April 8, 2013. Next Generation Air Transportation System: FAA Faces Implementation Challenges. GAO-12-1011T. Washington, D.C.: September 12, 2012. Next Generation Air Transportation System: FAA Has Made Some Progress in Implementation, but Delays Threaten to Impact Costs and Benefits. GAO-12-141T. Washington, D.C.: October 5, 2011. Next Generation Air Transportation System: Challenges with Partner Agency and FAA Coordination Continue, and Efforts to Integrate Near-, Mid-, and Long-term Activities Are Ongoing. GAO-10-649T. Washington, D.C.: April 21, 2010. 
Next Generation Air Transportation System: FAA Faces Challenges in Responding to Task Force Recommendations. GAO-10-188T. Washington, D.C.: October 28, 2009. Next Generation Air Transportation System: Progress and Challenges Associated with the Transformation of the National Airspace System. GAO-07-25. Washington, D.C.: November 13, 2006. Under the Passenger Facility Charge (PFC) program, airports may collect up to $4.50 for every boarded passenger at commercial airports. Airports can use this funding for FAA-approved projects related to enhancing airport safety, capacity, security, noise compatibility, and for enhancing competition among airlines. Project eligibility is almost identical between the Airport Improvement Program (AIP) and the PFC program except that airports may use PFC funding for repaying bonds and for airline waiting areas and gates that generally are not eligible for AIP grants because they are considered revenue producing for airlines. In its report, the FAAC noted interest among the aviation community in broadening AIP and PFC eligibility criteria to support aviation infrastructure projects, including those related to NextGen—FAA’s initiative to transform the nation’s ground-based air-traffic control system to an air-traffic management system using satellite-based navigation and other advanced technology. However, the FAAC noted that current regulations do not generally allow AIP or PFC funds to be used for NextGen-related projects. Table 8 provides a summary of the FAAC recommendation to assess AIP and PFC eligibility criteria for NextGen projects and FAA’s actions to address it, as of June 2013. Although FAA officials told us they consider this recommendation addressed, they said that they continue to track other potential NextGen-related projects that could be recommended for funding through AIP and PFC.
As noted in table 8, DOT’s June 2013 update on the status of this recommendation stated that for the next FAA authorization, FAA is considering recommending a pilot program to permit states to fund installation of ADS-B ground stations to provide airborne surveillance coverage. Four of the six FAAC finance subcommittee members felt that the recommendation was not fully addressed. Of the two remaining subcommittee members, one did not feel that he had enough information about DOT’s actions on the recommendation to determine whether it was fully addressed, while the other said that the recommendation with respect to AIP was addressed, but it was unclear what action DOT had taken with respect to the PFC program. Two of the subcommittee members, including one who did not provide an opinion on the status of the recommendation, stated that FAA’s approach to the recommendation was very narrow and that the FAAC envisioned a broader review to determine how to expand the scope of the AIP and PFC programs. One of these subcommittee members stated FAA should have had additional meetings with airport representatives after the FAAC completed its work to explore options for rewriting the AIP and PFC programs. Another subcommittee member stated that discussions related to AIP and PFC should reflect the context of the FAAC’s recommendation to commission an independent study of federal aviation taxes and fees, which we did not examine as part of this report. Two subcommittee members noted the need to address funding issues, which are discussed in the challenges section. As noted, the proposed pilot program to permit states to fund installation of ADS-B ground stations—a key NextGen technology—could potentially expand the use of PFC funds. Limited funding. FAA officials and two subcommittee members stated that airports are struggling to address their immediate needs with the existing AIP and PFC funds, making it difficult to expand use of these funds to include NextGen projects.
Appropriations for AIP have been flat—roughly $3.5 billion for fiscal years 2005 through 2011—and were reduced under current authorized levels to $3.35 billion annually from fiscal year 2012 through fiscal year 2015. In addition, a recent statute allowed DOT to transfer funds from AIP to avoid air traffic controllers’ furloughs and reduce the impacts of other reductions. With respect to PFC, two subcommittee members noted that the PFC has not been increased. (The PFC has remained capped at $4.50 since 2000.) As we have noted, airports have long sought to increase the PFC cap, arguing that the fee cap has not been adjusted for inflation, while airlines counter that raising the PFC inhibits demand for air travel. One subcommittee member stated that expanding AIP and PFC eligibility to include additional costs would require a significant study to determine the most appropriate use of the funds. Transportation: Alternative Methods for Collecting Airport Passenger Facility Charges. GAO-13-262R. Washington, D.C.: February 14, 2013. Airport Noise Grants: FAA Needs to Better Ensure Project Eligibility and Improve Strategic Goal and Performance Measures. GAO-12-890. Washington, D.C.: September 12, 2012. Airport Finance: Observations on Planned Airport Development Costs and Funding Levels and the Administration’s Proposed Changes in the Airport Improvement Program. GAO-07-885. Washington, D.C.: June 29, 2007. Ongoing growth in the demand for international air travel presents opportunities for expansion for U.S. carriers. However, we previously found that U.S. carriers’ ability to establish new routes can be limited by the policies of foreign governments. Preferential treatment of national airlines restricts U.S. carriers’ access to possibly large international markets. Other impediments to entry to foreign markets include barriers, such as slot restrictions, that limit the potential for service by U.S. carriers. DOT has taken steps to address restrictive market access policies. In 1992, DOT launched an initiative to enter into new “Open Skies” agreements with foreign countries.
Open Skies agreements remove the vast majority of restrictions on how airlines of the two signatory countries may operate between and beyond their respective territories. For example, these agreements remove prohibitions on the routes that airlines of the signatory countries can fly or the number of airlines that can fly them. Open Skies agreements also provide underlying traffic rights and provisions for cooperative marketing arrangements that allow airlines from different countries to form alliances with one another. Operating in an alliance allows an airline to greatly expand its service network, without having to increase the number of routes it flies using its own aircraft. U.S. and foreign air carriers wishing to enter into an alliance may also request that DOT grant them immunity from the U.S. antitrust laws. Antitrust immunity allows these airlines to coordinate their fares, services, and capacity as if they were a single carrier in these markets, subject to certain conditions. As part of its antitrust immunity review, DOT determines the effects of the immunized alliance on competition and whether the alliance serves the public interest. However, there is not universal agreement that alliances best serve the traveling public. DOT issued a 1995 Statement of U.S. International Air Transportation Policy recognizing the importance of international air service and the need to enable U.S. carriers to serve foreign markets. 
The policy provides a broad umbrella under which actions such as bilateral negotiations and antitrust immunity alliance reviews are taken and emphasizes a number of objectives, including, but not limited to: providing carriers with unrestricted opportunities to develop services and systems to meet market demand; eliminating market distortions internationally, such as government subsidies and unequal access to infrastructure; and encouraging the development of a cost-effective and productive air-transportation industry through addressing infrastructure needs, privatizing airlines, and reducing barriers to the creation of global aviation systems, such as limitations on cross-border investment, wherever possible. The Secretary of Transportation also serves as a member of the President’s Export Promotion Cabinet. This cabinet was established through executive order in 2010 and was tasked with implementing the National Export Initiative, which seeks to double the dollar value of U.S. exports by the end of 2014. The FAAC report applauded DOT’s efforts to open foreign markets to U.S. airlines, but also noted that some of the world’s fastest-growing aviation markets—including those in Asia, South America, and the Near East—remain restricted to U.S. air carriers. In addition, the FAAC noted that in some key markets, U.S. passenger and cargo air carriers not only face restrictive aviation agreements, but also must confront a wide range of practical market access barriers—including slot restrictions, airspace limitations, and local ground-handling rules—that increase their operating costs and limit U.S. air carriers’ ability to compete, both domestically and globally. In addition, while many different factors can be included in DOT’s public interest analysis of antitrust immunity for alliances, including wages and working conditions, some FAAC members expressed concerns about the impacts of immunized alliances between U.S. and foreign carriers on U.S. workers.
DOT officials told us that their actions on this recommendation consisted of a three-prong approach: (1) fostering conditions that enable global alliances to develop as well as ensuring DOT gives weight to existing statutory criteria when reviewing requests for antitrust immunity for alliances between U.S. and foreign carriers; (2) continuing efforts to open foreign markets to U.S. carriers; and (3) expanding DOT’s role in promoting aviation exports. Table 9 provides a summary of the FAAC recommendation to promote global competitiveness and DOT’s actions to address it, as of June 2013.
FAAC recommendation—global competitiveness: Leverage the Secretary of Transportation’s appointment to the President’s Export Promotion Cabinet, and support an expansion of the DOT’s role in promoting aviation exports for U.S. air carriers, manufacturers, and airports, and facilitating international tourism.
DOT/FAA actions: The Secretary serves on the President’s Export Promotion Cabinet, and DOT is also part of a working group for the National Export Initiative. This initiative was established in 2010 to support trade promotion, is led by the Secretary of Commerce, and includes representatives from the United States Trade Representative, Department of State, and the U.S. Trade and Development Agency. The initiative includes an emphasis on addressing aerospace market access and infrastructure issues, seeking to establish Open Skies agreements and eliminate restrictive aviation practices worldwide. Removal of such restrictions could allow U.S. airlines to more fully participate in the transport of U.S. exports overseas. DOT officials noted that they are working to ingrain these recommendations into their existing efforts and processes. They stated that DOT continues to pursue Open Skies agreements with China, Mexico, Argentina, South Africa, and Russia, but given the nature of negotiating such agreements, there is no timetable for concluding these discussions.
DOT is also considering a “best practices” template outlining how to implement Open Skies provisions in other countries but has not set a timeframe for finalizing this template. Several subcommittee members commented on coordination with other agencies, such as the Departments of Homeland Security and Commerce, on the National Export Cabinet. Two members stated that the agencies should work to make it easier for tourists and business travelers to obtain visas, and another member noted the need to ensure U.S. carriers receive the same opportunities overseas that foreign carriers receive in the U.S. market. Three members stated that DOT should take a more aggressive approach when addressing anti-competitive practices by other countries and supporting the U.S. aviation industry. One subcommittee member stated that DOT needs to refocus on its statutory mandate to strengthen the competitive position of air carriers to at least ensure equality with foreign air carriers; and that the federal government should pursue a national airline policy to follow through on this and other FAAC recommendations. Another member noted that concerns remain with respect to equal access to airport slots during popular travel times, while the third member said that DOT should push harder to get an agreement in place with China. In addition, two subcommittee members, including one who felt the recommendation was addressed, emphasized the need for DOT to consider the labor impacts of Open Skies agreements and the resulting alliances between U.S. and foreign carriers. Another member noted the need to consider the safety impacts. These members noted a lack of consensus during the FAAC discussions regarding how DOT should address these issues; as a result, these issues are not included in the FAAC recommendation but were discussed in the FAAC report as “other areas of significant discussion.” Obtaining consensus from many stakeholders, including foreign governments.
DOT officials and four FAAC subcommittee members noted challenges in negotiating Open Skies agreements with other countries—a process that can take many years and ultimately depends on securing the agreement of another nation. Two subcommittee members also noted that addressing competitiveness issues involves input from a number of other stakeholders with conflicting viewpoints, including airlines, airports, labor, other government agencies, and elected officials. As previously noted, DOT officials stated that they had established a dialogue with stakeholders regarding impediments to the implementation of alliances around the world, and conducted outreach to get airlines’ and labor stakeholders’ perspectives on DOT’s public interest analysis of alliances. Slot-Controlled Airports: FAA’s Rules Could be Improved to Enhance Competition and Use of Available Capacity. GAO-12-902. Washington, D.C.: September 13, 2012. Airline Industry: Potential Mergers and Acquisitions Driven by Financial and Competitive Pressures. GAO-08-845. Washington, D.C.: July 31, 2008. U.S. Aerospace Industry: Progress in Implementing Aerospace Commission Recommendations, and Remaining Challenges. GAO-06-920. Washington, D.C.: September 13, 2006. Transatlantic Aviation: Effects of Easing Restrictions on U.S.-European Markets. GAO-04-835. Washington, D.C.: July 21, 2004. Issues Relating to Foreign Investment and Control of U.S. Airlines. GAO-04-34R. Washington, D.C.: October 30, 2003. International Aviation: DOT’s Efforts to Promote U.S. Air Cargo Carriers’ Interests. GAO/RCED-97-13. Washington, D.C.: October 18, 1996. Aviation and aerospace employers, including government transportation agencies, airlines, and manufacturers, are facing a number of workforce challenges, such as an aging workforce, a lack of needed skills in the current and future workforce, and the need to adapt to rapidly evolving technology and compete in a global marketplace. 
A number of organizations, including the FAAC, have noted that addressing these challenges will require a focus on science, technology, engineering, and mathematics (STEM) education. The challenges faced by aviation and aerospace employers reflect a larger national trend, as research has shown that the United States lacks a strong pipeline of future workers in STEM fields and that U.S. students continue to lag behind students in other highly technological nations in mathematics and science achievement. Last year, the Office of Management and Budget established a cross-agency priority goal to improve the quality of STEM education at all levels. In May 2012, we reported that by naming STEM education as a cross-agency goal, the administration is taking the first step towards creating a government-wide plan to achieve its goal. However, we also stated that a number of limitations could hamper progress, such as overlapping STEM programs; agencies that did not connect STEM education activities to agency goals in their annual performance plans or measure the progress of their STEM activities; and a lack of information about the effectiveness of STEM programs. We reiterated our prior recommendations. The Committee on Science, Technology, Engineering, and Math Education (CoSTEM) coordinates STEM education efforts across federal agencies, including DOT. In December 2011, CoSTEM published an inventory of the federal STEM education portfolio. In May 2013, CoSTEM released a 5-year strategic federal STEM education plan. DOT officials told us that DOT’s Research and Innovative Technology Administration (RITA) represents DOT on the CoSTEM effort, and participated in the development of an inventory of federal STEM programs and the development of the Federal STEM education plan. The bulk of DOT’s and FAA’s activities that are related to STEM education are conducted through FAA’s Aviation and Space Education (STEM-AVSED) outreach program.
STEM-AVSED’s goals include encouraging students to explore aviation and aerospace career opportunities; promoting the skills and knowledge critical to aviation safety; and increasing awareness and understanding of the agency’s role in aviation and aerospace. DOT and FAA officials have taken steps to address the wide-ranging actions identified in the recommendation by improving internal coordination, collaborating with external stakeholders, and conducting STEM outreach to educational institutions and students through a variety of programs. Table 10 provides a summary of the FAAC recommendation on STEM outreach and DOT’s and FAA’s actions to address it, as of June 2013.
FAAC recommendation—STEM: Consider improving programs and connections with 2- and 4-year educational institutions that give students hands-on experience applicable to the aviation and aerospace workplace.
DOT/FAA actions: FAA officials stated they improved lines of communication with the Centers of Excellence, which award aerospace research grants to colleges. Also, FAA has an MOU with the National Coalition of Certification Centers (NC3), a network of schools and industry leaders that develop aviation certifications, to increase the visibility of technical careers in aviation and aerospace. As part of this effort, FAA and NC3 developed a brochure on aviation careers for college recruiters to share with high school counselors, embarked on a national poster campaign—Yes I Can Do That—and implemented a job shadowing program called Walk in Your Boots.
FAAC recommendation: Establish an award for innovation to recognize persons, businesses, or organizations that develop unique scientific and engineering innovations in aerospace and aviation.
DOT/FAA actions: DOT established the Secretary’s Recognizing Aviation and Aerospace Innovation in Science and Engineering (RAISE) award to recognize innovative scientific and engineering achievements that will have a significant impact on the future of aerospace or aviation.
The award is open to students at the high school, undergraduate, and graduate levels. DOT awarded its first RAISE award in October 2012.
FAAC recommendation: Work with the Secretary of Labor as an integral part of the Interagency Aerospace Revitalization Task Force, originally established in 2006, to implement a national strategy focused on recruiting, training, and cultivating the aerospace workforce. Work with the Department of Education to provide resources that would create state-of-the-art STEM elementary and secondary educational facilities.
DOT/FAA actions: DOT officials noted that while the Interagency Aerospace Revitalization Task Force was disbanded, DOT has held semiannual Aviation Industry Workforce-Management Conferences with the Departments of Labor and Education. In addition, in September 2011, the Secretaries of Transportation, Education, and Labor signed a memorandum of understanding to collaborate on implementing strategies for using STEM education to develop a qualified aerospace workforce. DOT officials stated that the agencies developed an informal work plan, and have coordinated on efforts to provide information on transportation careers for the Department of Labor’s American Job Centers. In addition, as previously mentioned, RITA represents DOT on a federal agency-wide effort to coordinate STEM programs through the Committee on Science, Technology, Engineering, and Math Education.
FAA officials stated it is important that the agency continue to keep a spotlight on STEM education. To maintain that focus, officials stated that the agency could include FAA’s STEM work in FAA’s Business Plan Goals. FAA officials stated they would like to eventually augment the existing STEM-AVSED efforts to target middle and high school students with additional programs for elementary students but have not established time frames to do so. DOT officials noted that they are working to determine how to attract students through STEM programs, as well as attracting students more interested in technical training.
Five of the seven FAAC labor and workforce subcommittee members stated that DOT and FAA had addressed the recommendation, but all of the members stated that DOT’s and FAA’s work on this issue should continue. Four of the subcommittee members praised DOT efforts such as establishing the RAISE award or the Aviation Industry Workforce-Management Conferences. Two of the subcommittee members—one who thought the recommendation was addressed and one who did not—stated that DOT and FAA should conduct outreach outside of Washington, D.C., on workforce and STEM issues and made diverse suggestions, including involving FAA regional offices, hosting regional workforce summits, providing a regional RAISE award, and interacting with students in university and high school settings. One of the FAAC members who did not think the recommendation was addressed stated that DOT should provide more detail on what it is doing and how it is measuring its performance. The seven FAAC subcommittee members we interviewed recognized that STEM education is a broad policy issue, and efforts on this recommendation require collaborating with and maintaining the participation of outside groups, such as other agencies, industry, and other stakeholders. The administration is also taking steps to address these issues through establishing a cross-agency priority goal and creating a 5-year strategic plan for STEM education. As previously noted, RITA has participated in the government-wide effort to coordinate STEM programs. Sustaining long-term efforts. DOT and FAA officials and a FAAC subcommittee member noted that education is an evolving, ongoing effort, for which it can be difficult to sustain interest and support over the long term. They added that funding and limited resources pose challenges to maintaining these efforts, but the agencies work to leverage resources through the partnerships previously noted. Attracting students to aviation careers.
FAA officials stated that attracting students to technical careers can be challenging given misconceptions about aviation maintenance careers. Five of the FAAC subcommittee members also noted the challenges in enticing students into the STEM fields or related aviation careers, and three of the members stated that a viable and sustainable aviation industry would help address this challenge. FAA officials also noted challenges in developing web content that appealed to students and educators while staying within FAA’s branding guidelines, which are typically geared toward FAA’s traditional audience of aviation professionals. However, FAA officials stated they were working across departments within FAA to address these issues. Science, Technology, Engineering, and Mathematics Education: Governmentwide Strategy Needed to Better Manage Overlapping Programs. GAO-13-529T. Washington, D.C.: April 10, 2013. Aviation Safety: Additional FAA Efforts Could Enhance Safety Risk Management. GAO-12-898. Washington, D.C.: September 12, 2012. Managing for Results: GAO’s Work Related to the Interim Crosscutting Priority Goals under the GPRA Modernization Act. GAO-12-620R. Washington, D.C.: May 31, 2012. 2012 Annual Report Opportunities to Reduce Duplication, Overlap and Fragmentation, Achieve Savings, and Enhance Revenue. GAO-12-342SP. Washington, D.C.: February 28, 2012. Science, Technology, Engineering, and Mathematics Education: Strategic Planning Needed to Better Manage Overlapping Programs across Multiple Agencies. GAO-12-108. Washington, D.C.: January 20, 2012. Federal Aviation Administration: Agency Is Taking Steps to Plan for and Train Its Technician Workforce, but a More Strategic Approach Is Warranted. GAO-11-91. Washington, D.C.: October 22, 2010. For decades, FAA and the aviation industry have used data to identify the causes of aviation accidents and incidents and take actions to prevent their recurrence. 
Our prior work has shown that FAA is in the midst of a shift toward a proactive, data-driven safety oversight approach, commonly referred to as a safety management system (SMS) approach. Under this new approach, FAA plans to use aviation safety data to identify system-wide trends in aviation safety and manage emerging hazards before they result in incidents or accidents. To support the analysis of multiple databases, FAA has partnered with industry through the Aviation Safety Information Analysis and Sharing (ASIAS) program. According to FAA documents, ASIAS leverages data from multiple sources, including FAA data sets, airline proprietary safety data, publicly available data, and manufacturer data, allowing FAA to (1) perform integrated queries across multiple databases, (2) search an extensive warehouse of safety data, and (3) display pertinent elements in an array of useful formats. The ASIAS home page shows that as of February 2013, 44 airlines were participating in ASIAS, including 13 of the 14 airlines with at least 1 percent of total domestic scheduled passenger service revenue. FAA is also engaged in two rulemaking processes to require SMS—which will include the development of systems for reporting, tracking, and analyzing safety data—for air carriers and airports. Our prior work has shown that the success of a SMS program depends upon the open sharing of safety information among aviation stakeholders; however, FAA officials have recognized that aviation industry concerns about data protection and legal liability could hinder the implementation of SMS. The FAAC stated that the development, analysis, and availability of shared safety information could be inhibited by the potential that this information may be used for other purposes, such as exposure through the media, admissions in criminal or administrative prosecution, or use in civil litigation.
Table 11 provides a summary of the FAAC recommendation to ensure safety data protections and DOT's and FAA's actions to address it, as of June 2013. FAA officials stated that this work was expected to be completed in October 2013; they also stated they will continue to encourage the voluntary reporting of data into ASIAS. The evolving nature of aviation safety and remaining concerns regarding the potential disclosure of data led to varying opinions on the extent to which the recommendation was addressed. Two subcommittee members said it was addressed, with one citing the provisions protecting safety data from disclosure in the FAA Modernization and Reform Act and another noting that the work is ongoing. Four of the seven FAAC safety subcommittee members did not believe that the recommendation was addressed, but three of them noted that work was ongoing or that safety was an evolving issue. One subcommittee member stated he did not have enough information to comment on the status of this recommendation. Three subcommittee members expressed concern about the potential disclosure of data during criminal and civil litigation, while another member expressed concern about protecting data that operators are mandated to submit—neither of which is addressed by current law, though both were included in the FAAC recommendation. One member noted that FAA needs to better communicate to Congress how the lack of protections could have serious ramifications for the voluntary reporting of safety data. Two subcommittee members suggested additional areas of focus beyond the recommendation, including collecting data on the amount of time pilots spend flying to the city where they begin their work day—an issue discussed in the aftermath of the 2009 Colgan accident—and the need for international harmonization on the definition of legal protections for safety data. Addressing remaining data protection concerns.
FAA officials as well as two FAAC subcommittee members acknowledged industry concern with respect to protecting safety data during criminal and civil litigation and recognized its potential impact on SMS implementation. However, they noted that it will be difficult to determine the extent to which this issue could affect SMS and data sharing until it is tested in court. Four of the subcommittee members stated that the potential for disclosure could also affect operators' or their employees' willingness to voluntarily report safety issues. While protection of airport safety data was not specifically discussed as part of the FAAC's recommendation, we reported in September 2012 that these data are subject to state-specific FOIA laws, which could make air carriers less willing to share safety information with airports. Specifically, while air carriers are not directly subject to state FOIA laws because they are privately owned, data that airports collect and submit to FAA for SMS—such as information on hazards or other safety data—may be subject to public disclosure under state FOIA laws. FAA officials and experts stated that state FOIA laws could affect the willingness of air carriers to share safety data with airports because any data they choose to share with airports could then be subject to these laws. We recommended that the FAA Administrator consider strategies to address airports' concerns, including asking Congress to provide additional protection for SMS data collected by public entities. Officials stated that FAA is working to address this recommendation.

Aviation Safety: FAA Efforts Have Improved Safety but Challenges Remain in Key Areas. GAO-13-442T. Washington, D.C.: April 16, 2013.
Aviation Safety: Additional FAA Efforts Could Enhance Safety Risk Management. GAO-12-898. Washington, D.C.: September 12, 2012.
Aviation Safety: FAA Is Taking Steps to Improve Data, but Challenges for Managing Safety Risks Remain. GAO-12-660T. Washington, D.C.: April 25, 2012.
Aviation Safety: Improved Data Quality and Analysis Capabilities Are Needed as FAA Plans a Risk-Based Approach to Safety Oversight. GAO-10-414. Washington, D.C.: May 6, 2010.

As previously noted, FAA is in the midst of a shift toward a proactive, data-driven safety oversight approach, commonly referred to as a safety management system (SMS) approach. Under this new approach, FAA plans to use aviation safety data to identify system-wide trends in aviation safety and manage emerging hazards before they result in incidents or accidents. The FAAC noted that to develop the robust predictive risk-discovery capabilities needed for SMS, FAA must develop advanced analytical tools and methods, as well as modeling and simulation capabilities. According to FAA, the Aviation Safety Information Analysis and Sharing (ASIAS) program—a joint industry and FAA effort that serves as a central exchange of safety information—is a cornerstone of its effort to implement SMS. According to FAA documents, ASIAS leverages data from multiple sources—including FAA data sets, airline proprietary safety data, publicly available data, and manufacturer data—allowing FAA to (1) perform integrated queries across multiple databases, (2) search an extensive warehouse of safety data, and (3) display pertinent elements in an array of useful formats. The ASIAS home page shows that as of February 2013, 44 airlines were participating in ASIAS. According to the FAAC, the initial results of ASIAS analyses have demonstrated the value of using safety information to produce a system safety baseline. However, the FAAC noted that realization of predictive safety risk-discovery requires investment in expanding, accelerating, and maturing ASIAS capabilities. Table 12 provides a summary of the FAAC recommendation on developing predictive safety risk analysis capabilities and FAA's actions to address it, as of June 2013.
By securing the funds for execution, FAA believes that it has fulfilled the spirit of the recommendation and is positioned to deliver predictive safety risk-discovery capabilities. Officials told us that FAA intends to continue executing its 5-year ASIAS Plan. Not all FAAC safety subcommittee members believed the recommendation was fully addressed; three noted this was due to the ongoing nature of the recommendation but stated that they were satisfied with FAA's actions. For example, one subcommittee member said that developing predictive safety risk-discovery capabilities requires constantly refining and making the capabilities more sophisticated. FAA also faces challenges in merging data from multiple reporting programs to proactively identify risks. According to its ASIAS Plan, FAA will address and monitor these issues regularly as new data sources are added. It plans to develop taxonomies in 2013 and 2014 to better merge data from programs like the Aviation Safety Action Program (ASAP), the Air Traffic Safety Action Program, and the Flight Operational Quality Assurance (FOQA) program. Resource constraints. Three FAAC subcommittee members raised concerns about how budget issues would affect FAA's efforts to implement the recommendation. One subcommittee member noted that making substantive progress on this recommendation will not be possible until budget issues are addressed. According to FAA officials, funding reductions due to sequestration will not have a major effect on the program's development in fiscal year 2013. However, the officials noted that these reductions will delay some specific improvements, such as ASIAS's ability to detect instances where aircraft situational awareness is lost. Analytical capabilities. According to FAA officials, labor agreements and other factors can affect the agency's ability to fully develop the analytical capabilities of ASIAS. For example, the ASIAS Executive Board must authorize analyses of ASIAS data, and in some cases, the members of the board are constrained by labor agreements that limit how their airlines' data can be used.
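The taxonomy work described above—aligning event codes from programs such as ASAP, the Air Traffic Safety Action Program, and FOQA so the data can be merged—can be sketched in simplified form. The event codes, crosswalk, and records below are invented for illustration; they are not FAA's actual taxonomies or data.

```python
# Hypothetical crosswalk from each program's local event code to a shared
# taxonomy category (all codes invented for illustration).
CROSSWALK = {
    ("ASAP", "ALT_DEV"): "altitude_deviation",
    ("ATSAP", "ALT-01"): "altitude_deviation",
    ("FOQA", "EXC_ALT"): "altitude_deviation",
    ("ASAP", "RWY_INC"): "runway_incursion",
}

def normalize(reports):
    """Tag each (program, code, flight) report with its common-taxonomy category."""
    merged = []
    for program, code, flight in reports:
        category = CROSSWALK.get((program, code), "uncategorized")
        merged.append({"program": program, "flight": flight, "category": category})
    return merged

def count_by_category(merged):
    """Count merged reports per common-taxonomy category."""
    counts = {}
    for report in merged:
        counts[report["category"]] = counts.get(report["category"], 0) + 1
    return counts
```

Once reports from the different programs share one category vocabulary, trend queries can run across all of them at once rather than per program, which is the point of the planned taxonomy work.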
FAA officials noted that while they provide the airlines with industry benchmarks from ASIAS so each airline can see how it compares on certain metrics, it is unclear how the airlines are using the data. In addition, FAA officials told us that because some of the identifying information is stripped from the data, FAA is limited in performing analyses that would indicate the relative safety performance of a particular airline. For example, if the information retained its identifiers, FAA would be able to pull other relevant reports associated with a particular flight, merge that data to understand all the relevant factors, and then de-identify the information. As mentioned in Table 12, FAA stated it is working with three airlines to demonstrate the benefits of using this approach.

General Aviation Safety: Additional FAA Efforts Could Help Identify and Mitigate Safety Risks. GAO-13-36. Washington, D.C.: October 4, 2012.
Aviation Safety: Additional FAA Efforts Could Enhance Safety Risk Management. GAO-12-898. Washington, D.C.: September 12, 2012.
Aviation Safety: FAA Is Taking Steps to Improve Data, but Challenges for Managing Safety Risks Remain. GAO-12-660T. Washington, D.C.: April 25, 2012.
Aviation Safety: Improved Data Quality and Analysis Capabilities Are Needed as FAA Plans a Risk-Based Approach to Safety Oversight. GAO-10-414. Washington, D.C.: May 6, 2010.

The safety of the nation's flying public depends, in large part, on the aviation industry's compliance with safety regulations and FAA's enforcement of those regulations when violations occur. As part of FAA's oversight of aviation safety, FAA develops rules and regulations through the rulemaking process. FAA, the National Transportation Safety Board (NTSB), and we have previously expressed concerns about the efficiency and timeliness of FAA's rulemaking efforts.
In 2010, we noted that a number of issues may contribute to a lengthy rulemaking process at FAA, including the need to conduct research, obtain public comment, and provide industry time to comply with the rule. In addition, we reported that external pressures—such as highly publicized accidents, recommendations by NTSB, and congressional mandates—as well as internal pressures, such as changes in management's emphasis, continued to add to and shift the agency's priorities, and that shifting priorities can add to delays. The FAAC report raised similar concerns, stating that the queue of potential rulemaking projects far exceeds FAA's capacity for action in a reasonable time period, and there does not appear to be a universally understood methodology for FAA to ensure the most effective projects receive the highest priority. FAA enforces compliance with its safety regulations through a range of actions, including suspending or revoking operating certificates. However, as previously noted, FAA and the U.S. aviation industry are moving toward the adoption of safety management systems (SMS)—a data-driven, risk-based safety oversight approach. The FAAC report raised concerns that FAA's enforcement policies are not reflective of the shift to SMS, because FAA focuses its enforcement on regulatory non-compliance, regardless of risk level, rather than prioritizing its efforts to address unacceptable risks. Table 13 provides a summary of the FAAC recommendation to review FAA's rulemaking priorities and FAA's actions to address it, as of June 2013. An FAA implementation team is evaluating how to integrate the recommended model into the current rulemaking process without creating unnecessary redundancies. FAA officials stated they requested external feedback on the model through the ARAC at ARAC's June 20, 2013, meeting. According to FAA officials, ARAC members generally support implementation activities, and FAA is considering the feedback ARAC provided on the tool and future ARAC involvement.
FAA plans to beta test the model in June 2013 and fully implement it in fiscal year 2014. FAA officials stated that the current tool is supported by MS Excel, but noted that FAA is developing an automated version of all rulemaking tools and is including the prioritization model in the requirements. Four of the six FAAC safety subcommittee members felt the recommendation was not fully addressed; two members felt it was addressed, although one stated that FAA should take additional actions as its work on this recommendation continues; and one member did not provide an opinion. One of the subcommittee members who stated the recommendation was not addressed, as well as one member who stated it was, suggested that FAA should be more transparent about its process. Another subcommittee member felt that FAA should communicate more with NTSB when prioritizing rulemakings. Another subcommittee member stated FAA was making progress but was unsure where FAA was in the process since receiving the recommendations from the ARAC. The remaining member had concerns beyond prioritization of rulemaking, such as the clarity of FAA rules and the role of external input versus data when crafting rules. FAA will also need to ensure that its prioritization process is appropriately addressing aviation safety issues raised by external parties, such as the public, Congress, the DOT Inspector General, and GAO, without shifting FAA's focus from continuing to address other top priority issues. Culture change. Two other FAAC subcommittee members noted that implementing this recommendation would require a culture change at FAA, which could be difficult. For example, one stressed the need for FAA to be proactive in addressing safety issues, and the other stated that FAA will need to develop a holistic approach toward rulemaking that allows it to address issues quickly.

Federal Rulemaking: Agencies Could Take Additional Steps to Respond to Public Comments. GAO-13-21. Washington, D.C.: December 20, 2012.
Aviation Safety: Additional FAA Efforts Could Enhance Safety Risk Management. GAO-12-898. Washington, D.C.: September 12, 2012.
Aviation Safety: Improved Planning Could Help FAA Address Challenges Related to Winter Weather Operations. GAO-10-678. Washington, D.C.: July 29, 2010.
Aviation Safety: Better Management Controls Are Needed to Improve FAA's Safety Enforcement and Compliance Efforts. GAO-04-646. Washington, D.C.: July 6, 2004.
Aviation Rulemaking: Incomplete Implementation Impaired FAA's Reform Efforts. GAO-01-950T. Washington, D.C.: July 11, 2001.
Aviation Rulemaking: Further Reform Is Needed to Address Long-standing Problems. GAO-01-821. Washington, D.C.: July 9, 2001.

1. Exercise strong national leadership to promote and showcase U.S. aviation as a first user of sustainable alternative fuels.
2. Support research and development related to airframe and engine technologies.
3. Secure operational and infrastructure improvements (NextGen, ground taxi delay management programs, and airport energy efficiency and emissions reduction program).
4. Establish a harmonized approach for aviation carbon dioxide emission reductions.
5. Support extending the alternative minimum tax exemption for airport private activity bonds.
6. Fund accelerated Next Generation Air Transportation System (NextGen) equipage of aircraft.
7. Deliver the benefits of NextGen.
8. Review eligibility criteria for the Airport Improvement Program and Passenger Facility Charge Program.
9. Promote the global competitiveness of the U.S. aviation industry.
10. Commission an independent study of federal aviation taxes and fees.
11. Ensure transparency in ticket pricing, fees, code-share, contracts of carriage, and travel statistics.
12. Support intermodalism by establishing a task force, examining the Essential Air Service Program, and recommending that legislation prioritize intermodal links.
13. Reform the Essential Air Service program.
14.
Continue to be involved in efforts to address jet fuel price volatility.
15. Ensure coordination and focus on science, technology, engineering, and math education programs.
16. Urge the National Mediation Board to implement the Dunlop II recommendations.
17. Implement a semi-annual Aviation Industry Workforce-Management Conference.
18. Seek legal protections for safety data program participants.
19. Support predictive analytic capabilities for safety data and information.
20. Identify new sources of safety data and establish criteria for inclusion in voluntary data-sharing programs.
21. Include safety performance standards and training into NextGen planning and implementation.
22. Review and reprioritize FAA's rulemaking initiatives.
23. Address issues related to child safety for air travel.

The Department of Transportation (DOT) chartered the Future of Aviation Advisory Committee (FAAC) on April 16, 2010, to develop a manageable, actionable list of recommendations for DOT. The FAAC included 19 representatives—1 government official (the DOT Assistant Secretary for Aviation and International Affairs) and 18 non-government representatives—from a cross-section of stakeholders, such as air carriers, airports, airline labor unions, manufacturers, and representatives from the finance community, academia, and passenger interests. The FAAC established five subcommittees to develop recommendations in the five areas of interest specified in the FAAC charter: environment, financing, competitiveness and viability, labor and workforce, and safety. A list of the FAAC members, as well as the subcommittees they served on, is provided in table 14.
In addition to the contact named above, the following individuals made important contributions to this report: Heather Krause (Assistant Director); Amy Abramowitz; Melissa Bodeau; Anne Dore; Kevin Egan; Crystal Huggins; Bert Japikse; Aaron Kaminsky; Bill Keller; Tim Minelli; SaraAnn Moessbauer; Susan Offutt; Paul Revesz; Marylynn Sergent; Gretchen Snoey; Pamela Vines; and Jessica Wintfeld.

The aviation industry is important to the U.S. economy and is a critical link in the nation's transportation infrastructure. However, the industry has faced challenges, such as an outdated national air-traffic management system and an increasingly competitive global market. In 2010, in response to these and other challenges, DOT established the FAAC to develop a manageable, actionable list of recommendations for DOT. In April 2011, the FAAC released a report outlining 23 recommendations in five areas: environment, financing, competitiveness and viability, labor and workforce, and safety. GAO was asked to review the status of DOT's efforts to implement the FAAC recommendations. GAO examined 10 of the FAAC's 23 recommendations to determine (1) DOT's progress in addressing the selected recommendations, and any planned future actions; (2) the FAAC members' perspective on the extent to which DOT's actions address these recommendations; and (3) the challenges, if any, that DOT faces in addressing the recommendations. The 10 selected recommendations covered each of the 5 areas and allowed GAO to leverage ongoing or recent GAO work. GAO did not analyze the validity of the FAAC's recommendations, and our work does not take a position on, or represent an endorsement of, the recommendations. GAO reviewed agency documents and literature, and interviewed FAAC members and DOT and FAA officials. DOT provided technical comments, which were incorporated as appropriate.
While the Department of Transportation (DOT) is not required to implement the Future of Aviation Advisory Committee (FAAC) recommendations, DOT and the Federal Aviation Administration (FAA) have taken actions on the 10 FAAC recommendations that GAO reviewed. DOT and FAA officials noted that they continue to work on three recommendations as part of long-term efforts and have ongoing work related to some of the seven recommendations that they believe are addressed. FAAC members recognized DOT's actions to address the recommendations. However, a majority of the FAAC subcommittee members believe that more work remains to fully address 9 of the 10 recommendations. FAAC members stated that some recommendations may not be fully addressed because they are linked to ongoing efforts that DOT also identified. DOT and FAA officials and FAAC members most frequently identified resource constraints and the need to collaborate with multiple stakeholders as implementation challenges and, in some cases, noted efforts to address these challenges. DOT officials noted that fully addressing some recommendations may depend on factors outside of DOT's control, such as extending the alternative minimum tax exemption, which would require legislation, and developing sustainable alternative fuels, which is a long-term, multi-agency effort.
Three main types of pipelines—gathering, transmission, and distribution—carry hazardous liquid and natural gas from producing wells to end users (residences and businesses) and are managed by about 3,000 operators. Transmission pipelines carry these products, sometimes over hundreds of miles, to communities and large-volume users, such as factories. Transmission pipelines tend to have the largest diameters and operate at the highest pressures of any type of pipeline. PHMSA has estimated there are more than 400,000 miles of hazardous liquid and natural gas transmission pipelines across the United States. PHMSA administers two general sets of pipeline safety requirements and works with state pipeline safety offices to inspect pipelines and enforce the requirements. The first set of requirements is minimum safety standards that cover specifications for the design, construction, testing, inspection, operation, and maintenance of pipelines. The second set is part of a supplemental risk-based regulatory program termed “integrity management.” Under transmission pipeline integrity management programs, operators are required to systematically identify and mitigate risks to pipeline segments that are located in highly populated or environmentally sensitive areas (called “high-consequence areas”). According to PHMSA, industry, and state officials, responding to either a hazardous liquid or natural gas pipeline incident typically includes detecting that an incident has occurred, coordinating with emergency responders, and shutting down the affected pipeline segment. Under PHMSA’s minimum safety standards, operators are required to have a plan that covers these steps for all of their pipeline segments and to follow that plan during an incident. Officials from PHMSA and state pipeline safety offices perform relatively minor roles during an incident, as they rely on operators and emergency responders to take actions to mitigate the consequences of such events. 
Operators must report incidents that meet certain thresholds—including incidents that involve a fatality or injury, excessive property damage or product release, or an emergency shutdown—to the federal National Response Center. Operators must also conduct an investigation to identify the root cause and lessons learned, and report to PHMSA. Federal and state authorities may use their discretion to investigate some incidents, which can involve working with operators to determine the cause of the incident. While prior research shows that most of the fatalities and damage from an incident occur in the first few minutes following a pipeline rupture, operators can reduce some of the consequences by taking actions that include closing valves that are spaced along the pipeline to isolate segments. The amount of time it takes to close a valve depends upon the equipment installed on the pipeline. For example, valves with manual controls (referred to as “manual valves”) require a person to arrive on site and either turn a wheel crank or activate a push-button actuator. Valves that can be closed without a person at the valve’s location (referred to as “automated valves”) include remote-control valves, which can be closed via a command from a control room, and automatic-shutoff valves, which can close without human intervention based on sensor readings. Automated valves generally take less time to close than manual valves. PHMSA’s minimum safety standards dictate the spacing of all valves, regardless of type of equipment installed to close them, while integrity management regulations require that transmission pipeline operators conduct a risk assessment for pipelines in high-consequence areas that includes the consideration of automated valves. 
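The difference in closure times among the valve types just described can be illustrated with a rough sketch. The times below are illustrative assumptions for the sake of comparison, not figures from PHMSA or operators.

```python
def isolation_time_minutes(valve_type, travel_minutes=45):
    """Rough estimate of time to close one valve after an incident is confirmed.

    travel_minutes and the per-type figures are assumed values chosen only to
    show the relative ordering: manual > remote-control > automatic-shutoff.
    """
    if valve_type == "manual":
        # A person must travel to the valve site, then turn a wheel crank or
        # activate a push-button actuator.
        return travel_minutes + 15
    if valve_type == "remote-control":
        # Closed via a command from the control room; no travel required.
        return 5
    if valve_type == "automatic-shutoff":
        # Closes without human intervention based on sensor readings.
        return 1
    raise ValueError(f"unknown valve type: {valve_type}")
```

Even with these invented numbers, the sketch captures the point in the text: for a manual valve the travel time dominates, so automated valves generally isolate a segment much sooner.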
Multiple variables—some controllable by transmission pipeline operators—can influence the ability of operators to respond quickly to an incident, according to PHMSA officials, pipeline safety officials, and industry stakeholders and operators. Ensuring a quick response is important because, according to pipeline operators and industry stakeholders, reducing the amount of time it takes to respond to an incident can reduce the amount of property and environmental damage stemming from an incident and, in some cases, the number of fatalities and injuries. For example, several natural gas pipeline operators noted that a faster incident response time could reduce the amount of property damage from secondary fires (after an initial pipeline rupture) by allowing fire departments to extinguish the fires sooner. In addition, hazardous liquid pipeline operators told us that a faster incident response time could result in lower costs for environmental remediation efforts and less product lost. We identified five variables that can influence incident response time and are within an operator's control, and four other variables that influence a pipeline operator's ability to respond to an incident but are beyond an operator's control. The effect a given variable has on a particular incident response will vary according to the specifics of the situation. The variables within an operator's control include the type of valve installed on the pipeline, the location of qualified operator response personnel, control room management, and relationships with local first responders. The factors beyond an operator's control include weather conditions and other operators' pipelines in the same area. (See table 1 for further detail.) Appendix II provides several examples of response time in past incidents; response time varied from several minutes to days depending on the presence and interaction of the variables just mentioned.
As noted, one variable that influences operators' response times to incidents is the type of valve installed on the pipeline. Research and industry stakeholders indicate that the primary advantage of installing automated valves—as opposed to other safety measures—is related to the time it takes to respond to an incident. Although automated valves cannot mitigate the fatalities, injuries, and damage that occur in an initial blast, quickly isolating the pipeline segment through automated valves can reduce subsequent damage by reducing the amount of hazardous liquid and natural gas released. Research and industry stakeholders also identified two disadvantages operators should consider when determining whether to install automated valves, related to potential accidental closures and the monetary costs of purchasing and installing the equipment. Specifically, automated valves can lead to accidental closures, which can have severe, unintended consequences, including loss of service to residences and businesses. In addition, according to operators, vendors, and contractors, the monetary costs of installing automated valves can range from tens of thousands of dollars to a million dollars per valve, which may be a significant expenditure for some pipeline operators. According to operators and other industry stakeholders, considering monetary costs is important when making decisions to install automated valves because resources spent for this purpose can take away from other pipeline safety efforts. Specifically, operators and industry stakeholders told us they often would rather focus their resources on incident prevention to minimize the risk of an incident than on incident response. PHMSA officials stated that they generally support the idea that pipeline operators be given some flexibility to target spending where the operator believes it will have the most safety benefit.
Research and industry stakeholders also indicate the importance of determining whether to install valves on a case-by-case basis because the advantages and disadvantages can vary considerably based on factors specific to a unique valve location. These sources indicated that the location of the valve, existing shutdown capabilities, proximity of personnel to the valve's location, the likelihood of an ignition, type of product being transported, operating pressure, topography, and pipeline diameter, among other factors, all play a role in determining the extent to which an automated valve would be advantageous. Operators we met with are using a variety of methods for determining whether to install automated valves that consider—on a case-by-case basis—whether these valves will improve response time, the potential for accidental closure, and monetary costs. For example, two natural gas pipeline operators told us that they applied a decision tree analysis to all pipeline segments in highly populated and frequented areas. They used the decision tree to work through a series of yes-or-no questions on whether installing an automated valve would improve response time to less than an hour and provide advantages for locations where people might have difficulty evacuating quickly in the event of a pipeline incident. Other hazardous liquid pipeline operators said they used computer-based spill modeling to determine whether the amount of product released would be significantly reduced by installing an automated valve. In our report, we note that PHMSA has not developed a performance-based framework for incident response times, although some organizations in the pipeline industry have done so. We and others have recommended that the federal government move toward performance-based regulatory approaches to allow those being regulated to determine the most appropriate way to achieve desired, measurable outcomes.
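The yes-or-no decision-tree screening that the two natural gas operators described might look like the following simplified sketch. The one-hour response-time criterion and the attention to hard-to-evacuate locations come from the operators' description above; the ordering of the questions and the additional two-hour threshold are assumptions added for illustration.

```python
def consider_automated_valve(segment):
    """Return True if an automated valve merits further engineering review.

    `segment` is a dict with assumed keys: high_consequence_area (bool),
    manual_response_minutes (int, estimated response time with a manual valve),
    and difficult_evacuation (bool).
    """
    # Only segments in highly populated or frequently used areas are screened.
    if not segment["high_consequence_area"]:
        return False
    # Would an automated valve bring response time under an hour? If the
    # manual response is already under an hour, there is little to gain.
    if segment["manual_response_minutes"] <= 60:
        return False
    # Favor locations where people might have difficulty evacuating quickly.
    if segment["difficult_evacuation"]:
        return True
    # Otherwise (assumed threshold), require a substantial time savings.
    return segment["manual_response_minutes"] > 120
```

A real operator's tree would fold in the other case-by-case factors listed above (existing shutdown capabilities, likelihood of ignition, product type, operating pressure, topography, diameter); the sketch only shows the yes-or-no structure.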
According to our past work, such a framework should include: (1) national goals, (2) performance measures that are linked to those national goals, and (3) appropriate performance targets that promote accountability and allow organizations to track their progress toward goals. While PHMSA has established a national goal for incident response times, it has not linked performance measures or targets to this goal. Specifically, PHMSA directs operators to respond to certain incidents—emergencies that require an immediate response—in a “prompt and effective” manner, but neither PHMSA’s regulations nor its guidance describe ways to measure progress toward meeting this goal. Without a performance measure and target for a prompt and effective incident response, PHMSA cannot quantitatively determine whether an operator meets this goal and track their performance over time. PHMSA officials told us that because pipeline incidents often have unique characteristics, developing a performance measure and associated target for incident response time would be difficult. In particular, it would be challenging to establish a performance measure using incident response time in a way that would always lead to the desired outcome of a prompt and effective response. In addition, officials stated it would be difficult to identify a single response time target for all incidents, as pipeline operators likely should respond to some incidents more quickly than others. Defining performance measures and targets for incident response can be challenging, but one possible way for PHMSA to move toward a more quantifiable, performance-based approach would be to develop strategies to improve incident response based on nationwide data. 
For example, performing an analysis of nationwide incident data—similar to PHMSA's current analyses of fatality and injury data—could help PHMSA determine response times for different types of pipelines (based on characteristics such as location, operating pressure, and diameter); identify trends; and develop strategies to improve incident response. However, we found that PHMSA does not have the reliable nationwide incident response time data it would need to conduct such analyses. Specifically, the response time data PHMSA currently collects are unreliable for two reasons: (1) operators are not required to fill out certain time-related fields in the PHMSA incident-reporting form and (2) when operators do provide these data, they interpret the intended content of the data fields in different ways. Our report recommended that PHMSA improve incident response data and use these data to evaluate whether to implement a performance-based framework for incident response times. PHMSA agreed to consider this recommendation. We also found that PHMSA needs to do a better job of sharing information on ways operators can make decisions to install automated valves. For example, many of the operators we spoke with were unaware of existing PHMSA enforcement and inspection guidance that could be useful for operators in determining whether to install automated valves on transmission pipelines. In addition, while PHMSA inspectors see examples of how operators make decisions to install automated valves during integrity management inspections, they do not formally collect this information or share it with other operators. Given the variety of risk-based methods for making decisions about automated valves across the operators we spoke with, we believe that both operators and inspectors would benefit from exposure to some of the methods used by other operators to make decisions on whether to install automated valves.
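The nationwide analysis described above—grouping incident response times by pipeline characteristics to spot trends—could be sketched as follows. The records and field names are fabricated for illustration, since, as noted, PHMSA's actual response time data are not yet reliable and its incident form fields differ.

```python
from statistics import median

def median_response_by_group(incidents, key):
    """Median response time (minutes) for each value of one pipeline
    characteristic, e.g. diameter class, location type, or pressure band.

    `incidents` is a list of dicts with assumed keys: `response_minutes`
    plus whatever characteristic `key` names.
    """
    groups = {}
    for incident in incidents:
        groups.setdefault(incident[key], []).append(incident["response_minutes"])
    return {value: median(times) for value, times in groups.items()}
```

Run against reliable nationwide data, this kind of grouping could support the performance measures and targets discussed earlier, for example by showing typical response times per pipeline class against which individual operators could be compared.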
Our report recommended that PHMSA share guidance and information on operators’ decision-making approaches to assist operators with these determinations. PHMSA also agreed to consider this recommendation. Chairman Rockefeller, this concludes my prepared remarks. I am happy to respond to any questions that you or other Members of the Committee may have at this time. For questions about this statement, please contact Susan Fleming, Director, Physical Infrastructure, at (202) 512-3824 or [email protected]. In addition, contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this statement include Sara Vermillion (Assistant Director), Sarah Arnett, Melissa Bodeau, Russ Burnett, Matthew Cook, Colin Fallon, Robert Heilman, David Hooper, and Josh Ormond. GAO recently issued two reports related to the safety of certain types of pipelines. The first, GAO-12-388, reported on the safety of gathering pipelines, which currently are largely unregulated by the federal government. The second, GAO-12-389R, reported on the potential safety effects of applying less prescriptive requirements, currently levied on distribution pipelines, to low-stress natural gas transmission pipelines. Further detail on each report is provided below. For the full report text, go to www.gao.gov. Included in the nation’s pipeline network are an estimated 200,000 or more miles of onshore gathering pipelines, which transport products to processing facilities and larger pipelines. Many of these pipelines have not been subject to federal regulation because they are considered less risky due to their generally rural location and low operating pressures. For example, out of the more than 200,000 estimated miles of natural gas gathering pipelines, the Pipeline and Hazardous Materials Safety Administration (PHMSA) regulates roughly 20,000 miles. 
Similarly, of the 30,000 to 40,000 estimated miles of hazardous liquid gathering pipelines, PHMSA regulates about 4,000 miles. While the safety risks of onshore gathering pipelines that are not regulated by PHMSA are generally considered to be lower than for other types of pipelines, PHMSA does not collect comprehensive data to identify the safety risks of unregulated gathering pipelines. Without data on potential risk factors—such as information on construction quality, maintenance practices, location, and pipeline integrity—pipeline safety officials are unable to assess and manage safety risks associated with gathering pipelines. Further, some changes in pipeline operational environments could also increase safety risks for federally unregulated gathering pipelines. Specifically, land-use changes are resulting in development encroaching on existing pipelines, and the increased extraction of oil and natural gas from shale deposits is resulting in the construction of new gathering pipelines, some of which are larger in diameter and operate at higher pressure than older pipelines. As a result, PHMSA is considering collecting data on federally unregulated gathering pipelines. However, the agency’s plans are preliminary, and the extent to which PHMSA will collect data sufficient to evaluate the potential safety risks associated with these pipelines is uncertain. In addition, we found that information sharing among state and federal pipeline safety agencies to ensure the safety of federally unregulated pipelines appeared limited. For example, some state and PHMSA officials we interviewed had limited awareness of safety practices used by other states. Increased communication and information sharing about pipeline safety practices could boost the use of such practices for unregulated pipelines. 
We recommended that PHMSA collect data on federally unregulated onshore hazardous liquid and gas gathering pipelines, subsequent to an analysis of the benefits and industry burdens associated with such data collection. Data collected should be comparable to what PHMSA collects annually from operators of regulated gathering pipelines (e.g., fatalities, injuries, property damage, location, mileage, size, operating pressure, maintenance history, and the causes of incidents and consequences). Also, we recommended that PHMSA establish an online clearinghouse or other resource for states to share information on practices that can help ensure the safety of federally unregulated onshore hazardous liquid and gas gathering pipelines. This resource could include updates on related PHMSA and industry initiatives, guidance, related PHMSA rulemakings, and other information collected or shared by states. PHMSA concurred with our recommendations and is taking steps to implement them. Gas transmission pipelines typically move natural gas across state lines and over long distances, from sources to communities. Transmission pipelines can generally operate at pressures up to 72 percent of specified minimum yield strength (SMYS). By contrast, local distribution pipelines generally operate within state boundaries to receive gas from transmission pipelines and distribute it to commercial and residential end users. Distribution pipelines typically operate well below 20 percent of SMYS. Connecting the long-distance transmission pipelines to the local distribution pipelines are lower stress transmission pipelines that may transport natural gas for several miles at pressures between 20 and 30 percent of SMYS. Applying PHMSA’s distribution integrity management requirements to low-stress transmission pipelines would result in less prescriptive safety requirements for these pipelines. 
Overall, requirements for distribution pipelines are less prescriptive than requirements for transmission pipelines in part because the former operate at lower pressure and pose lower risks in general than the latter. For example, the integrity management regulations for transmission pipelines allow three types of in-depth physical inspection. In contrast, distribution pipeline operators can customize their integrity management programs to the complexity of their systems, including using a broader range of methods for physical inspection. While PHMSA officials stated that “less prescriptive” does not necessarily mean less safe, they also stated that integrity management requirements for distribution pipelines can be more difficult to enforce than integrity management requirements for transmission pipelines. In general, the effect on pipeline safety of changing PHMSA’s requirements for low-stress transmission pipelines is unclear. While the consequences of a low-stress transmission pipeline failure are generally not severe because these pipelines are more likely to leak than rupture, the point at which a gas pipeline fails by rupture is uncertain and depends on a number of factors in addition to pressure, such as the size or type of defect and the materials used to construct the pipeline. In addition, the mileage and location of pipelines that would be affected by such a regulatory change are currently unknown, although PHMSA recently changed its reporting requirements to collect such information. The concern is that because distribution pipelines are located in highly populated areas, the low-stress transmission pipelines that are connected to them could also be located in highly populated areas. As a result, we considered the current regulatory approach of applying more prescriptive transmission pipeline requirements reasonable. 
Operators we spoke with stated that the amount of time it takes to respond to an incident can vary depending on a number of variables (see table 2).

Pipelines are a relatively safe means of transporting natural gas and hazardous liquids; however, catastrophic incidents can and do occur. Such an incident occurred on December 11, 2012, near Sissonville, West Virginia, when a rupture of a natural gas transmission pipeline destroyed or damaged 9 homes and badly damaged a section of Interstate 77. Large-diameter transmission pipelines such as these that carry products over long distances from processing facilities to communities and large-volume users make up more than 400,000 miles of the 2.5 million mile natural gas and hazardous liquid pipeline network in the United States. The Department of Transportation's (DOT) Pipeline and Hazardous Materials Safety Administration (PHMSA), working in conjunction with state pipeline safety offices, oversees this network, which transports about 65 percent of the energy we consume. The best way to ensure the safety of pipelines, and their surrounding communities, is to minimize the possibility of an incident occurring. PHMSA's regulations require pipeline operators to take appropriate preventive measures such as corrosion control and periodic assessments of pipeline integrity. To mitigate the consequences if an incident occurs, operators are also required to develop leak detection and emergency response plans. One mitigation measure operators can take is to install automated valves that, in the event of an incident, close automatically or can be closed remotely by operators in a control room. Such valves have been the topic of several National Transportation Safety Board (NTSB) recommendations since 1971 and a PHMSA report issued in October 2012. 
As mandated in the Pipeline Safety, Regulatory Certainty, and Job Creation Act of 2011, we issued a January 2013 report on the ability of transmission pipeline operators to respond to a hazardous liquid or natural gas release from an existing pipeline segment. This statement is based on this report and addresses (1) variables that influence the ability of transmission pipeline operators to respond to incidents and (2) opportunities to improve these operators' responses to incidents. This statement also provides information from two other recent GAO reports on pipeline safety. Numerous variables--some of which are under operators' control--influence the ability of transmission pipeline operators to respond to incidents. For example, the location of response personnel and the use of manual or automated valves can affect the amount of time it takes for operators to respond to incidents. However, because the advantages and disadvantages of installing an automated valve are closely related to the specifics of the valve's location, it is appropriate that operators decide whether to install automated valves on a case-by-case basis. Several operators we spoke with have developed approaches to evaluate the advantages and disadvantages of installing automated valves, such as using spill-modeling software to estimate the potential amount of product released and extent of damage that would occur in the event of an incident. One method PHMSA could use to improve operator response to incidents is to develop a performance-based approach for incident response times. While defining performance measures and targets for incident response can be challenging, PHMSA could move toward a performance-based approach by evaluating nationwide data to determine response times for different types of pipeline (based on location, operating pressure, and pipeline diameter, among other factors). First, though, PHMSA must improve the data it collects on incident response times. 
These data are not reliable because operators are not required to fill out certain time-related fields in the reporting form and because operators told us they interpret these data fields in different ways. Furthermore, while PHMSA conducts a variety of information-sharing activities, the agency does not formally collect or share evaluation approaches used by operators to decide whether to install automated valves, and not all operators we spoke with were aware of existing PHMSA guidance designed to assist operators in making these decisions. We recommended that PHMSA (1) improve incident response data and use those data to explore the feasibility of developing a performance-based approach for improving operators' responses to pipeline incidents and (2) assist operators in deciding whether to install automated valves by formally collecting and sharing evaluation approaches and ensuring operators are aware of existing guidance. PHMSA agreed to consider these recommendations.
The NCR is a unique regional partnership, in that it is the only region that has a statutorily created and federally funded office devoted solely to supporting coordination and cooperation within the region. Appendix I provides more information about the region and the organizations responsible for supporting preparedness coordination. We have reported in the past on preparedness efforts for the NCR. Our past work for Congress has tracked the evolution and development of increasingly effective efforts to develop a coordinated NCR preparedness strategy, along with some opportunities for continuing improvement in strategy-related efforts. See appendix II for more information about our past NCR work. We have previously identified six characteristics of effective strategies that could be applied to the NCR. We noted that these six characteristics would help to enable a strategy’s implementers to effectively shape policies, programs, priorities, resource allocations, and standards and enable relevant stakeholders to achieve intended results. These characteristics call for strategies to include (1) purpose, scope, and methodology; (2) problem definition and risk assessment; (3) goals, subordinate objectives, activities, and performance measures; (4) resources, investments, and risk management; (5) organizational roles, responsibilities, and coordination; and (6) integration and implementation. More information about the six desirable strategy characteristics and their application to a regional preparedness strategy appears in appendix III. The 2010 NCR strategy addresses why the strategy was produced, the scope of its coverage, and the process by which it was developed. The introduction to the plan specifies that it was produced to help identify the capabilities needed to strengthen the region’s homeland security efforts and to define the framework for achieving those capabilities. 
The scope of the plan, as outlined in the introduction, is strategic investment in new and existing capabilities to help all localities in the NCR prepare for, prevent, protect against, respond to, and recover from all-hazards threats and events. Specifically, the plan’s goals and objectives are designed to build new and expanded capabilities and to ensure maintenance of previous investments. Additionally, the aim of these capabilities, according to the plan, is to help support the localities in the NCR as they execute their operational plans in all phases of homeland security. The plan’s methodology appendix specifies that the effort to produce the 2010 plan started with an NCR partner-led assessment of progress under the 2006 NCR Strategic Plan and stakeholder recommendations on how best to update the goals to reflect current priorities of the NCR. As part of this effort, subject-matter experts identified priority capabilities from the 2010 UASI Investment Justifications that serve as the foundation for the plan’s goals and objectives. Additionally, the appendix outlines how the NCR partners (1) accounted for legislative, policy, and economic factors; (2) facilitated stakeholder engagement; (3) drew on capabilities-based analysis to identify priorities; and (4) designed capability initiatives to be specific and measurable. The 2010 NCR strategy generally addresses the particular problems and threats the strategy is directed towards, and the NCR has undertaken efforts to assess threats, vulnerabilities, and consequences. In our September 2006 statement on NCR strategic planning, we noted that an ongoing risk-assessment methodology is important to help ensure identification of emerging risks. 
It is not clear from the strategy how the NCR plans to update risk information, but according to responsible NCR officials, a regional risk assessment will be conducted every 2-4 years, and during this fiscal year the NCR will be making decisions about the timing and methodology for the next regional risk assessment. In addition, the officials said risk information can enter prioritization decisions as subject matter experts bring to bear their knowledge of critical-infrastructure sector-specific risk assessments and lessons learned from regional and worldwide incidents. The 2010 NCR Strategic Plan includes a profile of the region that details how particular social, economic, and critical-infrastructure factors in the region serve to increase both the threat and consequence components of its profile. For example, the plan’s profile explains that the NCR has more than 340,000 federal workers; 2,000 political, social, and humanitarian nonprofit organizations; more than 20 million tourists per year; 4,000 diplomats at more than 170 embassies; and some of the most important symbols of national sovereignty and democratic heritage. The plan notes that the region needs to be prepared for a variety of threats and challenges. The region has historically experienced, and in some cases routinely experiences, natural events such as ice, snowstorms, and flooding; special events such as international summits, inaugurations, and parades; and human-caused threats such as terrorist attacks. The plan identifies previously conducted risk-assessment efforts that, along with other information, helped inform the identification of priority goals, objectives, and activities. First, the NCR’s Hazard Information and Risk Assessment, conducted in 2006, was used to identify threats and vulnerabilities and then to consider consequences of various incidents. Second, NCRC conducted another assessment—the NCR Strategic Hazards Identification Evaluation for Leadership Decisions (SHIELD)—in 2008. 
NCRC developed SHIELD with input from federal, state, local, and private-sector partners and in collaboration with DHS’s Office of Risk Management and Analysis. SHIELD’s analysis ranks potential critical-infrastructure hazards and provides options for risk reduction, with a focus on probable scenarios for the region. The 2010 NCR strategy addresses what the strategy is trying to achieve, and steps to achieve those results in the next 3 to 5 years; however, the Performance Measurement Plan to help monitor progress toward those results is not expected to be finalized until December 31, 2011. The strategy clearly identifies updated and prioritized goals from the previous version of the strategy. Each of these four goals is accompanied by supporting objectives, which, in turn, are supported by more targeted initiatives. According to the strategy, the goals, objectives, and initiatives were developed by multiple stakeholders, including emergency managers, first responders, health-care officials, and information-technology specialists, among others, and focus on developing and sustaining key capabilities in the region. (A full description of the goals, objectives, and initiatives identified in the 2010 NCR strategy appears in appendix IV.) In our work on desirable strategy characteristics, we reported that identification of priorities, milestones, and performance measures can aid implementing parties in achieving results in specific timeframes—and could enable more effective oversight and accountability. The strategy states that a Performance Measurement Plan will guide monitoring of the strategy’s implementation to evaluate progress in achieving its goals and objectives. NCR provided us with a draft copy of the Performance Measurement Plan, which is currently under development. Our review of this draft showed that the NCR has begun efforts to develop measures. 
While the 2010 plan states that the initiatives it defines are intended to be attained during the next 3 to 5 years, the strategy does not currently communicate specific milestones for achieving the plan’s objectives and initiatives. However, according to NCR officials, with the annual planning and implementation cycle beginning in January 2012, they plan to enter into a new phase of their strategy efforts, designed to make the strategy process more data-driven and project-management focused. According to the officials, this phase entails each objective being assigned a designated leader, who will be responsible for setting milestones and monitoring project plans for achieving his or her objective across the region. The Performance Measurement Plan template information for each initiative includes (1) the strategic goal and objective the initiative supports; (2) a scale to track progress toward achieving the initiative; (3) the initiative’s relationship to DHS’s Target Capabilities List; (4) applicable national standards; and (5) multiple metrics for each initiative to be tracked separately for Maryland, Virginia, and Washington, D.C. For example, in the draft plan, the NCR initiative to “catalog all critical infrastructure and key resources in the NCR and conduct consequence-of-loss analysis” ties in with three separate DHS Target Capabilities and is based on the DHS National Infrastructure Protection Plan’s definition of Tier-2 Critical Assets. It then provides five separate metrics to monitor the identification and documentation of assets, as well as the completion of consequence and loss analyses. A senior official in the NCR said that subject-matter experts are currently completing progress reports on the metrics for each of the initiatives in the strategy. 
The 2010 NCR strategy contains information and processes designed to help address what the strategy will cost, the sources and types of resources and investments needed, and where resources and investments should be targeted based on balancing risk reductions with costs. According to the strategic plan, its implementation will be guided by investment plans that define the activities required to achieve the goals and objectives, and an annual work plan will lay out grant-funded projects needed to complete the investment plans. We have reviewed draft copies of 16 investment plans, which are out for NCR partner comment until December 22, 2011. Our review of the draft investment plans shows that they specify their relationship to the strategic objective they are designed to support, but we did not evaluate how well the specific content of each investment plan is designed to achieve those objectives. In our work on desirable strategy characteristics, we reported that, ideally, a strategy would identify appropriate mechanisms to allocate resources, such as grants, in-kind services, loans, and user fees, based on identified needs. The strategic plan notes that the UASI grant program provides a key source of funding for achieving the priority capabilities in the NCR’s Strategic Plan. The strategic plan’s methodology appendix states that the 2010 UASI Investment Justifications serve as the foundation for the strategic plan’s goals and objectives. In previous NCR work, we raised concerns about NCR’s singular focus on UASI resources. The plan states that the NCR draws upon federal grant programs outside of those provided by DHS, such as public health–related grants from the Department of Health and Human Services and Department of Justice. However, it is not clear that NCR has a systematic process for identifying and allocating funding other than UASI to help achieve priority objectives. 
According to responsible officials, NCR officials coordinate with local, state, and federal jurisdictions to help ensure UASI investments do not duplicate existing federal, state, and local assets. These officials also said the new Management Review Process, set to begin in January 2012, is to help with the identification and documentation of available resources. Similarly, the plan does not identify nonfinancial resources—such as Department of Defense (DOD) NORTHCOM or National Guard Bureau resources—that potentially could support priority objectives. The federal government has an array of resources that can be made available, at request, to assist state and local response. For example, DOD has significant capabilities to augment a federal chemical, biological, radiological, nuclear, and high-yield explosive (CBRNE) response, like those identified in the strategic plan, and also contributes to the organization, training, and equipping of state-controlled military units focused on consequence management. According to the 2010 strategic plan’s methodology appendix, the region’s priorities are informed by risk assessments—specifically SHIELD—gap analyses, after-action reports, and other studies. According to NCR officials, NCR and its jurisdictions coordinate with various DOD organizations to ensure the availability of CBRNE assets. Moreover, they said that subject-matter experts also bring their knowledge of other resources and capabilities to bear during efforts to identify gaps and prioritize resources. However, they acknowledged they have not systematically considered how existing federal capabilities—like DOD resources—relate to efforts to build the capabilities within their priority objectives, but are considering how they might further enhance coordination in the future. We will continue to monitor this issue as we conduct future work on NCR preparedness. The 2010 NCR strategy addresses the roles and responsibilities of the various NCR organizations. 
We previously reported that identifying which organizations will implement the strategy, their roles and responsibilities, and mechanisms for coordinating their efforts helps answer the fundamental question about who is in charge, not only during times of crisis, but also during all phases of preparedness efforts: prevention, vulnerability reduction, and response and recovery. The NCR has responsibility for coordinating information and resources from multiple jurisdictions at the federal, state, and local levels to ensure that strategic goals are met. According to the 2010 NCR strategy, NCR stakeholders have constructed the strategy to complement state and local operational plans. Operational plans remain the responsibility of state and local emergency-management agencies, and state and local emergency-operations plans describe how each jurisdiction will coordinate its response to an event regionally. The Governance appendix to the NCR strategic plan details the various organizations involved in preparedness for all-hazards disasters in the region and their roles and responsibilities. For example, the Emergency Preparedness Council is described as the body that provides oversight of the Regional Emergency Coordination Plan and the NCR Strategic Plan to identify and address gaps in readiness in the NCR, among other responsibilities. Additionally, the appendix lays out the Regional Emergency Support Function committees for functions most frequently used to provide support for disasters and emergencies in the region. According to the plan, representatives from various sectors work together toward building capabilities within each support function and the chairs of the committees provide leadership in identifying gaps in regional capabilities in the committee’s areas of responsibility and identify the need for UASI funds or other resources to address those gaps. 
An example of a Regional Emergency Support Function committee is the Agriculture and Natural Resources Committee, which focuses on nutrition assistance, animal and plant disease and pest response, food safety and security, as well as the safety and well-being of household pets. Finally, the appendix highlights the Regional Programmatic Working Groups, which consist of practitioners, policymakers, and representatives from the government, civic, and private sectors. The groups serve to fill gaps, coordinate across the Regional Emergency Support Functions, and provide more focused attention on high-priority areas. For example, the Exercise and Training Operations Panel Working Group supports training and exercises for all Regional Emergency Support Functions. The 2010 NCR strategy addresses how the plan is intended to integrate with the NCR jurisdictions’ strategies’ goals, objectives, and activities and their plans to implement the strategy. An appendix dedicated to the plan’s alignment with national and state strategic plans lays out how the NCR’s strategic plan aligns with related federal, state, and local strategies, programs and budgets, and emergency plans. The appendix states that the aim of the NCR strategic plan is to align regional strategic planning efforts with federal, state, and local planning efforts by identifying common goals, objectives, and initiatives to be implemented by the region. In addition, it says the strategic plan provides a framework by which state and local entities can plan, resource, and track priority homeland security–related programs and budgets. The NCR faces a significant challenge coordinating federal, state, local, and regional authorities for domestic preparedness activities. 
Due to the size and complexity of the NCR, coordination with relevant jurisdictions may confront challenges related to, among other things, different organizational cultures, varying procedures and work patterns among organizations, and a lack of communication between departments and agencies. A well-defined, comprehensive homeland security strategic plan for the NCR is essential for effectively coordinating investments in capabilities to address the risks that the region faces, and our preliminary observations are that the 2010 Strategic Plan was comprehensively developed. However, we have previously noted that strategies themselves are not endpoints, but rather, starting points. As with any strategic planning effort, implementation is the key. The ultimate measure of value for a strategy is how useful it is as guidance for policymakers and decisionmakers in allocating resources and balancing priorities. The extent to which the plan will be implemented effectively remains to be seen. We will continue to monitor this as part of our ongoing work. Chairmen Akaka and Pryor, Ranking Members Johnson and Paul, and Members of the Subcommittee, this completes my prepared statement. I would be pleased to respond to any questions that you or other Members of the Committee may have at this time. For further information about this statement, please contact William O. Jenkins, Jr., Director, Homeland Security and Justice Issues, at (202) 512-8777 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. In addition to the contact named above, the following individuals from GAO’s Homeland Security and Justice Team also made key contributions to this testimony: Chris Keisling, Assistant Director; Kathryn Godfrey, Susana Kuebler, David Lysy, Linda Miller, and Tracey King. 
The National Capital Region (NCR) is a complex multijurisdictional area comprising the District of Columbia and surrounding counties and cities in the states of Maryland and Virginia (as shown in figure 1) and is home to the federal government, many national landmarks, and military installations. In addition to being the headquarters to all three branches of the federal government, the NCR receives more than 20 million tourists each year. The NCR is the fourth-largest metropolitan area in the country and is also close to other densely populated areas, including Baltimore and Philadelphia. Those living and working in the NCR rely on a variety of critical infrastructure and key resources including transportation, energy, and water. The transportation system contains the nation’s second-largest rail transit and fifth-largest bus systems. The intricate network of major highways and bridges serves the region’s commuters and businesses, and the NCR also has two major airports within its borders. These attributes both heighten the threat and raise the consequences to the region in the event of human-caused incidents. An incident caused by any hazard could result in catastrophic human, political, and economic harm to the region, as well as the entire nation. The Homeland Security Act established the Office of National Capital Region Coordination (NCRC) within the Department of Homeland Security. The NCRC is responsible for overseeing and coordinating federal programs for and relationships with state, local, and regional authorities in the NCR and for assessing and advocating for the resources needed by state, local, and regional authorities in the NCR to implement efforts to secure the homeland, among other things. 
One of the NCRC mandates is to coordinate with federal, state, local, and regional agencies and the private sector in the NCR to ensure adequate planning, information sharing, training, and execution of domestic preparedness activities among these agencies and entities. Figure 2, below, depicts the NCR organizational structure.

GAO product: Homeland Security: Management of First Responder Grants in the National Capital Region Reflects the Need for Coordinated Planning and Performance Goals, GAO-04-433 (Washington, D.C.: May 28, 2004)
Findings and recommendations: The NCR faced several challenges in organizing and implementing efficient and effective regional preparedness programs. These challenges included the lack of a coordinated strategic plan, performance standards, and reliable, centrally sourced data on funds available and the purposes for which they were spent. We concluded that, without these basic elements, it would be difficult to assess first-responder capacities, identify first-responder funding priorities, and evaluate the effective use of federal funds to enhance first-responder capacities and preparedness. We recommended, for example, that the Secretary of Homeland Security work with local National Capital Region (NCR) jurisdictions to develop a coordinated strategic plan to establish goals and priorities. The Department of Homeland Security (DHS) generally agreed with our recommendations, and the NCR finalized its first strategic plan in 2006.

GAO product: Homeland Security: Effective Regional Coordination Can Enhance Emergency Preparedness, GAO-04-1009 (Washington, D.C.: Sept. 15, 2004)
Findings and recommendations: The characteristics of effective regional coordination we previously identified were applicable to the NCR’s efforts to coordinate emergency preparedness. We noted that, if implemented as planned and as observed in its early stage, the NCR’s Urban Area Security Initiative (UASI) program would include a collaborative regional organization. While we remained concerned that the NCR did not include a full array of homeland-security grants in its planning, we reported that the NCR’s UASI program planned to address those issues by identifying non-UASI funding sources and collecting information about the funding allocations, expenditures, and purposes, as well as data on spending by NCR jurisdiction. The NCR is currently planning to implement a process to help ensure identification of other funding resources.

GAO product: Homeland Security: Managing First Responder Grants to Enhance Emergency Preparedness in the National Capital Region, GAO-05-889T (Washington, D.C.: July 14, 2005)
Findings and recommendations: In this statement, we reported on the implementation of the recommendations from our May 2004 report. DHS was working with the NCR jurisdictions to develop a coordinated strategic plan. At that time, we identified the need for the NCR to gather data regarding the funding available and used for implementing the plan and enhancing first-responder capabilities in the NCR—data that were not routinely available. We reported that such data would allow DHS to implement and monitor the future plan, identify and address preparedness gaps, and evaluate the effectiveness of expenditures by conducting assessments based on established guidelines and standards. We remained concerned that no systematic gap analysis had been completed for the region. We noted that the NCR planned to complete an effort to use the Emergency Management Accreditation Program (EMAP) as a means of conducting a gap analysis and assess NCR jurisdictions against EMAP’s national preparedness standards. Since we last reported, the District of Columbia has received its EMAP accreditation.

GAO product: Homeland Security: The Status of Strategic Planning in the National Capital Region, GAO-06-559T (Washington, D.C.: Mar. 29, 2006)
Findings and recommendations: At the time of this report, a completed NCR strategic plan was not yet available.
We identified five areas that would be important for the NCR as it completed a strategic plan. Specifically, we reported that a well-defined, comprehensive strategic plan for the NCR was essential for assuring that the region is prepared for the risks it faces and that the NCR could focus on strengthening (1) initiatives that will accomplish objectives under the NCR strategic goals, (2) performance measures and targets that indicate how the initiatives will accomplish identified strategic goals, (3) milestones or time frames for initiative accomplishment, (4) information on resources and investments for each initiative, and (5) organizational roles, responsibilities, and coordination and integration and implementation plans.

GAO product: Homeland Security: Assessment of the National Capital Region Strategic Plan, GAO-06-1096T (Washington, D.C.: Sept. 28, 2006)
Findings and recommendations: We concluded that the 2006 NCR strategic plan included all six characteristics we consider desirable for a regional homeland-security strategy. To illustrate, the plan includes regional priorities and presents the rationale for the goals and related objectives and initiatives. However, we noted that the substance of the information within these six characteristics could be improved to guide decision makers.

We previously outlined a set of desirable characteristics for strategies involving complex endeavors that require coordination and collaboration among multiple entities. The desirable characteristics are presented in table 1, along with a brief description and the benefit of each characteristic.

Goal: Ensure Interoperable Communications Capabilities
Ensure response partners have the ability to transmit and receive voice, data, and video communications.
Initiatives:
- Increase access to voice systems capable of transmitting and receiving voice information to and from National Capital Region (NCR) response partners.

Objective: Ensure response partners can communicate and share necessary, appropriate data in all environments and on a day-to-day basis.
Initiatives:
- Develop and maintain secure data communications governed by common standards and operating procedures.
- Share Computer Aided Dispatch data between jurisdictions and other related data systems to streamline the process of capturing 911 information and responding to incidents.
- Share Geographic Information System data between jurisdictions and other related data systems.

Objective: Ensure response partners can communicate and share necessary, appropriate video information in all environments on a day-to-day basis.
Initiatives:
- Increase access to video systems capable of transmitting and receiving video information to and from NCR response partners.

Goal: Enhance Information Sharing and Situational Awareness
Ensure NCR partners share the information needed to make informed and timely decisions; take appropriate actions; and communicate accurate, timely information with the public.

Objective: Ensure the public has all information necessary to make appropriate decisions and take protective actions.
Initiatives:
- Improve the dissemination of accurate, timely information to the public using multiple venues, including social media outlets, to ensure that the content of emergency messages and alerts is easily accessible and available to the public.

Objective: Define, obtain, and share appropriate situational information with NCR partners so that they have the necessary information to make informed decisions.
Initiatives:
- Define essential elements of data and information for situational awareness for each discipline and all partners in the NCR. Then develop, maintain, and utilize business practices and common technical standards for situational awareness in order to make informed decisions.

Objective: Improve the NCR’s ability to collect, analyze, share, and integrate intelligence and law enforcement information so that NCR partners receive appropriate information.
Initiatives:
- Ensure all NCR fusion centers share information through secure and open systems, produce relevant and standardized analytical products, and share information in a timely manner with appropriate NCR partners.
- Ensure NCR partners have the systems, processes, security clearances, tools, and procedures to access, gather, and share appropriate intelligence, law enforcement, and classified data.

Goal: Enhance Critical Infrastructure Protection
Enhance the protection and resilience of critical infrastructure and key resources (CI/KR) in the NCR to reduce their vulnerability to disruption from all-hazards events.

Objective: Understand and prioritize risks to CI/KR.
Initiatives:
- Catalog all CI/KR in the NCR and conduct consequence-of-loss analysis.
- Conduct a comprehensive risk analysis of the NCR CI/KR, including a review of the critical systems upon which they depend and the interdependencies of those systems.
- Develop and implement a plan for sharing CI/KR information among public and private entities throughout the NCR.

Objective: Reduce vulnerabilities and enhance resiliency of CI/KR.
Initiatives:
- Develop and implement sector vulnerability-reduction plans.
- Conduct a technology-feasibility assessment and develop a plan for technology investments for CI/KR.
- Develop and implement a cybersecurity plan for NCR critical systems.

Objective: Ensure continuity of critical services required during emergencies and disaster recovery.
Initiatives:
- Identify key facilities throughout the NCR that require backup critical services.
- Assess facilities’ plans for loss of critical services.

Objective: Promote broad participation in CI/KR community outreach and protection programs.
Initiatives:
- Develop a community-awareness training and education program.
- Develop a strategy for using CI/KR data to inform law enforcement.
- Establish a regional business information-sharing committee.

Objective: Monitor critical infrastructure to provide situational awareness and to promote rapid response.
Initiatives:
- Develop and implement a plan for a comprehensive CI/KR monitoring program.
- Develop and implement a plan that integrates CI/KR monitoring information into response operations.

Goal: Ensure Development and Maintenance of Regional Core Capabilities
Develop and maintain the basic building blocks of preparedness and response by ensuring the NCR develops a baseline of capabilities including: Mass Casualty, Health Care System Surge, and Mass Prophylaxis; Mass Care and Evacuation; Citizen Participation, Alert, and Public Information; Chemical, Biological, Radiological, Nuclear, and Explosive Detection and Response; and Planning, Training, and Exercises.

Objective: Ensure that private health care, federal, state, and local public health, and EMS programs and providers in the NCR can increase surge capacity to respond to mass-casualty incidents and events requiring mass prophylaxis.
Initiatives:
- Establish a regional monitoring and response system that allows health and medical-response partners to track patients, hospital bed availability, alerts, and EMS/hospital activity in a shared, secure environment.
- Ensure the ability to track patients from the start of pre-hospital care to discharge from the health-care system during both daily operations and mass-casualty incidents.

Objective: Improve the region’s capacity to evacuate and provide mass care for the public, including special needs individuals, when impacted by an all-hazards event.
Initiatives:
- Develop, coordinate, and integrate local and state evacuation plans so that evacuation policies and routes complement each other to ensure the NCR’s ability to coordinate evacuation across the region.
- Ensure the NCR’s ability to provide sheltering and feeding for the first 72 hours following an incident for individuals in the general population, persons with special needs, persons with special medical needs, and pets.

Objective: Strengthen individual, community, and workplace preparedness for emergency events through public engagement and citizen participation designed to reach the general population and special needs citizens in response to and recovery from all-hazards events.
Initiatives:
- Sustain the NCR’s ability to alert and warn residents, businesses, and visitors using multiple methods, including social media.
- Bolster recruitment, management, and retention of volunteers through Community Emergency Response Team and other Citizen Corps programs, Volunteer Organizations Active in Disaster member agencies, the Medical Reserve Corps, and registration in Emergency System for Advance Registration of Volunteer Health Professionals programs.

Objective: Ensure post-incident human services and recovery assistance throughout the NCR, including case management, emergency housing, behavioral health, spiritual care, and family reunification.

Objective: Ensure the NCR has region-wide capacity to detect, respond, and recover in a timely manner from CBRNE events and other attacks requiring tactical response and technical rescue.
Initiatives:
- Enhance the NCR’s ability to detect chemical, biological, radiological, and other types of contamination.
- Ensure region-wide access to Type 1 hazardous material (HazMat), bomb response/Explosive Ordnance Device units, and tactical teams, and ensure each unit/team is able to respond in a reasonable amount of time.
- Ensure all responders in the NCR have access to personal protective equipment, equipment, and apparatus that match the identified capability needs.
- Establish a regional monitoring and response system that provides health and medical-response partners with central access to biosurveillance.

Objective: Improve capacity to develop and coordinate plans among all NCR partners and ensure the availability of region-wide training and exercise programs to strengthen preparedness, response, and recovery efforts from all-hazards events.
Initiatives:
- Develop and exercise key regional emergency response and recovery plans.
- Ensure regional procedures, memoranda of understanding, and mutual-aid agreements are in place to allow for rapid coordination of resources, including health assets, across jurisdictional boundaries.
- Develop and update a matrix of training and exercises that meet Homeland Security Exercise and Evaluation Program standards needed to maintain core regional capabilities. This matrix should address new and emerging threats and concerns raised in gap analyses and after-action reports from events and exercises.

Although the specific elements needed for situational awareness vary according to the field and area of expertise, the term “situational awareness” in the 2010 strategic plan refers to the ability to identify, monitor, and process important information, understand the interrelatedness of that information and its implications, and apply that understanding to make critical decisions in the present and near future. For example, if the region is threatened by a hurricane, awareness of the status of roads, shelters, traffic, available medical resources, power outages, and the like is important in making decisions about what type of assistance is needed and where it is needed. To coordinate an effective response, NCR partners need to share their information and have access to the information of others. The NCR fusion centers include the Maryland Coordination and Analysis Center, the Washington Regional Threat and Analysis Center, the NCR Intelligence Center, and the Virginia Fusion Center. A fusion center is a physical location where data can be collected from a variety of sources, including but not limited to police departments, fire departments, health departments, and the private sector. Experts analyze the incoming information and create intelligence products, which can be used to maximize resources, streamline operations, and improve the ability to address all-hazards incidents and threats.
Fusion centers help prevent terrorism and criminal activity and support preparedness for man-made and natural hazards, enabling quick and effective response to all-hazards events. Critical services are defined as life-sustainment services during an emergency and include energy (electric power and gas), water supply, transportation, food, and communications. These are all supplied routinely by the CI/KR sectors. During a disaster, providing critical life-sustaining services ensures that government and private health, safety, and emergency services continue, and that plans are in place to compensate for losses among interdependent systems. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

This testimony discusses the status of efforts to enhance emergency preparedness in the National Capital Region (NCR). The NCR is a partnership among the District of Columbia, the State of Maryland, the Commonwealth of Virginia, area local governments, the Department of Homeland Security's (DHS) Office for National Capital Region Coordination (NCRC) within the Federal Emergency Management Agency (FEMA), and nonprofit organizations and private sector interests. The partnership aims to help the region prepare for, prevent, protect against, respond to, and recover from "all-hazards" threats or events. Gridlock and hazardous conditions during recent events like the January 26, 2011, snow and ice storm and the August 23, 2011, earthquake demonstrate the importance of regional communication and coordination in the NCR and show that challenges remain.
Well-crafted and executed operational plans are critical for effective response to emergencies, but sound strategic planning is also important. A coordinated strategy to establish and monitor the achievement of regional goals and priorities is fundamental to enhancing emergency preparedness and response capabilities in the NCR. We reported on this issue repeatedly from 2004 through 2006. This testimony focuses on the extent to which strategic planning for NCR preparedness is consistent with characteristics we have previously identified as desirable for strategies for complex undertakings, such as NCR preparedness. This statement is based on work we recently completed for Congress. The 2010 NCR strategic plan, when accompanied by its supporting documents—investment plans, work plans, and a Performance Management Plan—collectively referred to in this statement as the NCR strategy, is largely consistent with the six characteristics of a strategy that we advocated for complex homeland-security undertakings where multiple organizations must act together to achieve goals and objectives. However, neither the Performance Management Plan nor the investment plans have yet been finalized; decisions remain regarding how the NCR will conduct future regional risk assessments; and it is not clear that the NCR has systematic processes in place to identify the full range of resources available to support its goals. Finally, it is important to keep in mind that strategies themselves are not endpoints, but rather, starting points. As with any strategic planning effort, implementation is the key. The ultimate measure of the 2010 NCR strategy's value is how useful it is as guidance for policymakers and decisionmakers in allocating resources and balancing priorities.
In 1986, the United States, the FSM, and the RMI entered into the original Compact of Free Association. The compact provided a framework for the United States to work toward achieving its three main goals: (1) to secure self-government for the FSM and the RMI, (2) to ensure certain national security rights for all of the parties, and (3) to assist the FSM and the RMI in their efforts to advance economic development and self-sufficiency. Under the original compact, the FSM and RMI also benefited from numerous U.S. federal programs, while citizens of both nations exercised their right under the compact to live and work in the United States as “nonimmigrants” and to stay for long periods of time. Although the first and second goals of the original compact were met, economic self-sufficiency was not achieved. The FSM and the RMI became independent nations in 1978 and 1979, respectively, and the three countries established key defense rights, including securing U.S. access to military facilities on Kwajalein Atoll in the RMI through 2016. The compact’s third goal was to be accomplished primarily through U.S. direct financial assistance to the FSM and the RMI that totaled $2.1 billion from 1987 through 2003. However, estimated FSM and RMI per capita GDP levels at the close of the compact did not exceed, in real terms, those in the early 1990s, although U.S. assistance had maintained income levels that were higher than the two countries could have achieved without support. In addition, we found that the U.S., FSM, and RMI governments provided little accountability over compact expenditures and that many compact-funded projects experienced problems because of poor planning and management, inadequate construction and maintenance, or misuse of funds. In 2003, the United States approved separate amended compacts with the FSM and RMI that (1) continue the defense relationship, including a new agreement providing U.S. 
military access to Kwajalein Atoll in the RMI through 2086; (2) strengthen immigration provisions; and (3) provide an estimated $3.6 billion in financial assistance to both nations from 2004 through 2023, including about $1.5 billion to the RMI (see app. I). The amended compacts identify the additional 20 years of grant assistance as intended to assist the FSM and RMI governments in their efforts to promote the economic advancement and budgetary self-reliance of their people. Financial assistance is provided in the form of annual sector grants and contributions to each nation’s trust fund. The amended compacts and their subsidiary agreements, along with the countries’ development plans, target the grant assistance to six sectors—education, health, public infrastructure, the environment, public sector capacity building, and private sector development—prioritizing two sectors, education and health. To provide increasing U.S. contributions to the FSM’s and the RMI’s trust funds, grant funding decreases annually and will likely result in falling per capita grant assistance over the funding period and relative to the original compact (see fig. 1). For example, in 2004 U.S. dollar terms, FSM per capita grant assistance will fall from around $1,352 in 1987 to around $562 in 2023, and RMI per capita assistance will fall from around $1,170 in 1987 to around $317 in 2023. Under the amended compacts, annual grant assistance is to be made available in accordance with an implementation framework that has several components (see app. II). For example, prior to the annual awarding of compact funds, the countries must submit development plans that identify goals and performance objectives for each sector. The FSM and RMI governments are also required to monitor day-to-day operations of sector grants and activities, submit periodic financial and performance reports for the tracking of progress against goals and objectives, and ensure annual financial and compliance audits. 
In addition, the U.S. and FSM Joint Economic Management Committee (JEMCO) and the U.S. and RMI Joint Economic Management and Financial Accountability Committee (JEMFAC) are to approve annual sector grants and evaluate the countries’ management of the grants and their progress toward compact goals. The amended compacts also provide for the formation of FSM and RMI trust fund committees to, among other things, hire money managers, oversee the respective funds’ operation and investment, and provide annual reports on the effectiveness of the funds. The RMI economy shows limited potential for developing sustainable income sources other than foreign assistance to offset the annual decline in U.S. compact grant assistance. In addition, the RMI has not enacted economic policy reforms needed to improve its growth prospects. The RMI’s economy shows continued dependence on government spending of foreign assistance and limited potential for expanded private sector and remittance income. Since 2000, the estimated public sector share of GDP has grown, with public sector expenditure in 2005—about two-thirds of which is funded by external grants—accounting for about 60 percent of GDP. The RMI’s government budget is characterized by limited tax revenue paired with growing government payrolls. For example, RMI taxes have consistently provided less than 30 percent of total government revenue; however, payroll expenditures have roughly doubled, from around $17 million in 2000 to around $30 million in 2005. The RMI development plan identifies fishing and tourism as key potential private sector growth industries. However, the two industries combined currently provide less than 5 percent of employment, and both industries face significant constraints to growth that stem from structural barriers and a costly business environment. 
According to economic experts, growth in these industries is limited by factors such as geographic isolation, lack of tourism infrastructure, inadequate interisland shipping, a limited pool of skilled labor, and a growing threat of overfishing. Although remittances from emigrants could provide increasing monetary support to the RMI, evidence suggests that RMI emigrants are currently limited in their income-earning opportunities abroad owing to inadequate education and vocational skills. For example, the 2003 U.S. census of RMI migrants in Hawaii, Guam, and the Commonwealth of the Northern Mariana Islands revealed that only 7 percent of those 25 years and older had a college degree and almost half of RMI emigrants lived below the poverty line. Although the RMI has undertaken efforts aimed at economic policy reform, it has made limited progress in implementing key tax, land, foreign investment, and public sector reforms that are needed to improve its growth prospects. For example: The RMI government and economic experts have recognized for several years that the RMI tax system is complex and regressive, taxing on a gross rather than net basis and having weak collection and administrative capacity. Although the RMI has focused on improving tax administration and has raised some penalties and tax levels, legislation for income tax reform has failed and needed changes in government import tax exemptions have not been addressed. In attempts to modernize a complex land tenure system, the RMI has established land registration offices. However, such offices have lacked a systematic method for registering parcels, instead waiting for landowners to voluntarily initiate the process. For example, only five parcels of land in the RMI had been, or were currently being, registered as of June 2006. Continued uncertainties over land ownership and land values create costly disputes, disincentives for investment, and problems regarding the use of land as an asset. 
Economic experts and private sector representatives describe the overall climate for foreign investment in the RMI as complex and nontransparent. Despite attempts to streamline the process, foreign investment regulations remain relatively burdensome, with reported administrative delays and difficulties in obtaining permits for foreign workers. The RMI government has endorsed public sector reform; however, efforts to reduce public sector employment have generally failed, and the government continues to conduct a wide array of commercial enterprises that require subsidies and compete with private enterprises. As of June 2006, the RMI had not prepared a comprehensive policy for public sector enterprise reform. Although the RMI development plan includes objectives for economic reform, until August 2006—two years into the amended compact—JEMFAC did not address the country’s slow progress in implementing these reforms. The RMI has allocated funds to priority sectors, although several factors have hindered its use of the funds to meet long-term development needs. Further, despite actions taken to effectively implement compact grants, administrative challenges have limited its ability to ensure use of the grants for its long-term goals. In addition, although OIA has monitored early compact activities, it has also faced capacity constraints. For 2004-2006, the RMI allocated compact funds largely to priority sectors, directing about 33 percent, 40 percent, and 20 percent of funds to education, infrastructure, and health, respectively (see fig. 2). The education allocation included funding for nine new school construction projects, initiated between October 2003 and July 2006. However, various factors, such as land use issues and inadequate needs assessments, have limited the government’s use of compact funds to meet long-term development needs. For example: Management and land use issues. 
The RMI government and Kwajalein landowners have been disputing the management of public entities and government use of leased land on the atoll. Such tensions have negatively affected the construction of schools and other community development initiatives. For example, the government and landowners disagreed about the management of the entity designated to use the compact funds set aside for Ebeye special needs; consequently, as of September 2006, about $3.3 million of the $5.8 million allocated for this purpose had not been released for the community’s benefit. In addition, although the RMI has completed some infrastructure projects where land titles were clear and long-term leases were available, continuing uncertainty regarding land titles may delay future projects. Lack of planning for declining U.S. assistance. Despite the goal of budgetary self-reliance, the RMI lacks concrete plans for addressing the annual decrement in compact funding, which could limit its ability to sustain current levels of government services in the future. RMI officials told us that they can compensate for the decrement in various ways, such as through the yearly partial adjustment for inflation provided for in the amended compacts or through improved tax collection. However, the partial nature of the adjustment causes the value of the grant to fall in real terms, independent of the decrement, thereby reducing the government’s ability to pay over time for imports, such as energy, pharmaceutical products, and medical equipment. Additionally, the RMI’s slow progress in implementing tax reform will limit its ability to augment tax revenues. The RMI has taken steps to effectively implement compact assistance, but administrative challenges have hindered its ability to ensure use of the funds for its long-term development goals. The RMI established development plans that include strategic goals and objectives for the sectors receiving compact funds. 
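The real-terms erosion caused by a partial inflation adjustment, described above, can be illustrated with a short calculation. The sketch below is purely hypothetical: it assumes, for illustration only, a grant whose nominal amount is raised each year by two-thirds of inflation, and the starting amount and inflation rate are illustrative figures, not the actual compact parameters.

```python
def real_value_path(grant, years, inflation, adjustment_share):
    """Track a grant's value in constant (real) dollars when its nominal
    amount is increased by only a share of each year's inflation."""
    values = [grant]
    nominal = grant
    price_level = 1.0
    for _ in range(years):
        nominal *= 1 + adjustment_share * inflation  # partial nominal bump-up
        price_level *= 1 + inflation                 # prices rise by full inflation
        values.append(nominal / price_level)         # real purchasing power
    return values

# Hypothetical example: a $10 million grant, 3 percent inflation,
# two-thirds partial adjustment versus full adjustment.
partial = real_value_path(10.0, 20, 0.03, 2 / 3)
full = real_value_path(10.0, 20, 0.03, 1.0)
```

Under full adjustment the real value stays flat, while under the partial adjustment it shrinks roughly 1 percent per year; over 20 years the hypothetical grant loses close to a fifth of its purchasing power, even before any scheduled decrement is applied.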
Further, in addition to establishing JEMFAC, the RMI designated the Ministry of Foreign Affairs as its official contact point for compact policy and grant implementation issues. However, data deficiencies, report shortcomings, capacity constraints, and inadequate communication have limited the RMI and U.S. governments’ ability to consistently ensure the effective use of grant funds, measure progress, and monitor day-to-day activities. Data deficiencies. Although the RMI established performance measurement indicators, a lack of complete and reliable data has prevented the use of these indicators to assess progress. For example, the RMI submitted data to JEMFAC for only 15 of the 20 required education performance indicators in 2005, repeating the submission in 2006 without updating the data. Also, in 2005, the RMI government reported difficulty in comparing the health ministry’s 2004 and 2005 performance owing to gaps in reported data—for instance, limited data were available in 2004 for the outer island health care system. Report shortcomings. The usefulness of the RMI’s quarterly performance reports has also been limited by incomplete and inaccurate information. For example, the RMI Ministry of Health’s 2005 fourth-quarter report contained incorrect outpatient numbers for the first three quarters, according to a hospital administrator. Additionally, we found several errors in basic statistics in the RMI quarterly reports for education, and RMI Ministry of Education officials and officials in other sectors told us that they had not been given the opportunity to review the final performance reports compiled by the statistics office prior to submission. Capacity constraints. Staff and skill limitations have constrained the RMI’s ability to provide day-to-day monitoring of sector grant operations. However, the RMI has submitted its single audits on time. 
In addition, although the single audit reports for 2004 and 2005 indicated weaknesses in the RMI’s financial statements and compliance with requirements of major federal programs, the government has developed corrective action plans to address the 2005 findings related to such compliance. Lack of communication. Our interviews with U.S. and RMI department officials, private sector representatives, NGOs, and economic experts revealed a lack of communication and dissemination of information by the U.S. and RMI governments on issues such as JEMFAC decisions, departmental budgets, economic reforms, legislative decisions, and fiscal positions of public enterprises. Such lack of information about government activities creates uncertainty for public, private, and community leaders, which can inhibit grant performance and improvement of social and economic conditions. As administrator of the amended compact grants, OIA monitored sector grant and fiscal performance, assessed RMI compliance with compact conditions, and took action to correct persistent shortcomings. For example, since 2004, OIA has provided technical advice and assistance to help the RMI improve the quality of its financial statements and develop controls to resolve audit findings and prevent recurrences. However, OIA has been constrained in its oversight role owing to staffing challenges and time-consuming demands associated with early compact implementation challenges in the FSM. Market volatility and choice of investment strategy could lead to a wide range of RMI trust fund balances in 2023 (see app. III) and potentially prevent trust fund disbursements in some years. Although the RMI has supplemented its trust fund balance with additional contributions, other sources of income are uncertain or entail risks. Furthermore, the RMI’s trust fund committee has faced challenges in effectively managing the fund’s investment. 
Market volatility and investment strategy could have a considerable impact on projected trust fund balances in 2023. Our analysis indicates that, under various scenarios, the RMI’s trust fund could fall short of the maximum allowed disbursement level—an amount equal to the inflation-adjusted compact grants in 2023—after compact grants end, with the probability of shortfalls increasing over time (see fig. 3). For example, under a moderate investment strategy, there is only about a 10 percent probability that the fund’s income will fall short of the maximum disbursement by 2031. However, this probability rises to almost 40 percent by 2050. Additionally, our analysis indicates a positive probability that the fund will yield no disbursement in some years; under a moderate investment strategy, the probability is around 10 percent by 2050. Despite the potential impact of market volatility and investment strategy, the trust fund committee’s reports have not yet assessed the fund’s adequacy for meeting the RMI’s long-term economic goals. RMI trust fund income could be supplemented from several sources, although this potential is uncertain. For example, the RMI received a commitment from Taiwan to contribute $40 million over 20 years to the RMI trust fund, which improved the RMI fund’s likely capacity for disbursements after 2023. However, the RMI’s limited development prospects constrain its ability to raise tax revenues to supplement the fund’s income. Securitization—issuing bonds against future U.S. contributions—could increase the fund’s earning potential by raising its balances through bond sales. However, securitization could also lead to lower balances and reduced fund income if interest owed on the bonds exceeds investment returns. The RMI trust fund committee has experienced management challenges in establishing the trust fund to maximize earnings. 
Contributions to the trust fund were initially placed in a low-interest savings account and were not invested until 16 months after the initial contribution. As of June 2007, the RMI trust fund committee had not appointed an independent auditor or a money manager to invest the fund according to the proposed investment strategy. U.S. government officials suggested that contractual delays and committee processes for reaching consensus and obtaining administrative support contributed to the time taken to establish and invest funds. As of May 2007, the committee had not yet taken steps to improve these processes. Since enactment of the amended compacts, the U.S. and RMI governments have made efforts to meet new requirements for implementation, performance measurement, and oversight. However, the RMI faces significant challenges in working toward the compact goals of economic advancement and budgetary self-reliance as the compact grants decrease. Largely dependent on government spending of foreign aid, the RMI has limited potential for private sector growth, and its government has made little progress in implementing reforms needed to increase investment opportunities and tax income. In addition, JEMFAC did not address the pace of reform during the first 2 years of compact implementation. Further, both the U.S. and RMI governments have faced significant capacity constraints in ensuring effective implementation of grant funding. The RMI government and JEMFAC have also shown limited commitment to strategically planning for the long-term, effective use of grant assistance or for the budgetary pressure the government will face as compact grants decline. Because the trust fund’s earnings are intended as a main source of U.S. assistance to the RMI after compact grants end, the fund’s potential inadequacy to provide sustainable income in some years could impact the RMI’s ability to provide government services. 
However, the RMI trust fund committee has not assessed the potential status of the fund as an ongoing source of revenue after compact grants end in 2023. Our prior reports on the amended compacts include recommendations that the Secretary of the Interior direct the Deputy Assistant Secretary for Insular Affairs, as chair of the RMI management and trust fund committees, to, among other things, ensure that JEMFAC address the lack of RMI progress in implementing reforms to increase investment and tax income; coordinate with other U.S. agencies on JEMFAC to work with the RMI to establish plans to minimize the impact of declining assistance; coordinate with other U.S. agencies on JEMFAC to work with the RMI to fully develop a reliable mechanism for measuring progress toward compact goals; and ensure the RMI trust fund committee’s assessment and timely reporting of the fund’s likely status as a source of revenue after 2023. Interior generally concurred with our recommendations and has taken actions in response to several of them. For example, in August 2006, JEMFAC discussed the RMI’s slow progress in implementing economic reforms. Additionally, the trust fund committee decided in June 2007 to create a position for handling the administrative duties of the fund. Regarding planning for declining assistance and measuring progress toward compact goals, JEMFAC has not held an annual meeting since the December 2006 publication of the report containing those recommendations. Mr. Chairman and members of the subcommittee, this completes my prepared statement. I would be happy to respond to any questions you may have at this time. For future contacts regarding this testimony, please call David Gootnick at (202) 512-3149 or [email protected]. Individuals making key contributions to this testimony included Emil Friberg, Jr., Ming Chen, Tracy Guerrero, Julie Hirshen, Leslie Holen, Reid Lowe, Mary Moutsos, Kendall Schaefer, and Eddie Uyekawa. 
For both the FSM and the RMI, annual grant amounts include $200,000 to be provided directly by the Secretary of the Interior to the Department of Homeland Security, Federal Emergency Management Agency, for disaster and emergency assistance purposes. The grant amounts do not include the annual audit grant, capped at $500,000, that will be provided to both countries. These dollar amounts shall be adjusted each fiscal year for inflation by the percentage that equals two-thirds of the percentage change in the U.S. gross domestic product implicit price deflator, or 5 percent, whichever is less in any one year, using the beginning of 2004 as a base. Grant funding can be fully adjusted for inflation after 2014, under certain U.S. inflation conditions. “Kwajalein Impact” funding is provided to the RMI government, which in turn compensates Kwajalein Atoll landowners for U.S. access to the atoll for military purposes. [Figure: The FSM and RMI propose grant budgets for each sector, including expenditures, performance goals, and specific performance indicators; a breakdown of personnel expenditures and other costs; and information on U.S. federal programs and other donors. The United States evaluates the proposed sector grant budgets for consistency with funding requirements in the compact and monitors operations to ensure compliance with grant conditions. Each country submits an annual report to the United States.] This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. 
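The partial inflation-adjustment rule described in the notes above (two-thirds of the change in the U.S. GDP implicit price deflator, capped at 5 percent) can be illustrated numerically. The sketch below assumes a hypothetical constant 3 percent annual inflation rate; it is illustrative arithmetic, not actual deflator data or actual grant amounts:

```python
# Illustrative only: how the compact's partial inflation adjustment
# erodes the real value of a grant over time. All figures hypothetical.

def adjustment(deflator_change_pct):
    """Annual grant adjustment: two-thirds of the change in the U.S. GDP
    implicit price deflator, capped at 5 percent."""
    return min((2.0 / 3.0) * deflator_change_pct, 5.0)

grant = 100.0        # hypothetical grant, in index units (2004 = 100)
real_value = 100.0   # grant deflated by full inflation
inflation = 3.0      # assumed constant 3 percent deflator change

for year in range(2005, 2015):
    grant *= 1 + adjustment(inflation) / 100   # grows only 2.0% per year
    real_value = grant / ((1 + inflation / 100) ** (year - 2004))

# With 3% inflation the grant is adjusted by only 2% per year, so its
# real value falls roughly 1% per year even before the annual decrement.
print(round(grant, 1), round(real_value, 1))  # prints: 121.9 90.7
```

This is why the testimony notes that the partial adjustment causes the grant's value to fall in real terms independent of the decrement.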
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. In 2003, the U.S. government extended its economic assistance to the Republic of the Marshall Islands (RMI) through an Amended Compact of Free Association. From 2004 to 2023, the United States will provide an estimated $1.5 billion to the RMI, with annually decreasing grants as well as increasing contributions to a trust fund. The assistance, targeting six sectors, is aimed at assisting the country’s efforts to promote economic advancement and budgetary self-reliance. The trust fund is to be invested and provide income for the RMI after compact grants end. The Department of the Interior (Interior) administers and oversees this assistance. Drawing on prior GAO reports (GAO-05-633, GAO-06-590, GAO-07-163, GAO-07-513, GAO-07-514R), this testimony discusses (1) the RMI’s economic prospects, (2) implementation of the amended compact to meet long-term goals, and (3) potential trust fund earnings. In conducting its prior work, GAO visited the RMI, reviewed reports, interviewed officials and experts, and used a simulation model to project the trust fund’s income. Prior GAO reports recommended, among other things, that Interior work with the RMI to address lack of progress in implementing reforms; plan for declining grants; reliably measure progress; and ensure timely reporting on the fund’s likely status as a source of revenue after 2023. Interior agreed with GAO’s recommendations. The RMI has limited prospects for achieving its long-term development goals and has not enacted policy reforms needed to achieve economic growth. The RMI economy depends on public sector spending of foreign assistance rather than on private sector or remittance income. 
At the same time, the two private sector industries identified as having growth potential--fisheries and tourism--face significant barriers to expansion because of a costly business environment. RMI emigrants also lack marketable skills needed to increase revenue from remittances. Despite declining grants under the compact, RMI progress in implementing key policy reforms to improve the private sector environment, such as tax or land reform, has been slow. In August 2006, the RMI's compact management committee began to address the country's slow progress in implementing reforms. Although the RMI has made progress in implementing compact assistance, it faces several challenges in allocating and using this assistance to support its long-term development goals. RMI grant allocations have reflected compact priorities by targeting health, education, and infrastructure. However, political disagreement over land use and management of public entities has negatively affected infrastructure projects. The RMI also has not planned for the long-term sustainability of services, taking into account declining compact assistance. Inadequate baseline data and incomplete performance reports have further limited the RMI's ability to adequately measure progress. Although single-audit reporting has been timely, insufficient staff and skills have limited the RMI's ability to monitor day-to-day sector grant operations. Interior's Office of Insular Affairs (OIA) has conducted administrative oversight of the sector grants but has been constrained by competing oversight priorities. The RMI trust fund may not provide sustainable income for the country after compact grants end. 
Market volatility and the choice of investment strategy could cause the RMI trust fund balance to vary widely, and there is increasing probability that in some years the trust fund will not reach the maximum disbursement level allowed--an amount equal to the inflation-adjusted compact grants in 2023--or be able to disburse any income. In addition, although the RMI has supplemented its trust fund income with a contribution from Taiwan, other sources of income are uncertain or entail risk. Trust fund management processes have also been problematic; as of June 2007, the RMI trust fund committee had not appointed an independent auditor or a money manager to invest the fund according to the proposed investment strategy.
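GAO's projections of trust fund income were produced with a simulation model. As an illustration of the general technique only, the Monte Carlo sketch below estimates shortfall probabilities under entirely hypothetical return, balance, and disbursement assumptions; none of these parameters are GAO's actual model inputs:

```python
# Hypothetical Monte Carlo sketch of trust fund shortfall probabilities.
# All parameters (returns, balance, disbursement target) are illustrative.
import random

random.seed(0)

MEAN_RETURN = 0.07      # assumed mean annual return, moderate strategy
VOLATILITY = 0.12       # assumed annual standard deviation of returns
START_BALANCE = 700.0   # hypothetical 2023 balance, $ millions
TARGET = 40.0           # hypothetical maximum allowed disbursement, $ millions
TRIALS = 10_000
YEARS = 27              # 2024 through 2050

short_years = 0   # trials whose final-year payout falls below the target
zero_years = 0    # trials whose final-year payout is zero

for _ in range(TRIALS):
    balance = START_BALANCE
    for _ in range(YEARS):
        income = balance * random.gauss(MEAN_RETURN, VOLATILITY)
        payout = min(TARGET, max(income, 0.0))  # pay only out of income
        balance += income - payout
    if payout < TARGET:
        short_years += 1
    if payout == 0.0:
        zero_years += 1

print(f"P(final-year shortfall) ~ {short_years / TRIALS:.0%}")
print(f"P(no final-year disbursement) ~ {zero_years / TRIALS:.0%}")
```

Repeating such a simulation under different return and volatility assumptions is how the wide range of possible 2023 balances, and the rising shortfall probabilities through 2050, can be characterized.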
There are two broad forms of PLAs—national and local. National agreements generally are sponsored by union and industry organizations, which negotiate and sign the agreements in advance of the need for them. National agreements are ready for a contractor’s immediate use on a construction project after approval by the sponsoring organization. In contrast, local agreements result from direct negotiations between contractors and local unions for specific projects. PLAs cover new construction work and maintenance, repairs, and alterations of existing real property. Their provisions generally (1) apply to all work performed under a specific contract or project, or at a specific location; (2) require recognition of the signatory unions as the sole bargaining representatives for covered workers, whether or not the workers are union members; (3) supersede all other collective bargaining agreements; (4) prohibit strikes and lockouts; (5) require hiring through union referral systems; (6) require all subcontractors to become signatory to the agreement; (7) establish uniform work rules covering overtime, working hours, dispute resolution, and other matters; and (8) prescribe craft wages, either in the body of the agreement or in an appendix or attachment. Historically, the use of PLAs on federal and other publicly funded projects dates back to the construction of the Grand Coulee Dam in Washington state in 1938 and the Shasta Dam in California in 1940. During and after World War II, atomic energy and defense construction projects used PLAs. NASA used PLAs in construction at Cape Canaveral, FL, during the 1960s. In addition, the private sector has used PLAs on various projects, including the Trans-Alaska Pipeline and Disney World. More recently, PLAs gained particular public attention when, in July 1992, President Bush seemingly endorsed PLAs by siding with organized labor in litigation before the U.S. Supreme Court over the use of a PLA on the Boston Harbor cleanup project. 
However, in October 1992, he issued an Executive Order forbidding the use of PLAs by any parties to federal or federally funded construction projects. President Clinton revoked that Executive Order in February 1993. In early 1997, President Clinton had planned to issue an Executive Order requiring federal agencies to use PLAs on their construction contracts, but the proposal met with considerable political and industry opposition. Instead, the President issued the June 5, 1997, memorandum, described earlier, which encourages the use of PLAs on contracts over $5 million for the construction of facilities to be owned by a federal department or agency. The memorandum also states that PLAs can be used in other circumstances, such as leasehold arrangements and federally funded projects. The memorandum defines “construction” to include not only new construction but also alteration and repair work. The U.S. Supreme Court’s March 1993 decision in the Boston Harbor case cleared the way for more frequent use of PLAs on public-sector construction projects. A lower court had required the Massachusetts Water Resources Authority, an independent state agency, to clean up pollution in Boston Harbor. The Authority’s contract bid specification for the project required the use of a PLA negotiated between its project manager (a private contractor) and local unions. The bid specification was challenged, and the case ended up before the U.S. Supreme Court, which upheld the use of the PLA. During the 1990s, according to one literature source, the use of PLAs on at least 25 other nonfederal public-sector projects faced court challenges in nine states (see app. III). Most challenges reportedly claimed, among other things, that the use of the PLA violated state or local competitive procurement laws. However, the courts upheld the use of the PLAs in 17 of the 25 cases and invalidated the PLAs in the other 8 cases. 
In addition, there was a court case concerning the PLA at DOE’s Oak Ridge Reservation, Tennessee. The PLA was entered into by DOE’s prime contractor—MK-Ferguson of Oak Ridge Corporation—and the Knoxville Building and Construction Trades Council. In 1992, the Sixth Circuit Court of Appeals concluded that the PLA violated neither the National Labor Relations Act nor the Competition in Contracting Act. The court held that it was unaware of any reason why DOE may not directly, or through an agent, enter into such an agreement, as long as it would be valid if entered into by private parties. We did not find any other court decisions ruling on the legality of PLAs with respect to federal construction contracts. Under the authority of Public Law 85-804, August 28, 1958, as amended, certain federal agencies have extraordinary contracting authority to facilitate the national defense. Those agencies can take procurement actions they deem necessary, without regard to other provisions of law relating to the making, performance, amendment, or modification of contracts. Nine of the 13 federal agencies we reviewed have this authority, but as discussed later in this report, only DOE reported using that authority with regard to PLAs. The lack of available complete data on the use of PLAs precludes an exact count of their total numbers at any level—federal, state government, or private sector. The federal government has no central or agency-specific data system with information about PLAs used on federal construction contracts. In addition, we found no source of complete data on the use of PLAs at the state government or private-sector levels. Certain labor union and industry organizations compile data on standardized national PLAs they sponsor, but they have little or no data on PLAs negotiated locally. 
Nevertheless, our research disclosed that PLAs have been used in all 50 states and the District of Columbia on federal, state, local government, or private sector construction projects, including nonfederal projects that involve federal funds. The Federal Procurement Data System, maintained for OMB by the General Services Administration’s (GSA) Federal Procurement Data Center, contains statistical data about U.S. government executive branch agencies’ procurement contracts awarded since October 1, 1978. However, the Federal Procurement Data System does not collect or report data about PLAs used on federal construction contracts. In addition, OMB and the 13 federal agencies we reviewed reported that they are not aware of any external or internal data systems that report information about PLAs used on federal construction contracts. To respond to our requests, most federal agencies had to canvass their internal procurement organizations to determine any use of PLAs on their construction projects. The Building and Construction Trades Department of the American Federation of Labor-Congress of Industrial Organizations (AFL-CIO) maintains data on at least three current national PLAs that it sponsors (see app. IV). In addition, the National Constructors Association and National Maintenance Agreements Policy Committee, Inc., each sponsors a single national agreement and maintains data on the agreement. The Building and Construction Trades Department also sponsored at least one other national agreement in the past—the Nuclear Power Construction Stabilization Agreement. The five current national agreements cover varying types of construction and maintenance work performed by workers in various craft unions. On May 14, 1997, the Building and Construction Trades Department sent a letter regarding the use of PLAs to the secretaries of its affiliated state and local building and construction trades councils. 
The letter reminded the councils about the Department’s procedures and policies that have been in place, but frequently ignored, since at least 1976. The letter reiterated existing procedures on the use of PLAs and stated that councils ignoring the procedures would be subject to sanctions determined by the Department. In summary, the letter requires local councils to obtain separate written approval from the Department to negotiate or execute any PLA. In addition, the letter transmitted a copy of the Department’s standard PLA to each council, stating that it must be used in the negotiation of all future PLAs. This action has the potential to eliminate ad hoc local PLAs, replace them with a more uniform PLA that local parties can adapt to their projects, and facilitate a more complete database of PLAs. The Building and Construction Trades Department had no comprehensive data on PLAs negotiated locally before May 1997, but it provided examples of a few such PLAs. Overall, our research identified about 90 locally negotiated PLAs used in at least 20 states. However, contractors and labor experts told us that locally negotiated PLAs are used more frequently than national agreements. Therefore, it is likely that there are many more local agreements than those we identified. Possible reasons why the local agreements are not more readily identifiable are that they are common labor-relations tools used in the construction industry and that they are rarely publicized, particularly PLAs used in the private sector. Four of the 13 federal agencies we reviewed have current construction projects using the 26 PLAs that we could identify (see app. V). The four agencies are DOE with 12 PLAs, DOD with 10 PLAs, TVA with 2 PLAs, and NASA with 2 PLAs. Officials at the remaining nine agencies were not aware of any PLAs on their construction contracts. 
However, according to officials at 11 of the 13 agencies, including DOD and NASA, PLAs could be used on their construction projects without the agencies’ knowledge because contractors are not required to report collective bargaining matters to the government. As an example, within DOD, we contacted the Corps of Engineers, the Air Force, and the Navy, and, among them, these agencies identified only one project using a PLA—a Corps project. However, data provided by the Building and Construction Trades Department and the National Constructors Association showed that seven additional current Corps projects and two Air Force projects involved the use of PLAs. We verified with the related agencies or contractors that PLAs were in use on these projects. TVA and DOE appear to be the most actively involved in the use of PLAs. TVA negotiates with the Building and Construction Trades Department and its 15 international unions and also signs agreements requiring that contractors become signatory to the PLAs. PLAs on projects of the other three agencies were negotiated and signed by the contractors and unions. DOE, however, invoked the authority of P.L. 85-804 at four locations, Colorado, Idaho, Nevada, and Washington, to require that all contractors and subcontractors follow certain provisions of the six related PLAs. In addition, bid solicitations by DOE for construction projects made reference to the use of PLAs. The 1997 solicitation for construction of the National Ignition Facility in California stated that a PLA had been established. The 1989 solicitation for a new construction management contractor at the Oak Ridge site in Tennessee required bidders to include plans/alternatives to recognize the PLA already in place at that location. Officials of two of the nine federal agencies with no PLAs that we could identify told us that they considered, but elected not to use, PLAs on recent construction projects. These agencies were GSA and the Department of Labor. 
A GSA official told us that GSA considered requiring the use of a PLA on a courthouse construction project in Boston, MA, because other federally funded projects in the Boston area had used PLAs. However, she said that the agency decided not to require a PLA because it had no reason to believe that a PLA was needed and because the agency believed that a neutral posture should be maintained regarding use of union versus nonunion labor. The Department of Labor, although it did not require a PLA, included the following language in a construction solicitation: “In connection with this solicitation, a responsive bidder may have a Project Labor Agreement (PLA) with its subcontractors. . . . The Employment and Training Administration has a strong interest in ensuring good labor relations to achieve expeditious completion of this project. A PLA is one possible method of meeting this goal. . . .” See appendix VI for more details about the four federal agencies with PLAs identified on current construction projects. Although we could find no centralized, complete source of data on the use of PLAs in the nonfederal public sector, our research disclosed examples of states, counties, and other nonfederal public entities using PLAs on construction projects with and without federal funding. Examples of projects with federal funding include the Boston Harbor cleanup and Central Artery/Tunnel projects in Boston, MA; the Denver International Airport, Denver, CO; and the 38th and Fox Phase IV and the Del Camino Interchange projects for the Colorado State Department of Transportation. The first three projects used locally negotiated PLAs. The contractor on the latter two projects used a national PLA—the Heavy and Highway Construction Project Agreement—that was neither required nor encouraged by the state of Colorado, according to a state official. 
Examples of public projects that used PLAs and, according to Washington and Colorado state officials, involved no federal funds, include the Duwamish River Bridge, the 164th Avenue Interchange, and the SR5 to Blanford Drive projects for the Washington State Department of Transportation; and the McClellan Interchange, the C-470 Yosemite Interchange, and the 125th & Mississippi Avenue Bridge projects for the Colorado State Department of Transportation. State officials said that neither state required or encouraged the use of PLAs on these projects. The contractor in each case used the Heavy and Highway Construction Project Agreement. Other nonfederal public projects with PLAs include the Inland Feeder and Eastside Reservoir Projects for the Metropolitan Water District in Southern California; the Waterfront Park Project for Mercer County, NJ; and the Tappan Zee Bridge Project for the New York State Thruway Authority. All used locally negotiated PLAs. Some labor experts believe that the use of PLAs for public construction projects will increase due in part to the Boston Harbor decision. Since that decision, the governors of four states have issued Executive Orders encouraging the use of PLAs on their states’ public construction projects: Nevada (1994), New Jersey (1994), New York (1997), and Washington (1996). In addition, the mayors of Boston, MA (1997) and Philadelphia, PA (1995) issued similar Executive Orders for their cities’ construction work. At least two other states, Alaska and Illinois, recently considered legislation that would allow their state agencies to enter into or require PLAs on public-works projects, but neither bill had passed at the time of our review. Conversely, in 1995 Utah passed a law that expressly forbids any state agency or political subdivision to require the use of a PLA in connection with any public-works project. 
Although no complete central source of information exists, according to labor experts and union officials, most PLAs are used in the private sector. An official from a large national contractor told us that virtually all of that company’s private-sector domestic work is covered by PLAs. The vast majority of PLAs under the national agreements, discussed earlier, are used on private-sector projects. For example, 93 percent of the projects/contracts under the National Constructors Association’s national PLA are in the private sector. Percentages are similar for known uses of the other national agreements, except for the National Heavy and Highway Construction Project Agreement, which is used predominantly on nonfederal public projects. Our research disclosed few specific examples of locally negotiated, private-sector PLAs. We believe that this may be because the private-sector PLAs receive less publicity than those in the public sector. The latter seem to make news because public funds are involved. Some locally negotiated PLAs used on private-sector projects that we were able to identify include those for Toyota manufacturing plants in Princeton, IN, and Georgetown, KY; a Coil Spring Processing Facility in Spencer County, IN; and a project for Reynolds Metals in Massena, NY. The June 5, 1997, Presidential Memorandum also directed: “The heads of executive departments and agencies covered by this memorandum, in consultation with the Federal Acquisition Regulatory Council, shall establish, within 120 days of the date of this memorandum, appropriate written procedures and criteria for the determinations set forth in section 1.” Six of the 13 federal agencies we reviewed issued some level of guidance on PLA use, generally by the due date. Officials at five of the remaining seven agencies said that they were awaiting related Federal Acquisition Regulation amendments before issuing procedures and guidelines. 
OMB eventually assumed responsibility for assisting the agencies in developing procedures, although the agencies still have the primary responsibility; and on March 12, 1998, OMB sent a draft generic PLA guidance document to officials at DOD, GSA, DOE, and the Department of Labor for comment. The memorandum transmitting the draft guidance states that the draft is not intended to foreclose agency-specific customization and adds that the draft soon may be circulated to agencies to assist them in developing their guidance. The draft guidance does not require agencies to notify the Subcommittee on Oversight and Investigation of information it requested on the future planned use of PLAs, but the draft guidance does provide for the agencies to collect the needed information. According to OMB, this provision was added so that agencies could comply with the Subcommittee’s request. The six agencies that issued some guidance were the Department of Commerce (Commerce), DOD, GSA, the Department of the Interior, NASA, and the Department of Transportation (Transportation). All agencies except Commerce included some or all of the following factors for contracting officials to consider before making a decision on the use of a PLA: (1) the history of labor disputes in the area of the work, (2) whether local collective bargaining agreements with needed crafts are expected to expire during the planned period of the project, (3) the general availability of qualified craft workers in the area, (4) the effect on the government of delays in contract performance, and (5) the probable effect of a PLA on competition. Commerce’s guidelines primarily reiterated the provisions for use of a PLA included in the Presidential Memorandum. Transportation did not issue its own guidelines, but distributed the Presidential Memorandum and GSA’s guidelines to its acquisition personnel via the internet. 
During the course of our review, officials at each of the 13 agencies we reviewed said that they did not expect any changes in the extent of their use of PLAs as a result of the Presidential Memorandum. However, on April 22, 1998, after our field work was completed, the Secretary of Transportation issued a memorandum to the heads of all Transportation agencies strongly encouraging the use of PLAs on agency construction projects as well as projects funded with agency grants. A Transportation official told us that PLA awareness brought about by the Secretary’s memorandum could result in PLAs being used. None of the six agencies’ guidance for the use of PLAs clearly provides for responding to the Subcommittee on Oversight and Investigations’ request to federal agencies that it be notified of any planned use of PLAs. GSA’s initial guidance regarding the use of PLAs made provision for notifying the Subcommittee. That specific provision was later deleted when the agency revised its procedures and criteria to conform with OMB’s draft guidelines, but those revised procedures and criteria call for collection of the data GSA would need to comply with the Subcommittee’s request. Commerce’s procedures and criteria acknowledged the congressional interest in PLAs, but they did not include guidance for providing information to the Subcommittee. It should be noted that, in response to the Subcommittee’s request, agencies may give notice only of PLAs that the agencies themselves require. Therefore, 12 of the 26 PLAs we identified on federal construction projects likely would not have been reported to the Subcommittee because the PLAs were initiated by contractors and not required by the agencies. Proponents and opponents of the use of PLAs said it would be difficult to compare contractor performance on federal projects with and without PLAs because it is highly unlikely that two such projects could be found that were sufficiently similar in cost, size, scope, and timing. 
Also, through our own observations, we know that many of the federal construction projects using PLAs involve unique facilities. For example, the PLAs used by TVA and many used by DOE cover all construction at a given site or sites and involve many contracts. In the case of TVA, work under the PLAs is spread over seven states. In the case of DOE, its various locations have unique missions, facilities, and circumstances. Also, officials at the four federal agencies with current projects using PLAs said they could not readily identify similar projects not using a PLA. In addition, a PLA in use on a project that might be appropriate for comparison with a non-PLA project may not be representative of all PLAs because the specific provisions of PLAs can vary based on local negotiations. Finally, in our opinion, based on varied evaluation experience, any contract performance differences that might be discerned between a project with a PLA and one without a PLA could be attributable to factors other than the PLA. Therefore, drawing definitive conclusions on whether or not the PLA was the cause of any performance differences would be difficult. Nevertheless, our research disclosed three analyses of the costs of a project using a PLA versus not using the PLA on the same project; however, none compared a PLA project with a similar non-PLA project. These analyses are described in this report for information purposes only. We did not verify any of the analyses, nor do we take a position on the validity of the conclusions drawn. The first analysis was done in March 1995 by a local chapter of the Associated Builders and Contractors, East Syracuse, NY. The chapter compared initial estimates and actual bids both with and without a required PLA on a construction project for the New York State Dormitory Authority at the Roswell Park Cancer Institute. This unusual comparison was possible because several contracts were awarded before the PLA became effective. 
The analysis showed that the bids were 26 percent higher after the PLA requirement began than before the requirement existed. In the second case, the New York Thruway Authority hired a consultant to negotiate a PLA for its 4-year project to refurbish the Tappan Zee Bridge. The consultant found that without a PLA, 19 local collective bargaining agreements with varying provisions would apply to the project and estimated that labor costs under the uniform provisions of the PLA would be over $6 million less than labor costs under the 19 separate agreements. The savings represented about 13.5 percent of the $44.7 million estimated total labor costs and about 4.6 percent of the project’s total estimated cost of $130 million. In addition, each of the 19 local agreements would have expired and required renegotiation one or two times during the life of the project. Each expiration represented a potential strike situation. The PLA was adopted in 1994 and survived a court challenge in 1996 based in part on the consultant’s estimated cost savings and on unspecified savings in bridge-toll revenue attributed to the PLA. One of the authority’s key objectives was to avoid work disruptions on this project. The third analysis involved the use of a PLA for constructing the National Ignition Facility at DOE’s Lawrence Livermore National Laboratory, Livermore, CA. A Laboratory official provided us with documents showing that, in January 1997, the project contractor estimated the PLA would save $2.6 to $4.4 million on the $1.2 billion construction project, or less than 0.4 percent, and concluded that these savings alone justified the PLA. Most of the savings resulted from estimated wage differences from using the PLA and involved such items as shift differential, overtime pay, use of apprentices, travel and subsistence pay, and holiday pay. 
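The percentage figures cited in these analyses can be checked with simple arithmetic. A minimal sketch, using only the dollar amounts stated in the report (the small difference from the reported "about 13.5 percent" reflects the consultant's estimate of "over $6 million"):

```python
# Verify the percentage figures cited in the cost analyses.
# All dollar amounts (in millions) are taken from the report text.

# Tappan Zee Bridge PLA (New York Thruway Authority consultant estimate)
labor_savings = 6.0     # estimated labor-cost savings: "over $6 million"
total_labor = 44.7      # estimated total labor costs
total_project = 130.0   # total estimated project cost

share_of_labor = labor_savings / total_labor * 100      # reported as "about 13.5 percent"
share_of_project = labor_savings / total_project * 100  # reported as "about 4.6 percent"

# National Ignition Facility PLA (contractor estimate, January 1997)
nif_savings_high = 4.4    # upper end of the $2.6 to $4.4 million estimate
nif_project = 1200.0      # $1.2 billion construction project
nif_share = nif_savings_high / nif_project * 100  # reported as "less than 0.4 percent"

print(round(share_of_labor, 1), round(share_of_project, 1), round(nif_share, 2))
```

Running this confirms the labor-savings share is roughly 13.4 percent, the project share 4.6 percent, and the NIF share about 0.37 percent, consistent with the figures cited above.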
For example, use of the PLA reportedly resulted in employing more apprentices and fewer, higher-paid journeymen on the project than would have been the case without the PLA. We requested comments on a draft of this report from OMB and the 13 federal agencies selected for review. GSA’s Deputy Associate Administrator for Acquisition Policy; NASA’s Acting Deputy Administrator; DOE’s Director, Office of Worker and Community Transition; and the Department of Agriculture’s Chief, Procurement Policy Division provided written technical comments that we incorporated in the report, as appropriate. Program officials from the Departments of Veterans Affairs, the Interior, Justice, Health and Human Services, and Commerce responded orally that they generally agreed with reported information and had no specific comments. Officials from OMB, TVA, and the Departments of Labor and Transportation provided oral technical comments that we incorporated in this report as appropriate. A Program Analyst in DOD’s Office of the Under Secretary of Defense for Acquisition and Technology provided oral comments that are discussed below. In addition, we asked the following organizations to verify that we correctly reported data they provided: the AFL-CIO’s Building and Construction Trades Department, the Associated Builders and Contractors, Inc., the Associated General Contractors of America, and the National Constructors Association. The President of the Building and Construction Trades Department and the Counsel, Labor and Employment Law, Associated General Contractors of America, provided written comments that are discussed below. Officials from the other two organizations provided oral technical comments that we have incorporated in this report as appropriate. DOD raised three points. First, it noted that the draft report’s definition of a PLA was much more encompassing than that in the Presidential Memorandum. 
The reason is that, in practice, PLAs cover maintenance, modification, and repair work in addition to new construction, and our objective was to gather information on all forms of PLAs. Second, DOD questioned whether three agreements we classified as PLAs were in fact PLAs. We reevaluated these three cases and concluded that one—a Navy construction project—is not a PLA, and we excluded it from the final report; but the other two—Air Force projects—were PLAs, although the projects primarily involved maintenance activities rather than new construction. Third, DOD cautioned that in considering use of a PLA, government personnel must be careful not to act in a way that would be inconsistent with existing laws. We agree. The Building and Construction Trades Department stated that it found nothing in the draft report that is incorrect concerning the information that it provided but had three further significant comments. First, the Department expressed concern that the draft report did not fully or fairly reflect the benefits offered by PLAs or the extent of their use. The Department cited several benefits it believes PLAs provide. Although our report noted most of these perceived benefits, it did not include all of the ones cited by the Department, such as (1) joint labor-management safety training programs and (2) joint labor-management dispute resolution procedures for all labor and employment disputes affecting craft personnel. As the report states, our listing of perceived benefits and perceived disadvantages was intended to be illustrative, not exhaustive. We did not make any changes to the report to reflect a higher extent of PLA use because it already reflected all the uses the Department and others provided, and the Department did not provide any additional data on PLA usage. 
The Department also noted its disagreement with opponents’ views that PLAs increase costs and decrease competition; however, these matters were beyond the scope of our review and are not discussed in the report. Second, the Department stated that our description of the Associated Builders and Contractors of America should show that the organization is a leading opponent of PLAs. We agree and revised the report accordingly. Third, the Department stated that the report provides a misleading picture of PLA case law in New York state. It refers to at least two of the reported court decisions where PLAs were overturned, saying that one was later reversed by a higher court and suggesting that the other decision was rendered moot by a later decision on another case. Although we confirmed that the decision for one of these cases was reversed by a higher court and noted this in our report, we did not change our report to address the second case because the Department did not provide any specific information on the case. In general, we identified the reported state cases from available literature and did not determine the ultimate disposition of the state cases beyond the information that was available in the literature we reviewed. We clarified our report to provide additional emphasis to this aspect of our methodology. The Associated General Contractors of America made three main points. First, it emphasized its opposition to the mandated use of PLAs on public projects and cited certain disadvantages of PLAs that were not included in our report. 
For example, the organization said that (1) public owners lack needed experience for negotiating PLAs with unions, which it believes results in agreements more favorable to the unions than the public owners or contractors; (2) PLAs can only increase not decrease wages and benefits on any project subject to the Davis-Bacon Act; and (3) PLAs create inefficiencies by eliminating contractors’ flexibility to employ and deploy multiskilled and semiskilled personnel and by requiring that contractors contribute to union benefit funds, which may be in addition to contributions to their own benefit plans. As we previously said, our intent was to provide examples of advantages and disadvantages of PLAs purported by PLA proponents and opponents. We did not set out to provide an exhaustive list of either, or to make an assessment of the advantages or disadvantages of PLAs. Second, the organization disagreed with the draft report statement that the U.S. Supreme Court’s decision in the Boston Harbor case “cleared the way for more frequent use of PLAs on public-sector construction projects.” The Boston Harbor decision did clear the way for further use of PLAs on public-sector construction projects because it overruled the First Circuit’s decision that had enjoined the use of a PLA in a public-sector construction project. The Supreme Court upheld the state agency’s right to require contractors to agree to be bound by a PLA. The organization also states that the Boston Harbor case did not address the legality of PLAs in the context of the Employee Retirement Income Security Act, anti-trust laws, or competitive bidding statutes. However, our report does not discuss or attempt to predict how future challenges to PLAs would be decided by the courts under those laws. We only note that the Supreme Court has upheld a public agency’s bid specification requiring contractors on a public construction project to agree to abide by a PLA negotiated by its project manager and labor. 
Third, the organization said that we should state in our report the basis for our statement that many of the PLAs used on federal contracts were initiated by contractors. We based this statement on what agency officials and contractors told us and modified our report to reflect this. We are sending copies of this report to the Ranking Minority Members of your Subcommittees and the Chairman and Ranking Minority Member of the Senate Committee on Labor and Human Resources. We also will send copies to the Director, OMB; the head of each of the 13 agencies included in our review; and the other organizations we contacted. Also, we will make copies available to others on request. Major contributors to this report were Sherrill H. Johnson, Assistant Director; Louis G. Tutt, Evaluator-in-Charge; Billy W. Scott and David W. Bennett, Senior Evaluators; Victor B. Goddard, Senior Attorney; and Hazel J. Bailey, Communications Analyst. Please contact me on (202) 512-4232 if you or your staff have any questions. To determine the extent to which project labor agreements (PLA) are used in the federal government, agencies’ responses to the June 5, 1997, Presidential Memorandum encouraging federal agencies to use PLAs, and agencies’ plans for responding to the Subcommittee on Oversight and Investigations’ continuing request to be notified of planned PLA uses, we contacted the Office of Management and Budget (OMB) and the 13 federal agencies that accounted for more than 98 percent of the total construction obligations in fiscal year 1996, as reported in the Federal Procurement Data System (see app. I). Each agency responded to our written request for various data regarding PLAs, including identification of any PLAs currently in effect. We interviewed officials at each agency regarding PLAs in general and PLAs identified. 
We also visited three locations where current federal projects have PLAs: Oak Ridge, Tennessee, the Department of Energy; Knoxville, Tennessee, the Tennessee Valley Authority; and Houston, Texas, the National Aeronautics and Space Administration. To obtain information on the use of PLAs in the private sector and the nonfederal public sector, we contacted several contractors, industry associations, union officials, state agencies, and private-sector labor experts. We judgmentally selected contractors based on their known participation in federal construction projects and their known use of PLAs. The industry associations were selected because they represent union and nonunion contractors in the construction industry. The union officials were from the American Federation of Labor-Congress of Industrial Organizations’ (AFL-CIO) Building and Construction Trades Department that represents 15 construction craft unions. We contacted state agency officials in Arizona, Colorado, Florida, Nevada, New York, Pennsylvania, and Washington primarily because they were among the largest recipients of federal highway funds in 1996 or because they had projects known to have PLAs. We selected private labor experts because of their involvement in the debate on PLAs. In addition, we performed literature and internet searches to identify specific projects with PLAs and to develop general subject matter background. We limited verification of data to confirmation of current PLAs on federal construction projects, whether identified by the agencies awarding the contracts or other sources. We did not verify data generated by the Federal Procurement Data System nor did we make any independent assessment of the advantages or disadvantages of PLAs. To evaluate the feasibility of comparing contractor performance on federal construction projects done with and without PLAs, we asked each agency with PLAs to identify any similar non-PLA projects. 
We also asked contractors, industry associations, and private labor experts for any known studies or methodologies for comparing federal projects with and without PLAs. We requested comments on a draft of this report from OMB and each of the 13 federal agencies we reviewed. We also sent the draft to the Building and Construction Trades Department, the National Constructors Association, the Associated Builders and Contractors, Inc., and the Associated General Contractors of America and asked them to verify that we correctly reported data that they provided. At the end of this letter, we present and evaluate comments we received. We also made changes in the letter, where appropriate, to reflect these comments and the technical comments that were provided. We did our work from August 1997 to March 1998 in accordance with generally accepted government auditing standards. [Appendix table: agreement types and current contractors on federal projects with PLAs] Agreement types: Heavy and Highway Construction Project Agreement (for heavy and highway construction, improvements, modifications, or repairs); General Presidents Project Maintenance Agreement (for maintenance and repair of existing facilities); National Maintenance Agreement (for maintenance and repair of existing facilities); National Construction Stabilization Agreement (for construction of industrial operating and/or manufacturing facilities); Building and Construction Trades Department Standard Project Labor Agreement (for new construction work). Current contractors: Lockheed Martin Idaho Technologies Co.; Bechtel Nevada Corp. (three PLAs); Parsons Constructors Inc.; BNFL, Inc.; Fluor Daniel Hanford/Fluor Daniel Northwest; Kaiser-Hill Co. L.L.C.; Bechtel Savannah River, Inc.; Department of Defense, U.S. Army Corps of Engineers (8 PLAs): Atkinson-Dillingham-Lane (Joint Venture); Department of Defense, U.S. Air Force (2 PLAs): Management Logistics, Inc.; L.E. Myers Co. and John W. Cates Construction Co.; NPS Energy Services, Inc., GUBMK Constructors, and Stone and Webster Engineering Corp.; National Aeronautics and Space Administration (2 PLAs): EG&G Langley, Inc. The Department of Energy (DOE) has no data system that reports information on its PLAs, but our research disclosed that construction projects of DOE’s predecessor agency, the Atomic Energy Commission, likely had PLAs as early as the 1940s. Currently, DOE has construction projects in nine states with at least 12 PLAs. The nine states include California, Colorado, Idaho, Missouri, Nevada, Ohio, South Carolina, Tennessee, and Washington (see app. V). The oldest of these PLAs has been in effect at DOE’s Oak Ridge Reservation, Tennessee, since the mid-1950s. Similarly, PLAs have been in effect at the Nevada Test Site since the mid-1960s and at the Rocky Flats Environmental Technology Site, Colorado, since the early 1970s. DOE is not signatory to any of the 12 current PLAs, but the agency has effectively sanctioned 6 of the agreements by invoking the authority of P.L. 85-804 to require that all contractors and subcontractors adhere to specific provisions of those agreements. The six PLAs include three at the Nevada Test Site and one each at the Colorado Site, the Idaho National Engineering Environmental Laboratory, and the Hanford Site, Washington. DOE officials said that the primary reasons for PLAs on their construction projects are to (1) prevent work stoppages (strikes and slowdowns); (2) ensure access to a skilled, qualified workforce, with needed security clearances; and (3) provide cost and wage stability. Two of the 12 PLAs on current DOE projects cover maintenance and repair work: one at the Weldon Spring Site, Missouri, and the other at the Nevada Test Site. 
Another current PLA covers conventional construction work on the National Ignition Facility at the Lawrence Livermore National Laboratory, California, while the newest PLA covers specific construction related to decontamination and decommissioning work at the K-25 Site on the Oak Ridge Reservation. The remaining eight PLAs cover all new construction work at the respective sites where they apply. The PLA at the Weldon Spring Site is the National Maintenance Agreement, which is sponsored by the National Maintenance Agreements Policy Committee, Inc. and requires union membership as a condition of employment. The remaining 11 PLAs include 10 negotiated locally between DOE’s contractors and local unions and 1 (the Hanford Site PLA) negotiated between DOE’s contractors and the international unions. The Department of Defense (DOD) has no central database with information on PLAs. Our research showed evidence that PLAs were used in the construction of various military installations, missile sites, and other defense facilities as far back as World War II. In addition, data from the Building and Construction Trades Department of the American Federation of Labor-Congress of Industrial Organizations showed that General Presidents Project Maintenance Agreements were used on at least six Air Force contracts during the 1970s and 1980s. Currently, 10 DOD construction projects have PLAs that we could identify. According to contractors and agency officials, all 10 PLAs were initiated by the contractors. The oldest of these PLAs have been in effect at Falcon Air Force Base, Colorado, since the mid-1980s. We requested data on the use of PLAs from the U.S. Army Corps of Engineers, the U.S. Air Force, and the Naval Facilities Engineering Command. In the initial responses to our requests, the Corps of Engineers identified one current construction project with a PLA, while the Air Force and Navy identified none. 
Each said it does not require PLAs and that its contractors could have PLAs unknown to the agencies. “Without examining each and every contract for construction . . . we would be unable to provide to you information on whether or not the Naval Facilities Engineering Command ever awarded a contract with a contract requirement for a project labor agreement. The several headquarters and field activity contracting personnel . . . contacted with regard to your inquiry did not recollect any occasion where this Command would have included such a requirement in a solicitation for construction.” “[We] are unable to determine information concerning the use of project labor agreements negotiated by a contractor during the performance of construction contracts . . . since contractors are not required to report their collective bargaining agreements to the government . . . .” “[I]t is likely that some of our contractors elected to use project labor agreements and in fact negotiated such agreements applicable to the workers performing on their contract. Without an extensive, time-consuming survey of all of our construction contractors performing at present . . . and extensive research to identify contractors who performed on closed contracts, we would be unable to provide the information you have requested.” The Tennessee Valley Authority (TVA) has no data system that reports information on its PLAs. However, we found evidence that from 1988 to 1991, a TVA contractor used the General Presidents Project Maintenance Agreement at four TVA locations in Alabama and Tennessee. Since 1991, TVA has had two PLAs that cover contracts for construction and maintenance work in its seven-state coverage area. Previously, that work was managed by TVA and performed primarily by an in-house workforce represented by 15 craft unions. 
TVA is unique among the four federal agencies with projects that have PLAs, in that it negotiates the PLAs and has agreed to require that certain contractors become signatory to these PLAs; and a TVA official told us that the agency believes the U.S. Supreme Court decision in the Boston Harbor case supports its authority to require use of PLAs. Also, according to a TVA official, deregulation of the utility industry increased competitive pressures and forced TVA to cut costs. The official said that TVA realized its in-house construction management and safety record needed improvement, and it began downsizing and restructuring. In 1991, TVA signed the two PLAs and engaged private contractors to manage construction and maintenance work under them. The in-house workforce was reduced to include primarily operational crafts representing six unions while most construction, maintenance, and modification work was contracted out under the two PLAs. TVA officials also told us that the primary reasons for using PLAs at TVA are to ensure harmonious labor relations, avoid work stoppages, and ensure an adequate supply of skilled labor. One PLA covers construction at new or existing plant sites directly related to new generating capacity or power transmission, and the construction, modification, or addition to offices, other buildings or facilities. The other PLA covers maintenance, renovation, modification, addition, and/or repair to existing plants and transmission facilities that do not involve the addition of new power capacity. All companies working on construction of new generating capacity or transmission construction for the Nuclear or Fossil and Hydro groups must become signatory to the construction PLA. Otherwise, both of TVA’s PLAs generally require that only companies receiving contracts over $250,000 must become signatory to the appropriate PLA. 
However, in 1994, an additional threshold of $350,000 was added to each PLA for contracts relating to work for TVA’s Transmission and Power Supply Group. Also, each contractor must ensure that its subcontractors become signatory to the appropriate PLA except for those performing specialty work or those with subcontracts for $100,000 or less. The various dollar thresholds exist, in part, to help ensure that businesses within the TVA power service area, and small, disadvantaged, minority- or woman-owned businesses have an opportunity to compete for TVA work. About 90 to 95 percent of TVA’s construction dollars are awarded to contractors who are signatory to the two PLAs. According to TVA officials, TVA is not subject to the Davis-Bacon Act wage rates that normally apply to federal construction contracts. Instead, section 3 of the TVA Act requires TVA to include a prevailing wage provision in covered contracts. Pursuant to section 3, TVA conducts its own analysis of prevailing wage rates and negotiates those rates annually with the Trades and Labor Council, which is composed of the 15 unions that signed the PLAs. TVA uses 15 factors in determining its wage rates, including union wages paid in 13 cities, Davis-Bacon wages, and wages at various major projects. Prior to 1991, TVA used this system to determine wage rates for its in-house craft union workers. Each year, TVA uses this wage survey in negotiating PLA wage rates with the unions and contractors. Any union that disagrees with TVA’s wage determinations may appeal to the Department of Labor. TVA officials told us there is about one appeal each year. The National Aeronautics and Space Administration (NASA) has no database showing information on its use of PLAs, but sources outside the agency indicate that PLAs were used in the construction of NASA facilities at Cape Canaveral in the 1960s. 
We confirmed that a form of PLA is being used on two current NASA contracts, one at the Johnson Space Center, Texas, and one at the Langley Research Center, Virginia. Each PLA covers maintenance and operations of facilities, rather than new construction work; and, according to agency officials, each was initiated by the local contractor, not by NASA. According to NASA officials, each contractor’s primary reason for using a PLA was to ensure labor and wage stability. The General Presidents Project Maintenance Agreement is being used at Johnson Space Center, and the Building and Construction Trades Department’s data show that it has been in place since 1973. The PLA at the Langley Research Center was negotiated locally between the contractor and local unions, and it began in 1989. 
Pursuant to a congressional request, GAO provided information on the use of project labor agreements (PLA) on federal construction contracts and related matters, focusing on the: (1) number of federal construction and other projects where PLAs were used and the extent to which PLAs have been used on projects sponsored by nonfederal organizations, including public projects with some federal funding; (2) procedures and criteria for using PLAs established by federal agencies, as required by a Presidential Memorandum that encourages federal agencies to use PLAs on construction contracts over $5 million; (3) federal agency procedures established to comply with a letter from the Chairman of the House Committee on Education and the Workforce, Subcommittee on Oversight and Investigations, to federal agencies requesting them to notify his subcommittee of the planned use of PLAs; and (4) feasibility of comparing contractor performance under federal construction contracts with and without PLAs. GAO noted that: (1) the total number of PLAs in use is unknown because there is no complete or comprehensive database on the use of PLAs in the public or private sectors; (2) union and industry organizations maintain data on certain PLAs that they negotiated at the national level, but there were no comparable data on ad hoc PLAs negotiated between contractors and unions at the local level; (3) four of the 13 federal agencies GAO reviewed have construction projects covered by 26 PLAs that it could identify; (4) the four agencies are the Department of Energy (DOE), the Department of Defense (DOD), the Tennessee Valley Authority (TVA), and the National Aeronautics and Space Administration (NASA); (5) however, officials at 11 of the 13 agencies, including DOD and NASA, said PLAs could be used on agency construction projects without their knowledge because such agreements are generally made between contractors and unions; and collective bargaining matters are not required to be reported to 
the government; (6) available literature and union data show that PLAs exist on numerous other public and private construction projects and on other public projects with some federal funding; (7) also, labor experts and union officials say that the private sector is the biggest user of PLAs; (8) six of the 13 federal agencies GAO reviewed had issued various levels of guidance for PLA use as required by the Presidential Memorandum; (9) however, none specifically provided for notifying the Subcommittee on Oversight and Investigations of any planned use of PLAs; (10) recently, the Office of Management and Budget (OMB) assumed responsibility for assisting the agencies in developing procedures and criteria for use of PLAs; (11) although OMB's draft procedures and criteria for implementing the Presidential Memorandum do not specifically refer to the Subcommittee's request to be notified by agencies planning to use a PLA, the draft would require the collection of the type of information requested by the Subcommittee; (12) according to OMB, it included this provision so that agencies could comply with the request; (13) PLA proponents and opponents that GAO contacted said they believe contract performance comparisons between federal construction projects with PLAs and those without PLAs would be difficult; (14) this is primarily because they believe it would be difficult to find projects similar enough to compare; and (15) in addition, GAO believes that even if similar PLA and non-PLA projects were found, it would be difficult to demonstrate conclusively that any performance differences were due to the use of the PLA versus other factors.
As stated in IRS’s fiscal year 2016 collection program letter, the collection program’s mission is to collect delinquent taxes and secure delinquent tax returns through the fair and equitable application of the tax laws, including the use of enforcement tools when appropriate and providing education to taxpayers to facilitate future compliance. As we have previously reported, IRS’s collection program largely uses automated processes to categorize unpaid tax or unfiled tax return cases and send them to a collection phase to be potentially selected for collection activities. The automated Inventory Delivery System (IDS) categorizes and routes cases based on many factors, such as type of tax and amount owed. As shown in figure 1, IDS analyzes cases to identify and filter out cases that should not be pursued further (shelved) and determine whether cases should be sent to either the telephone phase (the Automated Collection System, or ACS) or the in-person phase (Field Collection) for potential selection. Through IDS routing, the Field Collection program generally makes the first effort to enforce filing and payment requirements for higher-priority cases that are not resolved by sending notices. The Field Collection program is also used to enforce compliance for lower-priority cases left unresolved by ACS’s efforts. The Field Collection program is organized to make direct contact with individuals and business officials to enforce tax filing and payment requirements. The program divides the United States into seven areas. Each area is run by an area director who reports to the Director of Field Collection. Each area is typically divided into six to eight territories, each headed by a territory manager. Each territory, on average, contains six groups that are run by group managers. Group managers directly oversee an average of eight revenue officers. 
Cases sent to the Field Collection program for potential selection are generally identified by the taxpayer's ZIP code and aligned with Field Collection program groups around the nation, each of which works cases in a set of ZIP codes in its geographic proximity. Group managers select and assign collection cases to revenue officers for resolution. Revenue officers are generally assigned to work cases in designated ZIP codes handled by the group. Cases are removed from Field Collection's inventory of cases for potential selection when they are assigned to a revenue officer for resolution, are shelved, or expire under statute of limitations laws. Unless cases sent to the Field Collection program are assigned to a revenue officer for collection work, delinquent taxpayers may not receive contact from IRS to attempt to resolve the delinquency aside from annual reminder notices. Field Collection staff have been reduced by 50 percent from a 2010 high of 7,268 full-time equivalents (FTE), as shown in figure 2. Field Collection revenue officers have consistently closed fewer cases each year since a high in fiscal year 2011, as shown in figure 3. In fiscal year 2015, more than 40 percent of closed cases were closed by shelving rather than a revenue officer working the case. The figure also shows that the year-end Field Collection inventory and queue have generally remained stable in recent years.
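The ZIP-code-based alignment of cases to groups described above amounts to a prefix lookup. The ZIP prefixes and group names below are invented for illustration:

```python
# Illustrative sketch of ZIP-code-based case routing to Field
# Collection groups. The prefix-to-group mapping is an assumption,
# not IRS's actual geographic alignment.

GROUP_BY_ZIP_PREFIX = {"100": "Manhattan group", "606": "Chicago group"}

def route_to_group(zip_code):
    """Map a taxpayer ZIP code to the group working that area."""
    return GROUP_BY_ZIP_PREFIX.get(zip_code[:3], "unassigned")

assert route_to_group("10001") == "Manhattan group"
assert route_to_group("60614") == "Chicago group"
```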
These cases are considered mandatory because group managers typically are required to evaluate whether to assign these cases within 45 days to the first available and qualified revenue officer or document why the cases were not assigned by the deadline or were removed from the hold file. Unlike collection cases in other priority levels, group hold file cases require immediate evaluation for assignment or an explanation if they are not assigned. Some mandatory cases must be assigned in even less than 45 days. For example, collection cases involving missed or lower-than-expected employment tax payments—known as federal tax deposit alerts within IRS—should be assigned within 7 days. Other mandatory collection cases include those involving IRS employees, transfers from other areas within Field Collection, and current cases where additional delinquent taxes have been assessed. The group queue contains the other four priority levels' collection cases—accelerated high, high, medium, and low. The automated system assigns these priorities based on a number of criteria, including the balance due amount, return type, tax year of the case, and last return amount. Accelerated high-priority collection cases—second priority in selection consideration—are cases that IRS has determined are among the most important to pursue, and group managers are generally expected to assign them from the queue first. Characteristics of cases in this category might include those with balances due greater than a selected high-dollar amount or individual delinquent taxpayers with income greater than a selected high amount. Non-accelerated high-priority cases are third priority in selection consideration. Characteristics of these cases may include businesses with recent unpaid employment tax liabilities and those with balances due that fall into a range of selected high-dollar amounts.
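The mandatory-case deadlines described above (7 days for federal tax deposit alerts, 45 days for other group hold file cases) could be tracked with a simple lookup. The case-type labels here are assumptions for the sketch:

```python
# Sketch of group-hold-file assignment deadlines: federal tax deposit
# (FTD) alerts within 7 days, all other mandatory cases within 45.
# Case-type labels are illustrative, not IRS's internal codes.
from datetime import date, timedelta

DEADLINE_DAYS = {"ftd_alert": 7}   # everything else defaults to 45 days

def assignment_deadline(case_type, received):
    """Date by which the case must be assigned (or its non-assignment documented)."""
    return received + timedelta(days=DEADLINE_DAYS.get(case_type, 45))

assert assignment_deadline("ftd_alert", date(2016, 3, 1)) == date(2016, 3, 8)
assert assignment_deadline("irs_employee", date(2016, 3, 1)) == date(2016, 4, 15)
```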
Characteristics of collection cases designated medium and low priority may include balances due within, or less than, a range of relatively moderate dollar amounts (in comparison to high-priority cases) and certain case age parameters that IRS views as lower priority. Low-priority cases include the remaining cases that do not meet the criteria of higher-priority levels. IRS's automated systems send new cases weekly to group managers' hold files and queues. Group managers we met with explained that they sequentially review the hold file and queue cases at each priority level to take into account several case selection considerations. These considerations can include revenue officers' availability, including their geographic proximity to the taxpayer's location, since Field Collection activities often involve face-to-face interaction. Group managers also consider the characteristics of the cases available for assignment, such as whether a business is still active or operating, thus increasing the potential for collectability (see figure 5). The automated systems determine the anticipated difficulty and appropriate category of revenue officer that can be assigned to a case based on the queue priority level and other characteristics of the case, such as complexity. These categories are based on the revenue officer's pay scale, which is aligned with the federal General Schedule (GS) pay system. Revenue officers in the Field Collection program generally are GS-9, 11, 12, or 13. This approach generally ensures that higher-paid revenue officers with more experience are assigned the more challenging or complex cases. In most instances, group hold file and accelerated high-priority cases all must be assigned as soon as a revenue officer with the appropriate characteristics is available. However, IRS guidance provides group managers discretion to pass over these cases and select lower-priority cases when there are justifiable reasons or business needs.
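A minimal sketch of the queue priority levels and GS-grade matching described above; every dollar threshold and the grade mapping are hypothetical, chosen only to make the shape of the logic concrete:

```python
# Hypothetical sketch of queue prioritization and revenue officer
# grade matching. Thresholds and the GS mapping are assumptions,
# not IRS's actual selection criteria.

def queue_priority(balance_due, recent_employment_tax):
    """Classify a queue case into one of the four non-mandatory levels."""
    if balance_due > 1_000_000:
        return "accelerated_high"
    if recent_employment_tax or balance_due > 250_000:
        return "high"
    if balance_due > 25_000:
        return "medium"
    return "low"

# Higher-graded (more experienced) officers handle the more complex cases.
MIN_GRADE = {"accelerated_high": 13, "high": 12, "medium": 11, "low": 9}

def eligible(officer_grade, priority):
    return officer_grade >= MIN_GRADE[priority]

assert queue_priority(2_000_000, False) == "accelerated_high"
assert eligible(12, "high") and not eligible(11, "high")
```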
For example, a group manager can bypass an accelerated high-priority case when, in the group manager's judgment, assignment of that case at the time would be too burdensome based on the size and complexity of the revenue officer's current caseload or when a revenue officer's current caseload has reached inventory levels prescribed in the Internal Revenue Manual. On March 10, 2016, when we received a snapshot of all assigned and unassigned cases in IRS's inventory management system, the majority of cases group managers had selected and assigned to revenue officers were accelerated high- and high-priority cases (see table 1). Likewise, the majority of unassigned cases were medium- and low-priority cases. Although IRS officials did not have historical data readily available to analyze and confirm, they agreed that this mix of cases that we observed on March 10, 2016, is likely typical, as the case selection process is geared toward selecting higher-priority cases. The primary weakness we identified in our analysis of Field Collection case selection processes is a lack of clearly defined and measurable objectives that support the collection program's mission. According to federal internal control standards, objectives defined in clear and measurable terms are a foundation for improving accountability and providing necessary assurance that a program's mission will be achieved. The lack of clearly defined and communicated objectives also negatively impacts other aspects of Field Collection case selection processes that we believe are most relevant to assuring mission achievement. Specifically, the lack of clearly defined objectives directly impacts IRS's ability to effectively measure Field Collection performance, assess risks to the achievement of objectives, and assess the continued effectiveness of automated processes. Finally, we identified the lack of adequate procedures to guide group managers' use of judgment in selecting cases.
These deficiencies increase the risk that Field Collection case selections may not contribute to the program's mission as well as they otherwise could. Having program objectives clearly defined in measurable terms is a foundation that allows managers to take steps to assure a program achieves its mission, according to federal internal control standards. This includes selecting appropriate methods to communicate internally the necessary quality information to achieve program objectives. IRS guides Field Collection employees through a number of different channels, including: the Internal Revenue Manual (IRM), which is IRS's official compendium of personnel guidance; annual program letters; and occasional memos and emails. However, none of the communications we reviewed clearly defined the collection program or case selection objectives. For example, the IRM does not state the objectives of the Field Collection program or what role case selection plays in supporting achievement of those objectives. Similarly, although annual collection program letters to staff stated the program mission and listed distinct activities and case types to focus on in the fiscal year grouped under IRS strategic goals, they did not present clearly defined program or case selection objectives sufficient for purposes of internal control. The objectives are unclear in part because the terms are so general that they do not enable management to assess risks, establish control procedures, or link to related performance measures. An August 2013 email from the Director of Field Collection stated that group managers should select cases so that the mix of assigned cases mirrors what is available in the inventory. This guidance suggests a program objective, but neither the email nor any other guidance identifies it as such. The only IRS communication we obtained that identified program and case selection objectives was a document IRS provided to us in March 2016.
According to IRS officials, the Collection program developed the document in response to prior recommendations we made in reviewing other aspects of collection case selection processes. However, as shown in table 2, our analysis of the document shows that it does not fully document and communicate program objectives, as recommended by federal internal control standards. The lack of clear and consistently communicated objectives was also evident in our focus group discussions with Field Collection managers. We asked managers to describe the objectives in choosing which case to assign to a given revenue officer. Participants provided a range of responses. For example, many participants identified an objective of assigning revenue officers a mix of cases that reflects the current inventory. IRS officials explained that the mix of cases refers to the ratio of cases where the taxpayer has a balance due to those where the taxpayer has not filed a tax return. This case selection objective can also mean balancing the ratio of individual and business taxpayer cases so that the mix of assigned cases mirrors what is available for assignment. This principle reflects the guidance provided in the August 2013 email from the Director of Field Collection. Focus group participants also described productivity, or resource use, as an objective. For example, one participant said, "I look at cases that are going to be more productive rather than assigning old, inactive cases. The more productive cases are those cases that have come to Field Collection more recently or have more recent [collection assessments or unfiled returns]. The older cases are stale." In contrast, several focus group participants said that the program's automated prioritization system sometimes gives higher priority levels to cases that are older and may not be collectable, such as cases that have been assigned to ACS for a long time and have not been resolved.
Some participants also stated that balancing the revenue officer's workload was an objective. According to these participants, this involves looking at the number and complexity of the current assigned workload of a given revenue officer to ensure that the next case assigned does not overburden the officer. In a March 2016 email to staff, the Director of Collection defined fairness in the program as having three components: (1) fairness to the taxpaying public by pursuing those who fail to voluntarily comply, (2) an equitable process to select cases expected to best promote voluntary compliance and other apparent Collection goals or objectives, and (3) respect and adherence to policies and procedures that safeguard relevant taxpayer rights in the collection process. This effort to define fairness came in response to recommendations we made in reviewing other aspects of IRS's collection selection processes. While the effort demonstrates progress, our analysis of this email shows that it still does not meet applicable standards for clearly defining objectives and communicating them with methods appropriate for use in internal control, as detailed in table 3. Because of the shortcomings identified in table 3, IRS risks that employees implementing control procedures may not understand how fairness applies to their work. For example, territory and group managers in our focus groups offered a variety of opinions and perspectives on how to assure fairness in case selection.
Specifically, when we asked focus group participants what fairness means to them and how they apply fairness in case selection, managers' responses included: avoiding conflicts of interest, such as cases where the group manager or revenue officer has a prior relationship with an individual or business; selecting cases with consideration of geography, such as to ensure there are no areas where taxpayers are in a "tax-free zone;" and diversifying selections by type of business so that the Field Collection program provides broad coverage, selections are representative, and no one group of taxpayers is selected more than others. Our focus group discussions also showed that managers had inconsistent views on the meaning of fairness in case selection and that some may not fully understand how to apply fairness or believe the selection process precludes unfair selection. In half of the group manager focus groups, at least one participant said he did not know what the role of fairness is in case selection or did not consider fairness in assigning cases. Some also said that choosing any case for assignment would be fair because all of the cases represent noncompliance and the automated selection process fairly prioritizes cases for potential selection. According to IRS officials, they have not clearly defined Field Collection program and case selection objectives and fairness because they believe their efforts to define them in the document and email described above were sufficient. However, without clearly defined and clearly understood objectives aligned to the Field Collection mission, program management lacks reasonable assurance that case selection processes support achievement of IRS's mission, including applying tax law with integrity and fairness to all.
The lack of clear and consistent objectives also impacts IRS's ability to measure program performance, assess risks to the program mission, and determine whether the automated processes used are still appropriate. We found that the Field Collection program tracks some case assignment and closure data. Specifically, Field Collection management compares open case inventory to a portion of the case inventory awaiting assignment. IRS officials, including managers in all eight of our focus groups, noted that they use case mix data to monitor or adjust case selections on a monthly basis to achieve this balance. Our analysis of Field Collection case data suggests that, overall at the national level, the program's mix of assigned cases is aligned—to some degree—with the available inventory by noncompliance type and taxpayer type, as shown in table 4. However, because the Field Collection program has not yet established clearly defined objectives and does not have related performance measures, it lacks a way to measure program performance effectively over time. Federal internal control standards state that measurable objectives allow management to assess program performance in achieving them. For example, if one of Field Collection's objectives were to achieve fairness and it defined fairness to include ensuring broad coverage of the taxpayer population in collection status, then the Field Collection program would need to establish measures to assess its achievement of this objective. Similarly, if a case selection objective were to assign cases so that those assigned to revenue officers reflect the Field Collection group inventory, then IRS would need to clearly link this objective to related performance measures to which staff were held accountable. We identified a number of potential data elements in the case selection system that could be helpful to IRS in developing such performance measures, as shown in table 5.
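As one illustration of a possible performance measure, the case-mix alignment idea discussed above could be quantified as the largest gap in share between the assigned cases and the available inventory. The category names and counts here are invented:

```python
# Hedged sketch of one possible case-mix measure: how closely the mix
# of assigned cases mirrors the mix available in the inventory.
# Counts and category names below are invented for illustration.

def mix_ratios(counts):
    """Convert raw case counts to shares of the total."""
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

def max_mix_gap(assigned, inventory):
    """Largest absolute difference in share between the two mixes."""
    a, i = mix_ratios(assigned), mix_ratios(inventory)
    return max(abs(a[k] - i[k]) for k in i)

assigned  = {"balance_due": 60, "unfiled_return": 40}
inventory = {"balance_due": 70, "unfiled_return": 30}
assert round(max_mix_gap(assigned, inventory), 2) == 0.10
```

A measure like this would still need a defined objective and a tolerance (e.g., how large a gap is acceptable) to function as an internal control.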
We found that IRS currently has two approaches for assessing risks within the agency. These approaches are: Internal controls framework. The procedures in IRM 1.4.2 govern IRS's processes for monitoring and improving internal controls, which include identifying and mitigating risks. Managers are expected to understand the risks associated with their operations and ensure that controls are in place and operating properly to mitigate those risks. Enterprise Risk Management (ERM). ERM is broader in scope than internal controls, focusing on service-wide risks. ERM is intended to help the service consider risk in setting strategy and determine how much risk the service is willing to accept. IRS implemented ERM in February 2014 to alert IRS management to IRS-wide risks and to serve as an early-warning system to identify emerging challenges and address them before they affect operations. However, in order to use both of these approaches effectively to identify, analyze, and manage risk, IRS needs to have clearly defined, measurable objectives. Federal internal control standards state that effectively managing a program to achieve its mission involves comprehensively considering and assessing potential risks in the program's internal and external operating environments and establishing risk tolerances (the acceptable level of variation in performance relative to the achievement of objectives). Such tolerances are often stated in terms of performance measures, which allow performance assessment toward achieving objectives. Lacking clearly defined objectives and associated performance measures therefore hinders the Field Collection program's ability to effectively assess, identify, and address risks to the achievement of its mission. Without clearly defined objectives, risks to achieving those objectives cannot be identified and analyzed, nor can risk tolerances be determined.
According to IRS officials, the Field Collection program has not assessed risks posed by case selection processes because selection processes are well designed. However, unless Field Collection management identifies and understands the significance of the risks to achieving identified objectives, IRS lacks sufficient assurance that the program's case selection processes support achievement of objectives and respond to the identified risks within acceptable tolerances. The Field Collection program's automated prioritization and decision support systems are control procedures that are intended to help guide staff to reduce risks in making decisions. For example, the priority levels may help guide group managers to generally select the types of cases management considers higher priority, such as those that could yield more revenue or other positive compliance results, which potentially reduces the risk of using resources inefficiently. However, because Field Collection lacks program and case selection objectives, it is not clear what objectives the automated processes support or which specific risks they are intended to address. According to federal internal control standards, periodic reviews of controls assure that procedures continue to work as intended. Monitoring internal control design and effectiveness and revising control procedures as needed provide sufficient evidence that the controls continue to be effective in addressing risks (which can change over time) and support achievement of program objectives. Although IRS occasionally makes and documents ad hoc changes to these automated processes to improve results, Field Collection lacks documented procedures to periodically review automated case selection policies, procedures, and related activities, such as the case characteristics and thresholds used to classify cases by priority level.
IRS established the queue priority categories in 2000 and modified them in 2001, but did not have available documentation of periodic assessments to assure they continued to be effective in the intervening 15 years. According to IRS officials, Field Collection lacks documented procedures for periodic assessments because selection processes are well designed. However, without periodic reviews, IRS lacks reasonable assurance that the case selection processes are still effective in working toward achieving the program's mission, including fairness to all taxpayers. Management is responsible for establishing operating procedures and communicating them to staff to ensure they are followed so that objectives are achieved. Establishing and communicating guidance—such as documenting procedures—provides necessary assurance that the staff responsible for implementing procedures understand and apply them to effectively achieve program objectives. The Field Collection program has established and communicated operating procedures to guide automated aspects of case selection. However, Field Collection has provided insufficient guidance to group managers on the use of professional judgment when manually selecting cases. As we noted earlier, we learned about the judgment group managers exercise in selecting cases by talking with Field Collection officials. For example, during the focus groups, managers described how professional judgment factors into the case selection process. Some group managers said they may choose to select a given case because of its geographic proximity to other cases assigned to the revenue officer. Similarly, several group managers discussed how they used professional judgment based on previous experience to assess a case's potential to result in collection. This is consistent with the findings of a September 2014 report from the Treasury Inspector General for Tax Administration (TIGTA).
Although group managers use professional judgment when selecting cases for assignment—resulting in the commitment of revenue officer resources and some cases being selected over others—IRS has limited guidance on how to exercise such judgment. IRS's official guidance—the Internal Revenue Manual (IRM)—does not guide group managers on how to exercise judgment, such as by listing the factors that ought to be taken into account to help ensure that Field Collection program and case selection objectives are achieved. The only place the IRM acknowledges professional judgment is in a note that states, "There are many considerations when assigning work such as: risk level, case grade, current inventory, geographical issues, etc." The only other program-wide guidance we identified was the August 2013 email from the Director of Field Collection stating that cases should be selected so that the mix of assigned cases mirrors what is available in the queue. According to IRS officials, Field Collection has not developed and documented guidance for how group managers are to exercise professional judgment in case selection because they consider current procedures sufficient, such as relying on group managers to understand local conditions, relying on their previous experience as revenue officers, or gaining necessary experience on the job. However, the use of professional judgment without sufficient guidance presents risks and results in Field Collection management not having sufficient assurance that the case selection decisions group managers make support achievement of the program's mission of applying the tax law with integrity and fairness to all. The Field Collection program's automated systems and the decisions made by group managers determine whether some collection cases are pursued sooner, later, or at all. Case selections can affect federal spending, revenue collected, and taxpayer confidence in the tax system's fairness, which can affect overall voluntary compliance.
Therefore, it is important that the Field Collection program select and pursue collection cases that are most likely to produce results in support of IRS's mission, including applying tax laws with integrity and fairness to all. Without clearly defined and measurable objectives, the Field Collection program cannot know, or provide taxpayers assurance, that its case selection procedures are effectively supporting its mission. Further, without objectives and other controls, IRS will not be able to monitor performance; identify, assess, and manage risks; or ensure that its automated processes are still effective. Moreover, while the use of professional judgment is to be expected in the selection of cases for assignment, without guidance for managers, IRS will not have assurance that selections are being made consistently across its regional offices. To ensure that Field Collection program case selection processes support IRS's and the Collection program's mission, including applying tax laws with integrity and fairness to all, we recommend that the Commissioner of Internal Revenue take the following five actions. Develop, document, and communicate Field Collection program and case selection objectives, including the role of fairness, in clear and measurable terms sufficient for use in internal control. Develop, document, and implement performance measures clearly linked to the Field Collection program and case selection objectives. Incorporate program and case selection objectives into existing risk management systems or use other approaches to identify and analyze potential risks to achieving those objectives so that Field Collection can establish risk tolerances and appropriate control procedures to address risks. Develop, document, and communicate control procedures guidance for group managers to exercise professional judgment in the Field Collection program case selection process to achieve fairness and other program and collection case selection objectives.
Develop, document, and implement procedures to periodically monitor and assess the design and operational effectiveness of both automated and manual control procedures for collection case selection to assure their continued effectiveness in achieving program objectives. We provided a draft of this report to the Commissioner of Internal Revenue for review and comment. The Deputy Commissioner for Service and Enforcement provided written comments on August 25, 2016, which are reprinted in appendix II. IRS agreed with our recommendations and described actions it plans to take to address each of them. IRS stated that it appreciates GAO's support and guidance as it continues to seek opportunities to improve Field Collection case selection controls and case selection throughout IRS. IRS also stated that our report does not identify any instances where the selection of a case was considered inappropriate or unfair. However, as described in our scope and methodology, we did not design our study to look for cases of inappropriate selection but rather to assess the internal controls that help safeguard the case selection processes. By evaluating the Field Collection program's internal control framework for selection, we were able to determine whether IRS has processes in place that provide reasonable assurance of fair case selection. IRS outlined planned actions to address each of our recommendations. However, it is not clear that these actions will be fully responsive to the first recommendation that IRS develop, document, and communicate Field Collection program and case selection objectives, including the role of fairness, in clear and measurable terms. IRS stated that the Small Business/Self-Employed Division (SB/SE) will develop fiscal year 2017 program objectives that align with the mission of SB/SE and that the Collection program will develop and document specific Field Collection and case selection activities that will support SB/SE objectives.
Our concern is that it is not clear how these efforts will address our recommendation to establish Field Collection (not division-level) program and case selection objectives. As described in this report, listing distinct activities or case types to focus on in a fiscal year does not meet the internal control standard of clearly defining and communicating program objectives in specific and measurable terms. Since it is not clear that the actions IRS described will result in Field Collection program and case selection objectives sufficient for internal control purposes, IRS's ability to address our related recommendations to establish performance measures, assess program risks, and monitor control procedure effectiveness may be limited. Clearly defining objectives is the foundation for effective implementation of internal control standards, including assurance that program operations effectively address risks to program objectives and support the achievement of objectives over time. In response to our recommendation to develop, document, and communicate control procedures guidance for Field Collection group managers to exercise professional judgment in case selection, IRS stated it would review current procedures and guidance and make changes if necessary. Given that we found little documented guidance on the appropriate use of professional judgment, IRS lacks sufficient assurance that case selections support achievement of the program's mission of applying the tax law with integrity and fairness to all. IRS provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the Chairmen and Ranking Members of other Senate and House committees and subcommittees that have appropriation, authorization, and oversight responsibilities for IRS. We will also send copies of the report to the Secretary of the Treasury, Commissioner of Internal Revenue, and other interested parties.
In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9110 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. Our objectives were to (1) describe the Field Collection program's processes (automated and manual) for prioritizing and selecting cases and (2) assess how well Field Collection case selection processes support the collection program's mission, including applying tax laws "with integrity and fairness to all." To describe the case selection processes, we reviewed program documents and interviewed knowledgeable IRS officials, including officials in the Small Business/Self-Employed Division Collection and Field Collection offices. Our document review included guidance in the Internal Revenue Manual (IRM) and automated system manuals. Our analysis included both automated and manual processes that may involve IRS staff. We analyzed these processes to outline and graphically depict systems and processes IRS uses to prioritize and select cases. To provide information on the assigned and unassigned case inventory, we analyzed data from IRS Field Collection's main inventory management and case selection information system, ENTITY. The data included characteristics such as the dollars due on selectable and assigned cases, the age of the cases, and the priority levels of the cases as determined in the prioritization process. These data describe a one-time snapshot of IRS Field Collection case inventory characteristics on March 10, 2016. The data were only available as a snapshot because, according to IRS officials, ENTITY is the only source for data on the priority level of each case and the data on priority levels are updated frequently and are not stored.
To assess the reliability of the ENTITY March 10, 2016, snapshot data we present in the report tables, we interviewed knowledgeable IRS officials and manually tested the data for missing data, outliers, or obvious errors. We also reviewed relevant documentation on management reports and case routing data. In addition, we received another snapshot of the case inventories for May 25, 2016, and compared the data to the March 10 snapshot. We analyzed the data to determine whether they changed significantly between the two points in time—which, for the purposes of our analysis, we determined would be a greater than 10 percent change—and found no significant changes. We found the data sufficiently reliable for the analysis that we conducted in this review. To evaluate how well the case selection processes support program goals, we compared the selection process and procedures to selected standards in Standards for Internal Control in the Federal Government, including the standard that managers define program objectives, assess risks to the objectives, and design controls to support the achievement of the objectives and address the risks. We selected the standards by assessing which are among the most relevant to ensuring the selection processes support mission achievement, given our objectives and the program context. These standards include defining program objectives in clear and measurable terms, which is the internal control foundation for the other selected standards; assessing risks and establishing risk tolerances; designing and implementing control procedures to guide operations and address risks; and establishing performance measures and procedures for assessing control procedures, in order to assess program performance in achieving objectives and to ensure that controls effectively address risks and support achievement of objectives over time.
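The snapshot reliability check described above—flagging any inventory characteristic that changed by more than 10 percent between the March 10 and May 25 snapshots—reduces to a simple comparison. A minimal sketch follows; the field names and figures are illustrative assumptions, not actual ENTITY data.

```python
# Hypothetical sketch of the snapshot reliability check: flag any
# inventory field that changed by more than 10 percent between two
# snapshots. Field names and figures are illustrative, not actual
# ENTITY data.

def percent_change(old, new):
    """Absolute percent change from old to new."""
    if old == 0:
        return float("inf") if new != 0 else 0.0
    return abs(new - old) / abs(old) * 100.0

def significant_changes(first, second, threshold=10.0):
    """Fields present in both snapshots whose values changed by more than threshold percent."""
    return {
        field: (first[field], second[field])
        for field in first
        if field in second and percent_change(first[field], second[field]) > threshold
    }

# Illustrative snapshots (March 10 and May 25 in the review).
march = {"cases_assigned": 10000, "cases_unassigned": 25000, "dollars_due": 4.0e9}
may = {"cases_assigned": 10400, "cases_unassigned": 28000, "dollars_due": 4.1e9}

flagged = significant_changes(march, may)
# cases_unassigned changed by 12 percent, so it is the only field flagged.
```

In the review itself, no field exceeded the 10 percent threshold, which is why the auditors judged the snapshot data sufficiently reliable.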
Our review of the design of controls included the IRM and other Field Collection program documents that we used to describe the case selection process in objective one. We conducted eight focus groups with a non-generalizable, nationwide random sample of IRS Field Collection managers—two focus groups with territory managers and six with group managers—to collect evidence on the implementation of the case selection process. We received a list of all Field Collection group and territory managers from IRS. To ensure that managers selected had sufficient experience in their respective positions to actively contribute to the focus groups, we removed managers who were "acting" or had less than two years of experience in their position. We arranged the list of managers in a random order. Managers were assigned a focus group date and time in order of their random selection, controlling for their time zones, and were given the option to participate in the focus group or not. Forty-three of the 46 group managers who agreed to participate in the focus groups actually did; all 16 territory managers participated. All of the focus groups were conducted by phone in the week of March 28, 2016. We asked all eight focus groups questions about internal controls in the Field Collection case selection process, including the program objectives of the case selection process and the case characteristics managers consider when making case selections and assignments. We documented the responses from the focus group participants and categorized the responses into themes. We analyzed the themes for their frequency and pervasiveness through the focus groups. We looked for patterns or trends across all eight focus groups and for differences between the group and territory manager focus groups. We conducted this performance audit from August 2015 to September 2016 in accordance with generally accepted government auditing standards.
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. James R. McTigue, Jr. (202) 512-9110 or [email protected]. In addition to the contact named above, Brian James (Assistant Director), David Dornisch, Steven Flint, Travis Hill, Ted Hu, Ronald W. Jones, Kay Kuhlman, Donna Miller, Justin Riordan, and Andrew J. Stephens made key contributions to this report. | IRS's Field Collection program is where IRS revenue officers make in-person contact with noncompliant individuals and business officials to enforce tax return filing and payment requirements. Sound processes for selecting cases are critical to maintain taxpayer confidence in the tax system and use federal resources efficiently. GAO was asked to review the processes IRS uses to select collection cases for potential enforcement action. This report (1) describes the Field Collection program's automated and manual processes for prioritizing and selecting cases and (2) assesses how well Field Collection case selection processes support the collection program's mission, including applying tax laws "with integrity and fairness to all." To address these objectives, GAO reviewed IRS documents and conducted interviews with IRS officials knowledgeable about the case selection processes, including a series of focus groups with IRS Field Collection managers. GAO evaluated how well the processes adhere to relevant federal standards for internal control. The Internal Revenue Service (IRS) uses automated processes to prioritize cases to be potentially selected for in-person contact to resolve a tax collection issue (see figure), but group managers in the Field Collection program manually select the cases to assign to revenue officers.
For example, when reviewing cases, group managers consider characteristics of the revenue officer available—such as current workload—and case characteristics—such as potential collectability—when deciding whether to assign a case. GAO found weaknesses in the Field Collection program's internal controls for case selection, including: Program objectives are not clearly defined and communicated. IRS has not sufficiently developed and communicated specific and measurable program objectives, including fairness. GAO heard different interpretations of program objectives and the role of fairness from focus group participants. Without clearly defined and clearly understood objectives aligned to its mission, Field Collection management does not have reasonable assurance that case selection processes support achievement of that mission. Further, the lack of clearly articulated objectives undercuts the effectiveness of Field Collection management's efforts to measure performance and assess risks. Documentation and assessment of case selection risks are inadequate. The Field Collection program's automated prioritization and decision support systems are control procedures that may guide staff to reduce risks. However, the Field Collection program does not have documented procedures for periodically reviewing automated aspects of case selection. Further, the Field Collection program lacks sufficient guidance for group managers to exercise judgment in case selection. These deficiencies limit Field Collection management's ability to provide reasonable assurance that selection decisions effectively support achievement of IRS's mission. GAO is making five recommendations, including that IRS: develop and document objectives in clear and measurable terms, including fairness; provide guidance for group managers' use of judgment in selecting cases; and develop procedures to assess automated and manual processes.
IRS agreed with the recommendations and outlined planned steps to address them. |
Within the Department of Energy (DOE), federal and contractor employee training is provided through a decentralized training structure. DOE's headquarters offices, field offices, and contractors all have their own training programs and budgets and dedicated staffs. These programs provide training to federal and contractor employees on a wide variety of subjects. Comparing fiscal year 1995 and fiscal year 1997, DOE's expenditures on training decreased by about $175 million, or about 32 percent. A comparison of DOE's training expenditures with those of other federal agencies and the private sector indicates that DOE's training expenditures could be lower. DOE has also recognized this. Because DOE emphasizes decentralized management, it assigns the main responsibility for employee training to individual DOE offices and contractors. These organizations, in turn, have established their own training programs and budgets with dedicated staffs to provide employee training. At DOE's headquarters, the Office of Management and Administration has the main responsibility for DOE-wide training issues. This office is responsible, for instance, for establishing DOE's training policies, procedures, and management plans. The administration of training, however, largely falls within the purview of DOE's individual offices and contractors. Specifically, these organizations are responsible for planning, providing resources for, developing and delivering, and reporting on the training given to their employees. In addition, these organizations are responsible for ensuring the efficient and effective management of their training programs. Generally, these organizations offer their employees three types of training: general, career development, and performance development. General training, which applies to all employees within the Department, includes courses on such subjects as equal employment opportunity, ethics, and security.
Career development training, which supports the career growth of employees, includes courses on such subjects as time and stress management. Performance development training, which supports the acquisition or improvement of work-related skills, includes a wide range of courses, from technical courses on subjects such as nuclear physics and chemistry to nontechnical courses on back care and hearing conservation. The Department spends hundreds of millions of dollars annually training federal and contractor employees. According to DOE data, there has been a significant reduction in DOE's training expenditures—about 32 percent—comparing fiscal years 1995 and 1997 (see table 1.1). The reduction in DOE's annual training expenditures from fiscal year 1995 through fiscal year 1997 can be attributed to several factors. Those factors include (1) about a 13-percent decrease in the number of DOE and contractor employees; (2) greater use of advanced training technologies, such as computer-based learning; and (3) congressionally mandated reductions in training funds. DOE's training expenditures could be lower, according to fiscal year 1997 data. First, the amount spent on employee training varied widely among DOE field offices that perform similar functions. For example, according to DOE, the Department's Richland and Savannah River Operations Offices offered similar training, including courses on radiological worker training. However, the Savannah River Operations Office spent less than $2,300 on training per federal employee while the Richland Operations Office spent over $4,500 per employee. Second, DOE's average training expenditure per federal employee was higher than that of most other federal agencies or major private sector companies reviewed by the American Society for Training and Development's Benchmarking Forum.
Specifically, the Society's Benchmarking Forum collected and analyzed fiscal year 1997 training cost data from numerous organizations, including DOE, several other federal agencies, and nearly 60 companies in the private sector. The data showed that DOE's average training expenditure of $1,808 per federal employee was higher than that of most other federal agencies reviewed (see table 1.2). The data also showed that DOE's average training expenditure per federal employee was about $300 higher than the private sector average. The private sector companies included businesses of various types, such as American Telephone and Telegraph and the Dow Chemical Company. Similarly, for contractor employees, DOE's training expenditures could be lower, according to fiscal year 1997 data. First, the amount spent on contractor employee training varied widely at DOE locations that perform similar functions. For instance, the contractor supporting DOE's Richland Operations Office spent an average of about $1,510 per employee while the contractor supporting the Savannah River Operations Office spent an average of about $3,500 per employee. Second, the average training expenditure per DOE contractor employee during fiscal year 1997 was about $130 higher than the private sector average. DOE has also analyzed its training costs relative to these other organizations and believes the analysis represents a good comparison of training data. According to a DOE training official, the analysis shows, for instance, that DOE's costs per training day are still too high compared with those of private sector companies. In commenting on a draft of this report, DOE indicated that its current training expenditure level of 2.5 percent to 3.0 percent of payroll was comparable to similar, technology-intensive, large, private companies.
We noted, however, that DOE's average training expenditure per federal employee was higher than that of most other federal agencies or major private sector companies reviewed by the American Society for Training and Development's Benchmarking Forum. DOE did not dispute that information. As agreed with the Chairman, Subcommittee on Energy and Water Development, House Committee on Appropriations, we determined the problems that exist with DOE's training program and the changes that are needed to address those problems. Specifically, this report (1) discusses DOE's current process for setting its training budget, (2) identifies opportunities to reduce the costs associated with DOE's training program, and (3) evaluates DOE's draft plan for training the Department's employees in the future. To review the current process for setting the training budget, we contacted both DOE headquarters and field office officials. At DOE headquarters, we held extensive discussions with officials within the Office of Training and Human Resource Development. This office has the lead responsibility for drafting a new training plan that, when completed in early calendar year 1999, will lay out a strategy for improving DOE employee training over fiscal years 1999 through 2001. We also held discussions with officials on the Department's Training and Development Management Council. This council is responsible for overseeing the efforts to improve DOE's training program. In addition, we interviewed officials and reviewed training activities of six DOE headquarters offices—the Offices of Defense Programs; Environment, Safety, and Health; Energy Information Administration; Environmental Management; Science (formerly Energy Research); and Fossil Energy. These offices were selected because, according to their staffing levels, they are some of the largest offices within DOE headquarters.
We further held discussions with officials at selected DOE field locations, including officials at the Department’s two training centers of excellence—the Nonproliferation and National Security Institute in Albuquerque, New Mexico, and the National Environmental Training Office in Aiken, South Carolina. Generally, a center of excellence is a DOE organization that has been selected for its training, development, and technical expertise in a topical area that cuts across the entire Department. To identify opportunities to reduce the costs associated with DOE’s training program, we reviewed various departmental documents. These included, but were not limited to, (1) a DOE memorandum documenting the results of the Department’s 1995 training review; (2) DOE’s 1995 and 1996 strategic training implementation plans; (3) DOE’s 1998 draft training plan; and (4) the minutes of the Training and Development Management Council. We also relied on the GAO work done under three previous assignments: (1) Department of Energy: Training Cost Data for Fiscal Years 1995 Through 1997 (GAO/RCED-97-140R, May 6, 1997); (2) Department of Energy: Status of DOE’s Efforts to Improve Training (GAO/RCED-97-178R, June 27, 1997); and (3) Department of Energy: DOE Contractor Employee Training (GAO/RCED-98-155R, May 8, 1998). To further identify opportunities to reduce DOE’s training costs, we compared DOE’s training costs with those of other federal agencies and the private sector. Specifically, we contacted training officials both inside and outside the federal government. Within the federal government, these contacts included training officials with the Department of Health and Human Services, the Department of Transportation’s Federal Aviation Administration, and the Tennessee Valley Authority. These agencies, as well as DOE, voluntarily provided training cost information to us and a private organization, the American Society for Training and Development’s Benchmarking Forum. 
Outside the federal government, we contacted an official with the American Society for Training and Development's Benchmarking Forum, which had collected training cost information from nearly 60 private sector companies. We obtained training cost information from these contacts, analyzed it, and compared it with the training cost information we had obtained from DOE. Generally, comparing training cost information from DOE, other federal agencies, and the private sector appeared appropriate. All organizations, for instance, offer their employees a certain amount of technical skills training. The training cost information we obtained was for fiscal year 1997 and was the latest data available. To evaluate DOE's draft plan for training the Department's employees in the future, we contacted federal training officials both inside and outside of the Department. Externally, these contacts included training officials with the Office of Personnel Management; Defense Information Systems Agency; Federal Emergency Management Agency; and Nuclear Regulatory Commission. Within DOE headquarters, these contacts included officials with the Offices of Science; Environmental Management; Environment, Safety, and Health; Field Management; Procurement and Assistance Management; and Human Resources Management. At DOE field locations, these contacts included officials at the Nonproliferation and National Security Institute; National Environmental Training Office; Richland Operations Office; Rocky Flats Field Office; and Savannah River Operations Office. In all cases, these officials were contacted to obtain their views on the types of training problems DOE should be addressing in its draft training plan. We also reviewed various reports that have dealt with improving federal employee training. These included, among others, Getting Results Through Learning, Human Resource Development Council, June 1997; Leadership for Change: Human Resource Development in the Federal Government, U.S.
Merit Systems Protection Board, July 1995; and Leadership for America: Rebuilding the Public Service, The National Commission on the Public Service, 1989. We provided a draft of this report to DOE for its review and comment. DOE's comments are included as appendix I and are discussed in the chapters where appropriate. We conducted our work from June 1998 through January 1999 in accordance with generally accepted government auditing standards. Two important aspects associated with the management of DOE training could be improved: how DOE develops its training budgets and how it spends its training funds. We found that DOE has not successfully completed any of the critical steps necessary to develop a sound and defensible training budget. Specifically, we noted that occupational training needs have not been defined throughout the Department and incorporated into employees' individual development plans (IDP); IDPs have generally not been prepared and used to support DOE offices' annual training plans; and annual training plans have generally not been prepared and used to support DOE's annual training budgets. With respect to how DOE spends its training funds, we identified two factors that account for the high costs associated with DOE training: DOE offices and contractors offer a high percentage of training that is not mandated by laws and/or regulations, and they independently develop and deliver training. DOE, for its part, is aware of the problems associated with its budgeting for and expenditure of funds on training and is considering corrective actions. However, our review raised questions regarding the direction and/or pace of DOE's actions. According to the Office of Personnel Management and DOE guidance, certain steps are critical in developing a training budget. First, training needs should be defined. Second, the training needs should be incorporated into employees' IDPs.
Third, the IDPs should be used to prepare annual training plans. The successful completion of these steps supports the development of sound and defensible training budgets. We found, however, that DOE has not successfully completed any of these steps. Specifically, we found the following: occupational training needs have not been defined throughout the Department and incorporated into employees' IDPs; IDPs have generally not been prepared and used to support DOE's annual training plans; and annual training plans have generally not been prepared and used to support DOE's annual training budgets. As a result, DOE's annual training budgets are not directly tied to the training needs of the Department. Instead, DOE's annual training budgets have generally been based on the amount of funding received in previous fiscal years. A training needs assessment is a critical initial step in developing a training budget. According to Office of Personnel Management regulations, an agency needs to assess its occupational training needs periodically. The assessment evaluates what performance is desired within an agency and what performance presently exists. When a gap exists, the assessment identifies the training necessary to elevate performance to the level desired. We found that DOE has not conducted a comprehensive assessment of occupational needs throughout the Department. The primary reason that a comprehensive assessment has not been conducted throughout DOE is that the Department's order on federal employee training contains no provision for doing one. Specifically, the training order outlines the objectives and responsibilities for federal employee training throughout the Department. It also outlines the components essential to the administration of employee training. The order does not, however, require that occupational training needs be assessed.
DOE training officials indicated that such an assessment had been included in the preceding DOE order on employee training but was deleted from the current order under the Department's paperwork reduction program. During this and previous reviews of DOE activities, we have identified several departmental occupational groups that would most likely benefit from an assessment of occupational training needs. For instance, we believe that property managers may not be adequately trained. Supporting that view, we found that DOE recently surveyed 145 property managers and determined that 65 (or about 45 percent) had received no formal property management training. DOE also recently surveyed its field locations to determine if project managers are being properly trained. DOE guidance requires that employees who are project managers be certified as possessing certain skills and having received certain training. However, preliminary data show that many project managers have not received certification. For instance, DOE's Savannah River Operations Office reported that only 2 of its 33 project managers had been certified. We further reported that managers throughout DOE believe that the lack of skilled staff in program, project, and contractor oversight positions is one of the Department's most fundamental problems. Recognizing that certain occupational groups should have their training needs assessed, DOE, in November 1998, proposed a revised order and manual on federal employee training. The proposed manual states that an occupational needs assessment must be completed at least every 5 years once the revised order and manual are made final. In addition, the manual notes that such an assessment must include, but not be limited to, scientific and technical, acquisition, project management, and financial management functions. The DOE training official responsible for drafting the revised order and manual advised us that the 5-year assessment cycle was arbitrarily chosen.
Furthermore, the sequence in which various occupational groups will be assessed has not yet been decided. DOE officials expect the revised order to be made final in the spring of 1999. After training needs have been established, IDPs should be prepared. According to DOE's training order, an IDP is required for all employees within 60 days after they join the Department or transfer to a new position, and these IDPs should be reviewed and updated annually. The IDPs provide the mechanism to define total individual training needs within the Department and are to be used in preparing DOE offices' annual training plans. Only a small percentage of the employees in the DOE offices we reviewed have completed an IDP. During 1998, we reviewed the training practices followed by six DOE headquarters offices. Only one office had IDPs completed for more than half of its employees. The six offices provided us with the estimates of completed IDPs shown in table 2.1. For the six offices combined, only 33 percent of the employees had completed an IDP. Recognizing that few of its employees had completed an IDP, DOE training officials established a goal in November 1998 of having 90 percent of DOE employees with an approved IDP by December 31, 1999. DOE training officials explained that the 90-percent goal is based on the belief that 90 percent may be the best percentage achievable. Some DOE training managers interviewed were not aware, until we informed them, that the Department's order on federal employee training requires the completion of an IDP, with certain exceptions, for 100 percent of the Department's employees. Each DOE office should complete an annual training plan that is based in part on the information contained in the IDPs, according to DOE's training order. This plan provides the basis for developing training budgets.
It should also contain certain information, such as the estimated number of employees to be trained, the type of training necessary, and the resources required to provide that training. We found that the annual training plans either have not been completed or did not contain the information necessary to justify a budget request. Five of the six offices had not completed an annual training plan for fiscal year 1998. For the one office that had—the Office of Environment, Safety, and Health—the plan did not contain the information required by DOE's training order. For instance, the plan did not estimate the number of employees to be trained, the type of training necessary, or the resources required to provide that training. Instead, the plan identified the initiatives planned for fiscal year 1998, such as the need to continually provide employees with efficient course registrations and accurate training records. The DOE training official responsible for preparing the annual training plan explained that the plan did not contain certain information because it had been prepared using the previous year's annual training plan as a guide, and this plan lacked this information. Recognizing that annual training plans were not being completed or were not being completed properly, DOE, as early as 1996, had attempted to develop a template for the plan. DOE envisioned that the template would include an outline and suggested language. Because this template was subsequently canceled, DOE training officials in December 1998 instead disseminated a copy of a properly completed fiscal year 1999 annual training plan as the model to be followed. We identified two opportunities for reducing the costs associated with DOE and contractor employees' training. First, some nonmandatory training could be reduced or eliminated. According to a departmental estimate, about 90 percent of the training offered by DOE offices and contractors is not mandated by laws and/or regulations.
In addition, DOE has not developed criteria on what type of nonmandatory training is appropriate. Some nonmandatory training is beneficial for career growth and professional development, such as courses on effective writing and oral presentation skills. However, the benefits of other nonmandatory training, such as determining social styles in the workplace, seemed less clear. Second, DOE’s headquarters offices, field offices, and contractors have developed and delivered duplicative courses and nonstandardized training across the Department. This problem has occurred because DOE’s decentralized training structure allows generally applicable courses, such as project management, hazardous worker training, and occupational safety and health, to be developed by each office and contractor. Federal agencies offer various types of training to their employees, including technical skills, executive development, supervisory skills, and mandatory training. We found that DOE as well as four other federal agencies estimated their fiscal year 1997 training expenditures by course type and provided that data to the American Society for Training and Development’s Benchmarking Forum. According to these estimates, only 10 percent of DOE’s fiscal year 1997 training funds were spent for federal employee training mandated by laws and/or regulations. In comparison, two other agencies spent more and two other agencies spent less of their fiscal year 1997 funding on mandatory training. Specifically, the Federal Aviation Administration spent about 42 percent and the Tennessee Valley Authority spent about 17 percent on mandatory training, while the Centers for Disease Control and Prevention spent about 3 percent and the National Institutes of Health spent about 3 percent. In addition, some training considered by DOE contractors to be mandated by laws and/or regulations may not in fact be legally required. 
For instance, in a 1998 report of contractor training activities at DOE’s Savannah River Plant, we found that the contractor’s internal audit office questioned the legal references for 30 percent of the training courses listed as mandatory. In that report, we pointed out that the contractor could not provide us with justification for each course it had considered mandated by regulation. We also found that DOE has not developed criteria on what type of nonmandatory training is appropriate. A DOE training official agreed, saying that there is a lot of “gray area” between what training is appropriate and not appropriate within the Department. Some nonmandatory training is beneficial for career growth and professional development, such as courses on effective writing and oral presentation skills. However, the benefits of other nonmandatory training seemed less clear. For example, one location offered a course to employees facing mid-life questions, another offered a course on determining social styles in the workplace, and a third offered a course on defensive driving. According to DOE training officials, while the Department estimated that only 10 percent of its training funds are spent on mandatory training, this estimate had not been confirmed by a detailed analysis. Furthermore, this estimate was only an informed estimate and did not include the training required, for example, by DOE orders. These officials also stated that the type of nonmandatory training offered is generally left up to DOE’s individual offices. Accordingly, DOE has no immediate plans to develop a more accurate estimate or conduct a comprehensive review of nonmandatory training offered across the Department. In 1998, we reviewed the training courses that were independently developed and delivered by DOE contractors at four field locations. The review showed that the cost per employee for these courses varied considerably among the contractors reviewed. 
For example, one course on environmental laws and regulations varied in cost from $72 per employee at one location to $624 per employee at another location. A second course on hands-on fire extinguisher use varied in cost from $2.50 per employee at one location to $102 per employee at another location (see table 2.2). Various factors account for the cost differences shown in the table, including the length of the course and the labor rate used for the instructor who provided the training. For instance, the course on environmental laws and regulations varied in length from 4 to 24 hours, and the course on hands-on fire extinguisher use varied in length from 15 minutes to 3 hours. Consequently, employees attending these courses received a dissimilar level of training, depending on the location. For some courses, for instance, Rocky Flats used an outside vendor to provide its training at a very favorable labor rate. In response to the problems associated with the independent development and delivery of training, DOE has been working since 1995 to standardize training courses that are generally applicable across the Department. DOE foresaw a number of benefits to be derived from standardization, including an overall reduction in training costs and staff, the establishment of a consistent knowledge base among employees, and the elimination of redundant training. In 1997, however, DOE abandoned its proposal to standardize training. At that time, DOE officials indicated that such a standardization effort was too comprehensive in scope in view of the more than 21,000 training courses in the DOE training community. DOE officials said the Department will continue efforts to standardize training by developing a listing of all DOE courses, called the Universal Catalog, and establishing centers of excellence on selected topics. As of December 1998, neither effort has been successful in standardizing training. 
The Universal Catalog was only 35-percent complete and more than 1 year behind schedule for completion. In addition, only two centers of excellence had been established, although DOE had planned to designate four centers of excellence by the end of the year. According to a DOE training official, competing DOE priorities precluded the Department from fully funding and making greater progress on both efforts. DOE can improve budgeting and reduce spending on training. In the budgetary area, DOE has not successfully completed any of the critical steps needed to develop sound and defensible training budgets. Because DOE has not completed these steps, its training budgets are not directly tied to the training needs of the Department. DOE also has not taken a number of actions to reduce its training expenditures. It has not developed criteria on what type of nonmandatory training is appropriate within the Department, which has led to a wide range of nonmandatory training courses being offered. DOE’s decentralized training structure has also led to the independent development and delivery of training courses by DOE’s headquarters offices, field offices, and contractors. In regard to budgeting, DOE has not conducted a comprehensive assessment of occupational training needs throughout the Department to better understand its training needs. Certain occupational groups would benefit from such an assessment, most notably those involved in program management, property management, and contractor oversight tasks. In addition, DOE has not completed an IDP for all employees required to have one by DOE order. DOE training officials have established a goal of completing IDPs for 90 percent of DOE employees by December 31, 1999. However, without some other impetus, such as holding managers accountable for ensuring that their staff complete IDPs, it is difficult to see how establishing a goal will have any more success than the requirements already contained in a DOE order. 
Finally, DOE offices have either not completed annual training plans or not completed them properly. According to DOE, the annual training plan provides the basis for any request for budget funds. Opportunities also exist for DOE to reduce its training costs. Specifically, DOE has neither developed criteria on what type of nonmandatory training is appropriate nor reviewed the thousands of nonmandatory training courses offered using such criteria. In addition, DOE has not standardized the development and delivery of training courses that have general application across the Department. This has produced unnecessary and duplicative training courses throughout DOE. To improve the process for setting the training budget, we recommend that the Secretary of Energy require (1) the expeditious completion of a comprehensive occupational training needs assessment throughout the Department (where the assessment process cannot be expedited, priorities should be set for the order in which occupational groups will be assessed); (2) the completion of IDPs for all departmental employees required to have one by DOE order; and (3) the completion of annual training plans as required by DOE order. To reduce spending on DOE training, we recommend that the Secretary of Energy require (1) the establishment of criteria for what type of nonmandatory training is appropriate, together with a review and elimination of nonmandatory training courses given across DOE that do not meet those criteria; and (2) the standardization of the development and delivery of training that has general application across DOE. DOE agreed with our recommendations, except for the recommendation that the Department expeditiously complete a comprehensive assessment of occupational training needs. In this regard, DOE indicated that it had already completed such an assessment for certain occupational groups and initiated a new program to rebuild a talented and well-trained corps of research and development program managers.
Furthermore, DOE stated it will continue conducting these assessments as funding constraints and departmental priorities allow. While we are encouraged by the actions that DOE has already taken, we are concerned that funding constraints and/or other departmental priorities may, in some way, hinder the completion of a comprehensive occupational needs assessment. As we pointed out in this report, the lack of skilled staff is one of the most fundamental problems in the Department. Accordingly, we continue to believe that DOE should expeditiously complete a comprehensive assessment of occupational training needs. In addition, the Department disagreed with our use of the concept of nonmandatory training and with our discussion of whether excessive nonmandatory training takes place in the Department. DOE indicated that internal DOE directives as well as professional and international standards also impose significant training requirements upon the Department. DOE commented that, while this training is not normally defined as “mandatory” by externally imposed laws or regulations, it is required and does promote efficient as well as safe work practices. Nonetheless, DOE concurred in the benefits of reviewing training courses periodically and stated it is in the process of revising internal guidelines to better assess training, including the nonmandatory training that is given. DOE’s November 1998 draft training plan represents the Department’s most recent attempt to improve its training. The plan lays out a strategy for training DOE employees over 3 fiscal years (1999 through 2001). However, it has several shortcomings. 
Specifically, the plan does not (1) realistically estimate the overall costs to implement the plan and the overall savings to be achieved from it; (2) explain how DOE's decentralized training resources will be committed to finance the plan; (3) present a DOE policy regarding the use of the Department's centers of excellence; and (4) identify the steps necessary to improve contractor training performance. DOE training officials told us they were aware of these shortcomings and intend to address each of them before a final training plan is issued. In May 1995, the Department reviewed its training program and found a number of problems. The problems cited by the review included duplication and waste associated with the development and delivery of both federal and contractor training and a lack of consistency in the training provided across the Department. The review concluded, among other things, that if a DOE-wide training program were developed, tens of millions of dollars in annual training costs could be avoided. In response to the 1995 DOE review, the Department issued a strategic plan in July 1995 to improve federal employee training. DOE indicated that it intended to eventually develop a similar document to improve training for its contractors. Since its issuance, the strategic plan has had some success. For instance, DOE has established a new training structure that includes, for example, the Training and Development Management Council, which is responsible for overseeing the efforts to improve DOE's training program. In addition, DOE has established two training centers of excellence. On the other hand, DOE has not achieved many of the goals established by the strategic plan. For instance, DOE had intended to reduce by 50 percent the number of duplicate training courses offered by it and its contractors. According to DOE officials, the Department must first enter all training courses into a central database before it can analyze courses and reduce redundancy.
In July 1997, DOE decided to terminate its strategic training plan, recognizing that it had not been entirely successful, and replace it with a new training plan. DOE began drafting this new training plan in November 1997 and intends to make the plan final early in calendar year 1999. With the new training plan, DOE believes that further reductions in training expenditures are possible. In that regard, the plan contains 18 performance expectations to be accomplished. Those expectations include, for instance, (1) having DOE's average training expenditures per employee be in alignment with similar federal agencies and the private sector by December 31, 1999; (2) not having DOE fund the development of duplicate training courses as of December 31, 1999; and (3) establishing six training centers of excellence by December 31, 2000. According to DOE's new training plan, it is important that DOE estimate the overall dollar savings to be realized from the plan. Such an estimate, DOE training officials believe, is necessary to obtain the support needed from senior DOE management and the funding needed from the Congress. We found, however, that the plan provides a limited projection of the overall costs to implement the plan and no overall estimate of the cost savings to be realized from it. Instead, the plan only provides certain indications of the cost savings that are possible. However, these estimated cost savings are overstated. For that reason, it is unclear whether the plan's savings will exceed its costs. In the draft plan, DOE estimates that about $2 million will be needed over fiscal years 1999 through 2001 to implement the performance actions contained in the plan. DOE also acknowledges that this overall estimate is understated. It states that cost estimates have not yet been made final for certain key portions of the plan, including the implementation of a DOE-wide training information system and a technology-supported learning program.
In a March 1998 submission to the Congress, DOE estimated that the costs for these two portions for fiscal years 1999 through 2001 would be $3.8 million and $3.4 million, respectively. However, no fiscal year 1999 funding was appropriated for these two portions. Conversely, DOE provides no overall estimate of cost savings for the 3-year period covered by the plan. Instead, DOE intends to wait and see what cost savings the plan will generate. In the plan, nevertheless, DOE points out that about $3 million in savings were realized during fiscal year 1998 from several initiatives supported by the plan. Our review determined that these savings are overstated. For example, the $3 million savings is based, in part, on reported cost savings of about $1.7 million by DOE’s National Environmental Training Office in Aiken, South Carolina, for developing training courses that were then used at other DOE locations. We found, however, that the $1.7 million in savings was not offset against the approximately $1.9 million in costs to operate the Training Office in 1998. DOE training officials told us they will reevaluate and validate the cost data before the plan is made final. The director of the Training Office added that it must be recognized that the Training Office is only in its start-up phase and an immediate return on investment cannot be expected. DOE’s headquarters offices, field offices, and contractors all have their own training programs and training budgets. For DOE’s training plan to be successful, according to DOE, support and funding will be needed from offices throughout the Department. We found, however, that the plan does not explain how or according to what formula these DOE offices will be asked to commit funds to finance the plan. Moreover, we found that few DOE offices have actively participated in the development of the performance expectations contained in that plan. 
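The overstatement in the fiscal year 1998 savings figure discussed above can be shown with simple arithmetic. The sketch below uses only the rounded dollar amounts cited in this chapter, in millions; it is an illustration, not an audited calculation.

```python
# Illustrative check of the fiscal year 1998 savings claim; all figures are
# the rounded amounts cited in this chapter, in millions of dollars.
claimed_total_savings = 3.0    # savings DOE attributed to plan initiatives
training_office_savings = 1.7  # reported by the National Environmental Training Office
training_office_costs = 1.9    # 1998 cost of operating that office, never offset

# Netting the office's operating costs against its reported savings turns
# that component negative ...
net_training_office = training_office_savings - training_office_costs

# ... and pulls the supportable total well below the $3 million claimed.
adjusted_total = claimed_total_savings - training_office_savings + net_training_office

print(round(net_training_office, 1))  # -0.2
print(round(adjusted_total, 1))       # 1.1
```

As the figures suggest, once the Training Office's operating costs are offset, that component of the claimed savings is a small net loss for the year, which is consistent with the director's point that a start-up operation cannot yet show a return on investment.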
Thus, when the plan is completed, it is unknown whether support and funding will be available throughout the Department for the plan. According to DOE, each office within the Department is responsible for implementing the plan and will be held accountable for carrying out the expectations in it. In addition, each office will commit resources to ensure that the performance expectations in the plan are met. The plan does not specify, however, how, or according to what formula, these offices will be asked to commit resources to finance the plan. Instead, the plan indicates that DOE's Training and Development Management Council will determine sometime in the future how the plan will be funded. While each office is responsible for the plan's implementation, few offices have actively participated in the development of the performance expectations contained in it. According to the minutes of training plan meetings, representatives from only six of DOE's principal offices have volunteered to take the lead in developing any of these performance expectations. DOE training officials also told us they did not foresee participation from any more offices. Once the training plan is completed, the Training and Development Management Council intends to forward the plan to the Secretary of Energy for endorsement. According to DOE training officials, the Secretary's endorsement may help offices throughout the Department that did not participate in the plan's development to accept its contents. However, how the plan will be funded is not discussed in the plan. A central feature of DOE's training plan is the creation of centers of excellence. The mission of these centers is to provide high-quality training on a topical area that cuts across the entire Department. By operating the centers of excellence, DOE intends to eliminate the duplication of training.
We found, however, that the training plan does not present a policy on the centers’ use or mandate that the centers will be the sole source for training on a topical area. Without that mandate, there is no assurance that duplication of training will be eliminated by the centers. Furthermore, DOE’s draft training plan provides little information on the centers-of-excellence concept. According to the training plan, two centers of excellence were successfully launched in December 1997. On the basis of that success, the plan indicates that further actions are planned. These include (1) forming a panel of experts to review applications to become a center of excellence, (2) recommending topical areas for center-of-excellence designation, and (3) developing general operating principles and means to evaluate the operating centers of excellence. The training plan indicates that four additional centers of excellence will be established by the end of fiscal year 2000. However, the training plan does not articulate a policy on, or mandate the use of, the centers within the Department. Absent that mandate, we found that one of the centers has separately delivered training courses on subjects that already existed within the Department. For example, during fiscal year 1998, the National Environmental Training Office delivered a 3-day course on Environmental Laws and Regulations. We determined that a similar course of comparable duration already existed elsewhere within DOE. For example, contractors at both DOE’s Oak Ridge Operations Office and Rocky Flats Field Office offer a 3-day course on Environmental Laws and Regulations. In commenting on this matter, the director of the training office said that DOE and DOE/contractor training organizations have historically worked independently. Therefore, it will take some time for these very same organizations to work more closely together. 
The director added that the training office, nevertheless, has had tremendous success during its first year in forming partnerships with various DOE locations to eliminate duplicate training. Furthermore, the training office’s newer courses are not being duplicated and in fact are being requested throughout DOE. According to DOE data, the Department spent about $322.2 million on training contractor employees during fiscal year 1997. Despite this large investment in its contractors and the documented problems in contractor training identified in DOE’s 1995 review of training, the Department’s draft training plan does not identify the steps necessary to improve contractors’ training performance or reduce costs. Instead, according to DOE training officials, the Department will be working with its contractors to improve contractor training through a subsequent installment of the plan. However, we found that DOE has not (1) established a departmental order on developing contractor training programs and budgets; (2) incorporated a standard set of performance measures into its performance-based contracts regarding contractors’ training efficiency and effectiveness; and (3) clarified the roles and responsibilities of DOE offices for the oversight of contractor training departmentwide. DOE training officials told us they were aware that these issues must be resolved and intend to address them in a subsequent installment of the training plan. However, a date for the subsequent installment to the training plan has not yet been established. While DOE’s order on federal employee training contains in-depth information on the administration of federal training, we found that its order on contractors’ human resource management provides considerably less detail on contractor employee training. This latter order only requires that each contractor submit an employee substance abuse and employee assistance program for approval by the appropriate DOE contracting officer. 
It does not, however, discuss the need for or the contents of an employee training program. The order also does not provide any guidance on developing a contractor's annual training budget. Because of these omissions, DOE training officials told us the Department intends to issue a new order pertaining to contractor employee training sometime in the future. A DOE timetable for the issuance of that new order has not been established. We also found that DOE has not developed a standard set of performance measures to promote cost reductions in contractor training departmentwide. In May 1998, we reported that, for four contractors we reviewed, the applicable DOE field locations used various measures during fiscal year 1997 to evaluate contractors' training performance. For example, at the Oak Ridge National Laboratory, DOE included a performance measure in the contract that required the contractor to develop a plan to consolidate all training records into an integrated database. In addition, at the Rocky Flats Field Office, DOE included a performance measure in the contract that required the contractor to fulfill 95 percent of the special requests for training when more than 3 days' notice had been given. Although such measures could improve record keeping and course scheduling, they would not, for the most part, help eliminate unnecessary costs for contractor training or improve training effectiveness. In our review of contractor training, we identified three performance measures that were not being used DOE-wide but could reduce contractor training costs. Specifically, we noted that DOE has not instituted a standard performance measure to take the following actions: Consolidate training operations where multiple DOE contractors or multiple contractor training organizations are present. Such consolidation can substantially reduce costs by eliminating redundant training organizations and redundant training courses.
For example, at one contractor location contacted, the contractor consolidated training that had previously been provided by four separate organizations and reported a cost savings of about $3.3 million the following year. Subcontract (i.e., outsource) training courses to qualified vendors. Outsourcing can reduce the cost for providing contractor training. For example, the contractor at one location contacted outsourced about 65 percent of its training to a qualified vendor at an estimated savings of more than $0.6 million over a 2-year period. Use training course materials from other DOE locations rather than develop courses independently. One contractor, for example, advised us it has no policy or procedures requiring it to consider using materials from other DOE locations before deciding to develop a new training course. We noted that this contractor, in fiscal year 1997, spent over $3.9 million independently developing contractor training courses at its site. Only one of the four contractors we reviewed had performance measures aimed at reducing training costs. We further found that the roles and responsibilities for overseeing contractor training performance departmentwide have not been adequately addressed. According to DOE training officials we contacted, four DOE headquarters offices have some interface with contractors departmentwide—the Office of Human Resources Management, the Office of Contract and Resource Management, the Office of Worker and Community Transition, and the Office of Field Management. None of these offices, however, has responsibility for overseeing contractor training performance. According to an official with the Office of Human Resources Management, this office collects contractor training cost data but has limited contact with contractor training personnel. According to an official with the Office of Contract and Resource Management, this office only reviews contractor employees’ compensation, pensions, and benefits. 
According to an official with the Office of Worker and Community Transition, this office is primarily concerned with contractors’ employee displacement and downsizing programs. According to an official with the Office of Field Management, this office may deliver training on a particular subject to both federal and contractor employees in the field. None of these DOE offices indicated, however, that they review the contractor training courses offered or the contractor training budgets. DOE training officials agreed that the steps outlined above could improve contractor training. These officials also told us that the training plan will be revised to be applicable to DOE’s contractor workforce. In addition, specific performance objectives and measures will be included in the plan. Furthermore, the DOE order on contractor employee training will be revised to include a chapter that will assign responsibility and provide guidance for developing, monitoring, and evaluating training for departmental contractors. DOE’s new training plan represents the Department’s vision of the improvements needed in federal employee training for fiscal years 1999 through 2001. However, as currently drafted, the plan contains shortcomings. First, it does not provide a realistic estimate of the overall costs and overall savings associated with its new training plan. According to DOE training officials, such an estimate is necessary to obtain the support needed from senior DOE management and the funding needed from the Congress. Second, the plan does not explain how DOE’s decentralized training resources will be committed to accomplish the plan. At present, few DOE offices have actively participated in developing the performance expectations contained in the plan. Whether DOE offices that have not been actively involved in the plan will financially support it, when completed, remains to be seen. 
Third, the plan does not present a policy regarding the use of the Department's centers of excellence. The centers are a central feature of the training plan. By operating the centers, DOE intends to eliminate the duplication of training within the Department. However, the plan does not present a policy on the use of the centers or mandate that the centers be the sole source for training within the Department on a topical area. Finally, even though DOE spent about 85 percent of its training budget for fiscal year 1997 on training contractor employees, DOE's training plan does not address what steps should be taken to improve contractor employee training. Because of these shortcomings, the plan will not provide DOE with a reliable roadmap for the future, as intended. DOE officials told us they plan to correct these shortcomings, but it is not clear exactly how they will do this. To improve DOE's new training plan, we recommend that the Secretary of Energy require that the plan include (1) a realistic estimate of the overall costs to implement the plan and the overall savings to be achieved; (2) an explanation of how DOE's decentralized training resources will be committed to finance the plan; (3) a policy regarding the use of the Department's centers of excellence; and (4) an identification of the steps necessary to improve contractor training performance. At a minimum, those steps should include (1) establishing departmental guidance on the development, monitoring, and evaluation of contractor training programs and budgets, (2) incorporating a standard set of performance measures regarding training into its performance-based contracts, and (3) clarifying the roles and responsibilities for the oversight of contractor training performance departmentwide. DOE concurred with the overall direction and intent of these recommendations. Among other things, DOE said that, as part of the plan, it will provide estimates of costs and savings in implementing the training plan.
In addition, DOE said it will develop a policy on the use of the centers of excellence. Finally, DOE will add a new chapter to an existing DOE order to clarify DOE's oversight roles and responsibilities for contractor training and provide performance-based contractor training objectives and measures to be incorporated into major contracts as they are renewed and offered for competitive bidding.

Pursuant to a congressional request, GAO provided information on the Department of Energy's (DOE) training program, the problems associated with the program, and the changes needed to address those problems, focusing on: (1) DOE's current process for setting its training budget; (2) opportunities to reduce the costs associated with DOE's training program; and (3) DOE's draft plan for training its employees in the future. GAO noted that: (1) DOE has not completed any of the critical steps identified in the Office of Personnel Management's and its own guidance that lead to the development of a sound and defensible training budget; (2) for instance, DOE has not defined the training needs for various occupations, including program managers and contractor oversight specialists; (3) in addition, DOE employees have generally not completed individual development plans, and DOE offices have generally not prepared annual training plans; (4) DOE could reduce its training costs by eliminating certain nonmandatory training and reducing duplicative and nonstandardized training across the Department; (5) about 90 percent of DOE's training, according to a departmental estimate, is not mandated by laws or regulations, but DOE has not developed criteria on the type of nonmandatory training that is appropriate; (6) as a result, DOE offers a wide range of nonmandatory training courses, such as a course on determining social styles in the workplace and one on employees facing mid-life questions; (7) furthermore, because DOE and its contractors independently develop and deliver training, duplicate courses exist and
nonstandardized training occurs across the Department; (8) DOE's draft training plan has several shortcomings that may preclude it from improving departmental training over fiscal years 1999 through 2001, as intended; (9) for example, the draft plan does not realistically estimate the overall costs and overall savings that will result from the plan, explain how the plan will be financed given DOE's decentralized training resources, or show how DOE's training centers of excellence will eliminate duplicative training, as intended; (10) moreover, even though DOE spent about 85 percent of its fiscal year 1997 training expenditures on contractor employees, the draft training plan does not address the steps necessary to improve contractor training; and (11) DOE officials stated that they are aware of these shortcomings and intend to address them in the final plan.
The federal government uses grants, along with other policy tools, such as direct services and loans, to achieve national priorities through nonfederal parties, including state and local governments, educational institutions, and nonprofit organizations. Through grants, the federal government implements over 1,200 different programs at over 28 federal departments and agencies. These programs awarded funding to over 60,000 grantees. The Catalog of Federal Domestic Assistance (CFDA) provides descriptions of these grant programs, as well as other domestic assistance programs. The design and implementation of federal grants vary. For example, grant programs generally use one of three ways to award funding to grantees. Formula grants award funds based on distribution formulas prescribed by legislation or regulation. Project grants generally award funding for specific periods or specific projects, products, or services. As a third method, some grant programs award funds using a hybrid of formula and project-based awarding methods. In addition, federal agencies use a variety of organizational approaches to implement grant programs. Some agencies administer many grants through multiple, decentralized components, while other agencies have small, centralized grant-making offices that administer only a few grant programs. Federal grants are typically subject to a wide range of requirements derived from a combination of program statutes, agency regulations, and other guidance. They are also subject to many crosscutting requirements that apply to most federal assistance programs, including statutory provisions applicable to recipients of federal funds and administrative requirements such as audit and record keeping and the allowability of costs. As a general rule, grant programs are governed by detailed legislation as well as implementing regulations issued by the responsible agency.
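The formula-based awarding method described above can be sketched as a simple proportional allocation. The program, dollar amounts, grantee names, and statutory factor (population) below are all invented for illustration and are not drawn from any actual grant statute.

```python
# Hypothetical formula-grant allocation: each grantee's share of the
# appropriation is proportional to its share of a statutory factor
# (population, in this invented example).
def allocate_formula_grant(appropriation: float, factors: dict[str, float]) -> dict[str, float]:
    total = sum(factors.values())
    return {grantee: appropriation * f / total for grantee, f in factors.items()}

# Invented figures: a $10 million appropriation split among three states.
awards = allocate_formula_grant(
    10_000_000,
    {"State A": 5_000_000, "State B": 3_000_000, "State C": 2_000_000},
)
print(awards)  # State A receives half, matching its half of the population
```

Real statutory formulas are usually more elaborate (multiple weighted factors, minimum allotments, hold-harmless provisions), but the proportional core shown here is the common pattern.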
Prior to 1988, each agency issued regulations to govern its grant management, and OMB Circular No. A-102, Grants and Cooperative Agreements With State and Local Governments, also provided some governmentwide guidance for grants to state and local governments. OMB Circular No. A-110, Uniform Administrative Requirements for Grants and Agreements With Institutions of Higher Education, Hospitals, and Other Non-Profit Organizations, provided some guidance for grants to other types of grantees like hospitals and other nonprofit institutions. In 1987, a memorandum from the President directed OMB to revise Circular No. A-102 to specify uniform, governmentwide terms and conditions for grants to state and local governments, and directed executive branch departments and agencies to propose and issue common regulations adopting these terms and conditions verbatim, modified where necessary to reflect inconsistent statutory requirements. Pursuant to this direction, the first iteration of what has come to be known as the “common rule” system was published on June 9, 1987. There are currently a number of OMB circulars on grants, which provide guidance only to federal (grantor) agencies; they do not apply directly to grantees. Therefore, each federal agency has issued largely identical sets of regulations that prescribe requirements that are binding on their grantees. These regulations are referred to as the “common rules.” Each agency’s common rule regulations are codified in the Code of Federal Regulations. Grant programs also share a common life cycle for administering the grants: announcement of grant opportunity, application, award, postaward, and closeout. During the award stage, the federal awarding agency enters into an agreement with the grantee. The grant agreement stipulates the terms and conditions for the use of grant funds such as the period of time funds are available for the grantee’s use, as noted by a start and end date. 
In addition, the awarding agency establishes an account in a federal payment system to execute payments to the grantee. During the postaward stage, the grantee carries out the requirements of the agreement and requests payments, while the awarding agency approves payments and oversees the grantee. The Payment Management System (PMS), operated by HHS, went online in 1984 and, as of 2006, was the largest of the nine civilian federal payment systems. The system, which handled about 70 percent of all federal grant disbursements in 2006, serves nine federal departments, an independent agency, a government corporation, and ONDCP. Appendix I provides a description of PMS and appendix II provides a recent list of department- and agency-level PMS customers. According to HHS, PMS is a full-service centralized grants payment and cash management system. The system is fully automated to receive payment requests, edit them for accuracy and content, transmit the payment to either the Federal Reserve Bank or the U.S. Treasury for deposit into the grantee’s bank account, and record the payment transactions and corresponding disbursements to the appropriate accounts. Federal agencies pay HHS a service fee for maintaining accounts and executing payments through PMS. PMS continues to charge agency customers a servicing fee until an account is closed. When the grantee has completed all the work associated with a grant agreement or the end date for the grant has arrived, or both, the awarding agency and grantee close out the grant. Closeout procedures ensure that grantees have met all financial requirements, provided their final reports, and returned any unexpended balances. 
To close out a grant, federal regulations generally require that the awarding agency ensure the grantee has completed all work and that the grantee settle (liquidate) all obligations within 90 days after the grant end date; that the grantee submit all final financial, performance, and other reports within 90 days of the grant end date; and that the grantee request an extension of the reporting deadline from the awarding agency, if needed. These requirements apply when the awarding agency has specified a funding period for the grant (a start and end date) and has prohibited the grantee from having carryover balances. In this report, we refer to grants that were not closed after their end date as “expired” grants and to PMS grant accounts that remained open after the grant’s end date as “expired grant accounts.” PMS issues a quarterly report to its customers, referred to as the “closeout report,” listing expired grant accounts that remain open; for each account, the report includes data on the funds made available and the amount of funds disbursed (i.e., “drawn down” or “charged”). GAO recommended that HHS develop and distribute this type of report in 1987. HHS lists an account on a quarterly PMS closeout report if the account’s end date is at least 3 months past and there has been no disbursement in the preceding 9 months. The closeout is an important grant management procedure because it is the final point of accountability for grantees. An undisbursed balance in an expired grant account can be an indication of a potential grant management problem. Grantees that do not expend their funding may not be meeting the program objectives for the intended beneficiaries. These balances may also suggest that awarding agencies or grantees, or both, may not be managing the funding efficiently or effectively. Effective grants management, including the completion of grant closeout, increases the likelihood that awarded grants contribute to agency goals. 
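The closeout-report listing rule described above (an expired end date plus a long gap since the last drawdown) can be sketched as a simple filter. This is a hypothetical illustration only — the field names, and the approximation of 3 months and 9 months as 90 and 270 days, are assumptions, not HHS's actual PMS logic:

```python
from datetime import date, timedelta
from typing import Optional

def is_listed_on_closeout_report(end_date: date,
                                 last_disbursement: Optional[date],
                                 as_of: date) -> bool:
    """Approximate the PMS closeout-report rule: list an expired account
    if its end date is at least ~3 months past and there has been no
    disbursement in the preceding ~9 months (windows approximated in days)."""
    end_date_old_enough = as_of - end_date >= timedelta(days=90)
    no_recent_disbursement = (
        last_disbursement is None
        or as_of - last_disbursement >= timedelta(days=270)
    )
    return end_date_old_enough and no_recent_disbursement

# Grant ended 2006-01-31, last drawdown 2005-12-15, checked as of 2006-12-31
print(is_listed_on_closeout_report(date(2006, 1, 31),
                                   date(2005, 12, 15),
                                   date(2006, 12, 31)))  # → True
```

An account with a recent drawdown would not be listed, even long after its end date, which is why a quarterly report of this kind surfaces dormant rather than merely old accounts.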
An agency or grant program office can track its performance in closing grants and other grant management procedures using a variety of measures. In this report, we use the amount of undisbursed funding to assess one aspect of the performance of the expired grants—their financial status. Other types of measures track other aspects of performance, such as the grants’ service quality and customer satisfaction. The amount of undisbursed funding measures the amount of funds remaining potentially available for deobligation. Agencies report to the President and Congress regarding their strategic plans and actual program performance, including, among other things, progress on improving grants management and other management initiatives under the auspices of the Government Performance and Results Act of 1993 (GPRA). GPRA is part of a statutory framework that seeks to create a more focused, results-oriented management and decision-making process within both Congress and the executive branch. The act requires federal agencies to develop strategic plans with long-term strategic goals, annual goals linked to achieving the long-term goals, and annual reports on the results achieved. An agency’s annual performance plan contains the annual performance goals and associated measures for its programs, as well as mission-critical management problems identified by the administration, the agency’s financial audit, and other agency assessments. An agency’s annual performance report compares its performance against its goals, summarizes the findings of program evaluations completed during the year, and describes the actions needed to address any unmet goals. OMB is responsible for providing guidelines to agencies on preparing their plans and reports, and for receiving and reviewing agencies’ strategic plans, annual performance plans, and annual performance reports. Currently, most agencies report on their annual performance in their PAR. OMB Circular No. 
A-11, Preparation, Submission and Execution of the Budget, provides guidelines on the content of the performance accountability portion of the PAR, while Circular No. A-136, Financial Reporting Requirements, provides guidance on the content of the financial accountability portion of the PAR. In our review of the closeout data for expired grants that executed payments through HHS’s PMS, we found the quarterly amount of undisbursed funding reported as remaining in expired grant accounts increased from about $600 million in 2003 to about $1 billion during 2006. These balances typically represented about 1 percent of the total funds made available for all expired grants in PMS during this period. This proportion included expired grant accounts with a zero undisbursed balance (no undisbursed funding) and expired grant accounts with a positive undisbursed balance (undisbursed funds remaining). Once we excluded expired grant accounts with a zero balance from the calculation and narrowed our focus solely to expired grant accounts with undisbursed balances, the proportion of undisbursed funding relative to total funds made available increased substantially. Among this smaller set of expired grant accounts, we found the undisbursed funding ranged between an average of 14 and 26 percent of the total funds made available for these grants. We found that, among PMS customers, numerous federal agencies and grant programs had expired grant accounts containing undisbursed funds. When we analyzed the quarterly PMS closeout data for 2003 through 2006, we identified two sets of expired grants accounts. One set consisted of expired grant accounts for which all of the funds made available had been disbursed, but still had not been closed. As stated earlier, grant accounts remain open in PMS, and HHS continues to charge service fees to the awarding agencies for maintaining accounts and executing payments, until the awarding agencies indicate to HHS that the account can be closed. 
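The two undisbursed-share calculations described above — one over all expired accounts (including fully disbursed ones) and one restricted to accounts with a positive undisbursed balance — can be sketched as follows. The dollar figures are hypothetical, chosen only to illustrate how the two denominators produce very different proportions; they are not actual PMS data:

```python
def undisbursed_share(accounts, positive_only=False):
    """accounts: list of (funds_made_available, funds_disbursed) tuples.
    Returns undisbursed funds as a fraction of funds made available,
    optionally restricted to accounts with a positive undisbursed balance."""
    rows = [(avail, disb) for avail, disb in accounts
            if not positive_only or avail - disb > 0]
    total_available = sum(avail for avail, _ in rows)
    total_undisbursed = sum(avail - disb for avail, disb in rows)
    return total_undisbursed / total_available

# Hypothetical expired accounts (dollars): most funding fully disbursed
accounts = [(5_000_000, 5_000_000), (500_000, 500_000),
            (200_000, 160_000), (100_000, 80_000)]
print(round(undisbursed_share(accounts), 3))                     # ~1 percent over all accounts
print(undisbursed_share(accounts, positive_only=True))           # 20 percent among positive-balance accounts
```

Because fully disbursed accounts dominate the total funds made available, including them dilutes the ratio; excluding them, as in the report's second calculation, isolates how much of the money behind problem accounts actually went unspent.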
Thus, even though all grant funds have been disbursed, these grant accounts are continuing to cost the awarding agency through accumulated PMS service fees. Moreover, the presence of expired grant accounts at the awarding agency suggests more than a minor administrative oversight. It suggests that the final point of accountability for these grants, which includes such important tasks as the submission of financial and performance reports, was not completed. We identified a second set of accounts that included those expired accounts reported as still having an undisbursed balance. On the basis of our review of expired grant accounts with undisbursed balances, we found that, from March 2003 through March 2005, the quarterly totals of undisbursed funding ranged between an average of 14 and 16 percent of the funding made available for the grants. However, from June 2005 through December 2006, the quarterly balances of undisbursed funding for these expired grants were near $1 billion, ranging between an average of 24 and 26 percent of the funds made available. These results are for grant accounts with specific time limits and do not include grant accounts that do not have specific time limits, such as TANF or Medicaid, because without a specific time limit, the grants, once awarded, do not expire. The PMS closeout data results described in the remainder of this report only pertain to the set of expired grant accounts with undisbursed balances, unless otherwise noted. As stated previously, in 2006 PMS was the largest of the nine federal payment systems, handled about 70 percent of federal grant disbursements, and served nine federal civilian departments, an independent agency, a government corporation, and ONDCP. In analyzing the expired grant accounts with undisbursed balances in the PMS closeout data from 2003 through 2006, we found these accounts were not confined to a few federal awarding agencies, grant programs, or grantees. 
Instead, we found that, in 15 of the 16 quarters, at least four of the federal departments using PMS had over 100 expired grant accounts with undisbursed funding. Lastly, we found that over 325 different programs administered the expired grant accounts with undisbursed funding and that thousands of grantees were associated with these grants. We analyzed the quarterly balances of undisbursed funding over 4 calendar years, from 2003 through 2006, according to four program characteristics: the size of the funding originally made available to the grantee; whether program funding was awarded based on a formula or on a project basis; the grantee organization (entity) receiving the grant; and whether the program required the grantee to make a contribution to support the grant activity. We selected these four characteristics because they are fundamental elements of grant design that could be readily analyzed using the information from the PMS and the CFDA data sets. When we compared the undisbursed balances among the types within each of the four program characteristic categories, we found that, for the first three characteristics, certain types of grants consistently had the largest quarterly balances. The largest quarterly balances of undisbursed funding were consistently among expired grant accounts with neither the smallest nor the largest funding awards, but rather awards in the mid-range—that is, funding awards from over $100,000 to $100 million. We found that accounts with program funding awarded on a project basis had larger undisbursed balances than those awarded on a formula basis. Lastly, we also found that accounts with grants awarded to a state organization consistently had larger undisbursed balances than those awarded to other types of grantees. 
However, these results cannot be compared to the program characteristics of all closed federal grants or closed grants with payments processed through PMS from 2003 through 2006, due to the burden of collecting comparable data from eight other federal civilian payment systems or for all closed grants in PMS. Without comparative data, we cannot know whether the program characteristics for these expired grant accounts represented a disproportionate share, compared to all closed federal grants or all closed grants in PMS. Appendix I provides further information on our methodology and program characteristics findings. In our review of past audit reports, we observed that the reports generally focused on expired grants in specific agencies or grant programs. We also found that, when taken together, they suggested the presence of undisbursed balances in expired grant accounts was a long-standing problem. We and agency Inspectors General (IG) have reported for years that specific grant programs or awarding agencies have had expired grant accounts with undisbursed funding. Moreover, by synthesizing the observations from these reports, we found that these grants shared common grants management problems. In recent years, three federal agencies, the Department of Justice’s (DOJ) Office of Justice Programs (OJP), HHS’s National Institutes of Health (NIH), and the Environmental Protection Agency (EPA), have made concerted efforts to improve their grant closeout processes. In 2006 and 2007, several auditors highlighted grants management problems as mission critical in their agency’s Performance Accountability Report (PAR). In 2006, EPA went further and reported a financial performance measure to track the agency’s progress in closing grants. We have reported that the timely closeout of expired grants was a problem at various agencies over the past three decades. 
In two recent examples, we reported that the State Department’s (DOS) United States Agency for International Development (a PMS customer) did not routinely follow prescribed closeout processes to identify and recover inappropriate expenditures or undisbursed funds and that EPA (an agency that does not use PMS) closed out only 37 percent of grants in fiscal year 2005 within 180 days after the grant project ended as required by its own policy. IG reports identified a variety of awarding agencies or programs with closeout problems dating back to 2000. Maintaining undisbursed balances in expired grant accounts may prevent the deobligation of funding or expose the funding to improper spending or accounting. For example, the DOJ Office of Inspector General (OIG) reported in 2006 that $172 million in undisbursed funding could have been deobligated and that several million dollars in funding used from expired grant accounts was either unallowable or unsupportable. Audit reports identified several awarding agencies or programs with closeout problems. They generally attributed the problems to inadequacies in awarding agencies’ grant management processes, including closeouts as a low management priority, inconsistent closeout procedures, poorly timed communications with grantees, or insufficient compliance or enforcement. While we reviewed several audit reports examining closeout problems, this section summarizes examples from an HHS OIG and DOJ OIG report, and a GAO report on EPA—all issued within the last 3 years. Auditors indicated that grant closeouts were a low priority, at either the grantee organization or federal agency, which contributed to delays in grant closeouts. The audit reports described closeouts as a low priority in the context of staff-related issues. NIH and EPA reported that grantee staff resources were limited, staff were overburdened with other responsibilities, and staff considered grant closeout a low priority. 
NIH, DOJ, and EPA officials reported similar problems among agency grant staff. Staff turnover, at either the agency or the grantee organization, also led to lapses in the supervision of grants and the transfer of grant-specific information to new staff. Agency staff also reported that delaying grant closeout added to staff workload. For instance, NIH and EPA reported that as time elapsed it became more burdensome for staff to close out an expired account. Auditors noted that grant offices did not always have consistent grant closeout procedures, such as due dates for closeout completion. For example, we reported that EPA used closeouts to ensure that grant recipients had met all financial requirements and had provided final reports, and that any unexpended balances were returned to the agency. EPA’s policy stated that closeouts should occur within 180 days after the grant’s project end date. However, agency officials did not always comply with this policy—in fiscal year 2005 EPA closed out only 37 percent of the grants within 180 days. In its 2006 report, the DOJ OIG reported that two of the three DOJ grant offices had a deadline for closing out grants, while the third office did not; that each of the three DOJ grant offices conducted the closeout process differently; and that each office had undefined and undocumented “workarounds” that had evolved over time. Auditors reported that agency communication with grantees, either the content or the timing of the communication, also delayed grant closeouts. The communication of inconsistent policies and procedures contributed to grantee confusion, especially for grantees who work with multiple federal programs or offices. For instance, DOJ reported OJP grantees, especially those who dealt with multiple offices, were confused by the variation in language, time frames, requirements, and communications. 
Auditors found that the mistiming of agency closeout reminders, or the lack of such reminders, also contributed to delays in report submissions. For example, NIH reminded its grantees about their closeout reporting a year ahead of time, too far ahead to serve as a timely reminder. Lastly, auditors also noted awarding agencies were not enforcing their closeout requirements through the application of controls, corrective actions, or penalties. For example, EPA grant officials told GAO they had no realistic options for taking strong action against grantees, usually state governments, for submitting late reports because the states had continuing grants for environmental programs. The HHS OIG found NIH program guidelines provided few specifics about what type of corrective actions were appropriate and when the grant office should apply the actions. The OIG noted that NIH grant offices could impose special award conditions on the grantees, such as additional monitoring or the withholding of future funding. However, the OIG found that grant officials rarely resorted to withholding future funding from a grantee for late closeout because agency officials thought this penalty was too severe and would slow down future project development. In response to auditors’ concerns, three federal agencies, DOJ’s OJP, HHS’s NIH, and EPA, undertook actions to improve their grant closeout processes. To varying degrees, the agencies’ actions included elevating grant closeouts to a higher agency priority in order to improve monitoring, standardization of procedures, communications with grantees, compliance, or enforcement, or a combination of the above. Auditors reported that when federal grant managers took these actions, agencies generally improved the timeliness of grant closeouts, reduced grant closeout backlogs, or improved their ability to identify and deobligate unspent funds from expired grants, or a combination of the above. 
In 2000, DOJ’s OJP initiated a pilot project called “Operation Closeout” to deal with grant closeout backlogs. The agency reported that this initiative accelerated the grant closeout process through revised closeout guidelines and elevated the importance of the closeout function as a required procedure in the administration of grants. Over a period of 6 months, “Operation Closeout” closed 4,136 grants, resulting in over $30 million in deobligated funds. In 2006, the DOJ OIG reported that since 2002 grant closeout was a higher priority within DOJ and that its awarding agencies made improvements in the timeliness of grant closeouts. For example, from 2001 to 2005, OJP reduced its backlog of expired grants from 11,356 to 6,237. The report also recommended, among other things, that OJP establish a performance measure to monitor efficiency and compliance with its closeout process. In 2006, the DOJ OIG also reported that OJP updated the grant monitoring requirements in its Grant Manager’s Manual, automated its Grant Adjustment Notice (GAN) process, shortened its timeline for closeouts from 180 days to 120 days, and addressed the backlog of grants overdue for closure. By automating the GAN process, auditors reported that OJP reduced the time to respond to grant adjustment requests by 10 days and planned to notify grantees of decisions regarding grant adjustment requests through the Grants Management System (GMS). OJP required that its grant staff conduct and document all programmatic monitoring efforts in GMS. To address its grant closeout problems, NIH undertook several corrective actions in 2002 and 2003. The agency stated that it continued to emphasize to grantees that the submission of final closeout reports was an agency priority and to improve agency monitoring. To address its backlog of expired accounts and reduce the burden on monitoring staff, NIH management assigned dedicated staff to resolving the backlog of accounts. 
Corrective actions included creating a database to track receipt of final reports, which allowed NIH to send individualized reminders to grantees of outstanding reports. Another planned action was to provide technical assistance to grantees through general outreach efforts or through targeted follow-up with individual grantees. NIH also established a workgroup and a reminder system to improve grantee compliance with its closeout guidelines. Between 1995 and 2005, EPA efforts led to substantial progress in resolving its backlog of expired grants. By 2005, the agency nearly eliminated its backlog of over 23,000 expired grants accumulated between 1999 and 2003. To continue its efforts and to hold program managers more accountable for grants management, EPA developed a corrective action plan. EPA planned to require all managers and supervisors to complete online grants management training; require baseline monitoring for all grants documented in the agency’s Integrated Grants Management System; and integrate grants with financial data and eliminate duplicate data entry. This plan also included incorporating grants management performance measures into the performance standards of project officers, supervisors, and managers with grants management responsibilities. In 2006 and 2007, eight agencies highlighted grants management problems as a management challenge or concern in their agency’s PAR to the President and Congress. Moreover, DOJ, HHS, and EPA reported to the President and Congress that timely grant closeout was a long-standing grants management challenge. In the 2006 DOJ and HHS PARs, both the IG and the independent auditor specifically addressed grant closeout problems and agency progress in addressing the problems. In each case, the IG listed grant closeouts as contributing to the department’s difficulties with grants management at several of its agencies. 
In response, the departments described both agency-level and departmentwide initiatives to address the problems. In HHS’s 2006 PAR, the independent auditors reported the department had more than 64,000 grants, with a remaining balance of $1.6 billion, eligible for closeout, and that 75 percent of these grants had been expired for more than 2 years. In the HHS 2007 Agency Financial Report, the HHS Inspector General continued to cite grant management, and specifically grant closeouts, as a management challenge. In DOJ’s 2006 and 2007 PARs, the independent auditors highlighted the IG’s findings and explained that the closeout delays contributed to misstatements in the department’s financial statements. In its 2007 PAR, DOJ cited grant management process improvements by several of its program offices but also stated that grant management and closeout continued to be a major challenge. In our review of the 2006 EPA PAR, we found that EPA had a financial performance measure—the percentage of eligible grants closed out—specifically to track the agency’s progress in closing grants. EPA assessed its performance by calculating the percentage of grants closed out in the current year that had a “project end date” in the previous year. In 2005, EPA had a goal of 90 percent grant closure, and it reported in its 2006 PAR achieving a 95 percent grant closure rate. In 2006, we concluded that, while EPA’s performance measure did not assess compliance since it did not reflect the 180-day closeout standard, the measure was a valuable tool for determining whether grants were ultimately closed. As indicated earlier, EPA also planned to incorporate its grant performance measures into performance standards for its grants professionals. EPA’s 2007 PAR reported that the agency had successfully put into place grant management process improvements to correct long-standing problems identified by GAO and the OIG. 
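A closure-rate measure of the kind described above — the share of grants with a project end date in the prior year that were closed by the reporting year — can be sketched as follows. The record layout and the simplified eligibility logic are assumptions for illustration, not EPA's actual methodology:

```python
def closure_rate(grants, report_year):
    """grants: list of dicts with 'project_end_year' and 'closed_year'
    ('closed_year' is None if the grant is still open). Returns the share
    of grants whose project ended in the prior year that were closed by
    the reporting year."""
    eligible = [g for g in grants if g["project_end_year"] == report_year - 1]
    closed = [g for g in eligible
              if g["closed_year"] is not None and g["closed_year"] <= report_year]
    return len(closed) / len(eligible)

grants = [
    {"project_end_year": 2005, "closed_year": 2006},
    {"project_end_year": 2005, "closed_year": 2006},
    {"project_end_year": 2005, "closed_year": None},   # still open
    {"project_end_year": 2004, "closed_year": 2005},   # not eligible this cycle
]
print(closure_rate(grants, 2006))  # 2 of 3 eligible grants closed
```

As the report notes, a measure like this tracks whether grants are ultimately closed rather than whether each closeout met the 180-day deadline, so it complements rather than replaces a compliance check.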
EPA is also developing a new Grants Management Plan that will go into effect in 2008 to replace and update the plan established in 2003. Also in 2007, the EPA OIG removed the agency’s use of assistance agreements, including grants, from its list of EPA’s management challenges. The OIG attributed the removal of these agreements from its list of management challenges to the substantial actions EPA had taken to improve its management of these agreements. The OIG noted that EPA planned to evaluate implementation of its new policies and the OIG would continue to monitor the agency’s corrective actions in this management area. As previously discussed, OMB Circulars No. A-102 and No. A-110 establish standards for consistency and uniformity among federal agencies in the administration of grants through the preaward, postaward, and closeout phases of the grant life cycle, and Circulars No. A-11 and No. A-136 provide agencies with guidance on preparing and submitting their PARs in terms of performance and financial accountability. However, in our review of these circulars as well as selected agency regulations, we found no explicit instruction to agencies to track or report on undisbursed balances remaining in expired grant accounts. Although not explicitly directed to do so by the OMB circulars, we found that the inclusion of undisbursed balances in expired grant accounts in a department or agency’s GPRA documents—as has been done by DOJ, HHS, and EPA—has the potential to raise the internal and external visibility of the problem. As we reported in 2004, developing strategic plans and reporting on progress toward performance goals can lead to cultural changes within an agency. The focus on results can also stimulate internal problem solving and discussions about performance. Externally, OMB and Congress use GPRA documents, like the PAR, in discussions of agency performance and resource allocation. 
The existence of undisbursed grant balances in expired grant accounts may hinder the achievement of program objectives, limit the deobligation of funding for other uses, and expose the funding to improper spending or accounting. Our analysis showed that, taken together, quarterly undisbursed balances for expired grant accounts in HHS’s Payment Management System—which in 2006 handled about 70 percent of all federal grant disbursements—can be significant. Audit reports from agencies not participating in PMS indicate they also have expired grants with undisbursed balances. Data analysis of grant accounts in other federal payment systems may reveal additional expired grants with undisbursed balances. In reviewing audit reports for three agencies, we found that grant closeout processes can improve when they are given a high priority and the agency addresses the multiple causes of closeout problems in a concerted fashion. The financial status of long-expired grant accounts is one aspect of agency performance that has implications for broader program and agency-level performance. By elevating this issue as a management priority in their annual performance plans and PARs, the three agencies made grant closeouts a priority for improving program and agencywide performance. However, OMB circulars relating to grants management and performance reporting do not currently instruct federal agencies to track and annually report on undisbursed funding in expired grant accounts. Given the federal government’s constrained fiscal position, the executive branch could minimize undisbursed funding in expired grant accounts if OMB instructed federal awarding agencies to use their federal financial information systems and GPRA’s performance-reporting infrastructure to track and annually report this information. 
We recommend that the Director, OMB, instruct all executive departments and independent agencies to take the following two actions: (1) annually track the amount of undisbursed grant funding remaining in expired grant accounts; and (2) report in their annual performance plans and PARs on the amount of undisbursed grant funding in expired grant accounts, why these funds were undisbursed, the actions taken to resolve the undisbursed funding and close the expired grants and related accounts, and the outcomes associated with these actions. We provided a draft of this report to OMB and HHS for review and comment. HHS replied via e-mail and had no substantive comments. OMB responded with written comments, which we have reprinted in appendix III. OMB said it supported the intent of our recommendations to strengthen grants management by explicitly requiring federal agencies to track and report the amount of undisbursed grant funding remaining in expired grant accounts and that it believes agencies should design processes with strong internal controls to promote effective funds management for all types of obligations. OMB’s comments did not indicate a commitment to implement our recommendations. OMB stated that, during its regular review, it would consider revising its grants management guidance, Circulars No. A-102, Grants and Cooperative Agreements With State and Local Governments, and No. A-110, Uniform Administrative Requirements for Grants and Other Agreements With Institutions of Higher Education, Hospitals, and Other Nonprofit Organizations, to include instructions for agency grant managers to track and report this information. OMB added that it does not favor having agencies report on these balances in their PARs and so would not offer instructions under its performance reporting guidance, Circular No. A-136, Financial Reporting Requirements. 
We agree that OMB should have discretion in instructing departments and agencies on how to track and report undisbursed balances in expired grant accounts. Our draft report recommended that OMB instruct agencies to annually track and report in their PARs the amount of undisbursed balances in expired grant accounts. As we reported, some federal agencies, such as EPA, HHS, and DOJ, have already voluntarily included in their annual PARs their actions to track and reduce undisbursed balances in expired grant accounts. We found that such reporting had raised the internal and external visibility of the challenge and that these agencies had improved their performance. Accordingly, we continue to believe that the PARs would be appropriate vehicles to address the issue of undisbursed balances in expired grant accounts on a governmentwide basis. Such reporting may not be necessary for every department or agency in every year. Should it choose, OMB could tailor its requirements by setting a threshold as part of its instructions for reporting these balances in the PARs. We will send copies of this report to the congressional committees with jurisdiction over HHS and its activities, the Secretary of HHS, and the Director of OMB. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-6806 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. 
Our objectives were to address the following questions: (1) To what extent are there undisbursed grant balances in expired grant accounts, and do they share any program characteristics? (2) Do these expired grants share grant management challenges, and what actions have federal agencies taken to improve grant closeout and diminish undisbursed balances? In the course of our work, we did not evaluate the implementation of closeout procedures for any specific grant program or awarding agency. The following describes the various procedures we undertook to answer these objectives. We began our study by reviewing the key federal guidelines on grant closeouts: Office of Management and Budget (OMB) Circulars No. A-102, Grants and Cooperative Agreements with State and Local Governments, and No. A-110, Uniform Administrative Requirements for Grants and Other Agreements With Institutions of Higher Education, Hospitals, and Other Non-Profit Organizations. Since the President directed executive branch departments to adopt these OMB circulars in their regulations, we also reviewed applicable regulations for eight executive departments (the Departments of Agriculture (USDA), Education (Education), Energy (DOE), Health and Human Services (HHS), Housing and Urban Development (HUD), Justice (DOJ), Labor (DOL), and State (DOS)), the Social Security Administration (SSA), and the Environmental Protection Agency (EPA) to identify any differences in grant closeout guidelines between the OMB circulars and the agency regulations. We selected these agency regulations for review because a recent audit had indicated that either a grantee or program had problems with grant closeouts. To identify federal governmentwide guidance relating to federal agency performance reporting, we reviewed OMB Circular No. A-11, Preparation, Submission and Execution of the Budget, and Circular No. A-136, Financial Reporting Requirements. 
To identify what other auditors found and recommended as strategies to diminish unspent funds in expired grant accounts, we interviewed various grant program experts from GAO and federal Offices of Inspectors General (OIG), as well as experts from the National Grants Management Association (NGMA) and the National Association of State Auditors, Comptrollers and Treasurers (NASACT). We also conducted a Web-based literature search for related audit reports and reviewed over 150 reports issued by GAO, various federal OIGs, and independent agencies from 2000 to 2006 to identify some common grants management problems related to closing expired grants. During the 2000 to 2006 period, auditors issued reports to the following departments or independent agencies regarding either a grantee or program with grant management problems relating to closing expired grants: USDA, Education, DOE, HHS, HUD, DOJ, DOL, DOS, SSA, and EPA. We reviewed the 2006 and 2007 Performance and Accountability Reports (PAR) from EPA and the 15 cabinet-level executive departments to determine whether grant management, specifically timely grant closeouts and undisbursed balances from expired grants, was identified as a problem, and what strategies agencies were employing to address the problem. To identify the amount of undisbursed funding remaining in expired grants, we collected and analyzed data from HHS’s Payment Management System (PMS) and the U.S. General Services Administration (GSA) Catalog of Federal Domestic Assistance (CFDA). This section describes these two data systems, our collection of selected data from each system, our analysis of the data collected, and the results for the program characteristics analysis. PMS is a centralized grants payment and cash management system, operated by the Division of Payment Management (DPM) within HHS’s Program Support Center (PSC). 
According to DPM, the main purpose of PMS is to serve as the fiscal intermediary between awarding agencies and the recipients of grants and contracts. Its main objectives are to expedite the flow of cash between the federal government and recipients, transmit recipient disbursement data back to the awarding agencies, and manage cash flow advances to grant recipients. PMS is the largest of the nine civilian federal payment systems and executes payments for nine federal departments, one independent agency, a government corporation, and the Office of National Drug Control Policy (ONDCP), which, in 2006, represented about 70 percent of all federal grant disbursements. According to HHS, PMS is a full-service centralized grants payment and cash management system. The system is fully automated to receive payment requests, edit them for accuracy and content, transmit the payment to either the Federal Reserve Bank or the U.S. Treasury for deposit into the grantee’s bank account, and record the payment transactions and corresponding disbursements to the appropriate account(s). Appendix II lists the current PMS customers. A few statistics help to illustrate the volume of PMS’s payment processing. In 2006, PMS processed over $320 billion in payments to grant recipients. As of June 2007, according to an HHS official, PMS contained over 200,000 open grants. Cumulatively, the grants executing payments through PMS represent a significant amount of funding—open grants in PMS, as of May 2007, represented over $1.3 trillion in total funding. Over the years, PMS has executed payments for tens of thousands of grantees. DPM described its role in operating PMS as that of an intermediary between awarding agencies and grant recipients. DPM personnel operate PMS, making payments to grant recipients, maintaining user/recipient liaison, and reporting disbursement data to awarding agencies. 
Awarding agencies’ responsibilities include PMS registration of grant recipients (DPM personnel perform this function for cross-serviced agencies), entry of authorization data into PMS, programs and grants monitoring, grant closeout, and reconciliation of their accounting records to the PMS information. Awarding agencies pay HHS a service fee for maintaining accounts and executing payments through PMS. PMS continues to charge agency customers a servicing fee until an account is closed. Several federal entities, including the Department of the Treasury and the Federal Reserve Bank, collaborate with HHS in executing grant payments. According to DPM, the Department of the Treasury is responsible for establishing cash management policies and operating the Government On-Line Accounting Link System and the electronic system for processing payments, check payments, and certain transactions. The Federal Reserve Bank’s responsibilities include direct deposit payments to payees’/recipients’ bank accounts. HHS documentation indicated that other public and private organizations also have roles in executing payments, including the grant recipients and their financial institutions. Grantee responsibilities include executing grants, reporting cash disbursements to PMS, and maintaining their own accounting records. The grantees’ financial institutions are responsible for receiving payments for credit to recipient accounts and maintaining recipient bank accounts. An independent auditor assessed PMS in 2006. The auditor reported that DPM’s internal controls were suitably designed and tested to provide reasonable assurance that control objectives, including proper payments, remittances, and accurate reporting, were met. To identify the status of undisbursed balances in expired grant accounts, we narrowed our focus to grants executing their payments through PMS. The awarding agencies provide the descriptive information for each grant account to PMS. 
The data set for each grant account contains over 900 unique data fields. One of the required data fields in each PMS account record is the CFDA number for the assistance program that is associated with each account. Each quarter, PMS distributes to its customers a “closeout” report listing the expired grant accounts that, according to the data system, have not completed all of their closeout procedures. HHS listed an account on a quarterly PMS closeout report if the latest end date for the account was at least 3 months past and the latest date of disbursement was at least 9 months past. PMS does not close a grant account until instructed to do so by the awarding agency. For each grant account, the report includes such information as the identification number, the amount of funding authorized for the grant, the amount charged, and the beginning and end dates for the grant. We initially requested that HHS provide PMS quarterly closeout reports for the period 2000 through 2006, then narrowed our focus to the 2003 through 2006 period. As part of the data request, we asked that HHS append to the closeout data additional information available in PMS for each grant account: the CFDA number and the type of grantee organization. Having the associated CFDA number for each grant account enabled us to link the grant account information in the closeout report with the associated program information as listed in the catalog. To test the reliability of PMS closeout data, we (1) reviewed existing documentation related to PMS, including the most recent system audit by the independent auditor, (2) interviewed officials responsible for administration of the database on data entry and editing procedures and the production of closeout reports, and (3) conducted electronic testing for obvious errors in completeness and accuracy. We worked closely with HHS officials responsible for the administration of the database and the production of the closeout reports. 
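The listing rule described above can be sketched in a few lines of Python. This is only an illustration of the logic, not PMS's actual implementation: the function names are invented, and the "at least 3 months / at least 9 months" thresholds and whole-month arithmetic are assumptions about how the rule is applied.

```python
from datetime import date

def months_elapsed(earlier: date, later: date) -> int:
    """Whole calendar months between two dates, ignoring day-of-month remainders."""
    return (later.year - earlier.year) * 12 + (later.month - earlier.month)

def appears_on_closeout_report(end_date: date, last_disbursement: date,
                               report_date: date) -> bool:
    """An open account is listed when its end date is at least 3 months old
    and its latest disbursement is at least 9 months old (assumed thresholds)."""
    return (months_elapsed(end_date, report_date) >= 3
            and months_elapsed(last_disbursement, report_date) >= 9)

# A grant that ended in March 2006, with no disbursement since January 2006,
# would appear on a December 2006 quarterly report.
print(appears_on_closeout_report(date(2006, 3, 31), date(2006, 1, 15),
                                 date(2006, 12, 31)))  # True
```

A recently ended grant (say, one ending in November 2006) would not yet be listed on the December 2006 report under this logic.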
When we found discrepancies, we brought them to the attention of the agency officials and worked with them to make corrections before the analyses began. For example, our testing revealed that there were accounts in the PMS closeout data sets that (1) had CFDA numbers that did not match existing CFDA numbers; (2) were for nongrant programs that were not intended to be entered into the grant payment system; or (3) were for grants that did not have a defined expiration date. We excluded these extraneous entries from our analysis. After conducting these assessment steps, we found that the closeout data were sufficiently reliable for the purposes of this report. The Catalog of Federal Domestic Assistance (CFDA) is the single authoritative, governmentwide compendium and source document for descriptions of federal programs that provide assistance or benefits to the American public. According to GSA, the catalog does not include solicited contracts; foreign activities that do not benefit the domestic economy; personnel recruitment programs; benefits or assistance only available to federal employees; new programs that do not have enacted appropriations; or inactive programs with expired authorization or appropriation. OMB created the catalog pursuant to the Federal Program Information Act to ensure that comprehensive information on federal assistance programs was readily available to the public and interested parties. After the act was amended in November 1983, revised guidelines transferred its responsibilities from OMB to GSA. OMB serves as an intermediary agent between federal agencies and GSA, with oversight responsibility for the necessary collection of program data. OMB Circular No. A-89 provides the federal guidelines for the collection and dissemination of the program information. GSA is responsible for maintaining and distributing CFDA information. By law, federal agencies submit program data to OMB for review. 
OMB reviews the information, provides any comments, and obtains updates and clarifications from the agency. OMB then submits each program description to GSA, which incorporates these submissions into the CFDA. According to a GSA official, GSA does not verify the accuracy of the information that the federal agencies provide for the program description. Each federal agency is responsible for assuring, among other things, the adequacy and timeliness of the program information submitted to OMB. The law authorizing the CFDA required that GSA establish and maintain a computerized retrieval system capable of identifying all existing federal domestic assistance programs. GSA now maintains the comprehensive database of information on all federal domestic assistance programs. Information about these programs is made available to the public through periodic updates and annual issuance of the catalog. Until 2006, GSA distributed printed copies of the CFDA for free. GSA’s free CFDA Web site (http://www.cfda.gov) is now the principal means of distributing the catalog. This Web site enables users to download an electronic file of the catalog or search its contents online. Each CFDA program description contains a wealth of financial and nonfinancial information, including program objectives, the type of program assistance provided, applicant eligibility requirements, and guidance on how to apply for assistance. To identify expired grants in PMS and two of the four program characteristics analyzed—the funding award method and the contribution requirement—we obtained the October 2006 CFDA as an electronic data file from GSA. We were able to crosswalk the CFDA program data to the PMS data using the CFDA number. The CFDA number is a five-digit number assigned to each assistance program listed in the catalog. In creating a grant account in PMS, HHS requires the awarding agencies to enter the CFDA number for the assistance program that is funding the grant. 
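The crosswalk described above is, in effect, a keyed join: each PMS account record is matched to a CFDA program description via the CFDA number. The sketch below illustrates that join under stated assumptions; the record layout, field names, and program entries are hypothetical, not the actual PMS or CFDA schema.

```python
# Hypothetical PMS closeout records and CFDA program descriptions,
# linked by the five-digit CFDA number.
pms_accounts = [
    {"account_id": "A-001", "cfda": "93.959", "undisbursed": 125000.0},
    {"account_id": "A-002", "cfda": "84.010", "undisbursed": 0.0},
    {"account_id": "A-003", "cfda": "99.999", "undisbursed": 4200.0},  # no catalog entry
]
cfda_catalog = {
    "93.959": {"program": "Example block grant program", "award_method": "formula"},
    "84.010": {"program": "Example project grant program", "award_method": "formula"},
}

def crosswalk(accounts, catalog):
    """Append catalog program information to each account; accounts whose
    CFDA number has no catalog description are set aside for exclusion."""
    matched, unmatched = [], []
    for acct in accounts:
        program = catalog.get(acct["cfda"])
        if program is None:
            unmatched.append(acct)
        else:
            matched.append({**acct, **program})
    return matched, unmatched

matched, unmatched = crosswalk(pms_accounts, cfda_catalog)
print(len(matched), len(unmatched))  # 2 1
```

Accounts that fail the join (the `unmatched` list) correspond to the accounts the report describes excluding because no program description could be found.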
At our request, HHS appended the CFDA number to each of the accounts listed in the quarterly PMS closeout data given to GAO. We used information from the catalog to identify those grants that had specific time limits and thus could be considered “expired” once the period of availability had ended. We also used CFDA program information to identify the type of funding award method (project- or formula-based) and whether a grantee was required to contribute resources, such as matching funds, to the grant project. According to a GSA official, GSA does not verify the accuracy of the program description information submitted by the awarding agency for the catalog. To test the reliability of CFDA data, we selected a random sample of 25 CFDA program descriptions and compared selected information from each CFDA program description to the same program information from other federal sources. Specifically, we checked the reliability of six data fields: the CFDA number, awarding agency, program name, funding award method, contribution requirement, and the period of availability of the grant. Because we found only one discrepancy, in one of the six data fields, we can be 95 percent confident that fewer than 17.6 percent of cases in the catalog contain discrepancies between the electronic catalog and information available from other federal sources for these fields. We thus determined that the selected CFDA data used in our analyses were sufficiently reliable for the purposes of this report. Prior to conducting the analysis of the 2003 through 2006 expired grants in PMS, we excluded extraneous accounts that appeared in the closeout data. The purpose of these exclusions was to avoid including accounts that might unduly distort the results on undisbursed funds in expired PMS grant accounts. We included accounts that were associated with grants or cooperative agreements. We excluded accounts if we could not associate them with a grant program. 
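The 17.6 percent figure cited above is consistent with a one-sided 95 percent upper confidence bound (Clopper-Pearson style) for one discrepant case observed in a random sample of 25. A sketch of that calculation, assuming this is the method used, can be done with nothing more than the binomial distribution and a bisection search:

```python
from math import comb

def binom_cdf(k: int, n: int, p: float) -> float:
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def upper_confidence_bound(successes: int, n: int,
                           confidence: float = 0.95, tol: float = 1e-8) -> float:
    """One-sided (Clopper-Pearson) upper bound on the population proportion:
    the largest p at which observing this few cases is still plausible."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if binom_cdf(successes, n, mid) > 1 - confidence:
            lo = mid  # observation still plausible; the bound lies higher
        else:
            hi = mid
    return (lo + hi) / 2

# One discrepant program description in a random sample of 25.
bound = upper_confidence_bound(1, 25)
print(f"{bound:.1%}")  # 17.6%
```

In other words, if more than about 17.6 percent of catalog entries were discrepant, seeing at most one discrepancy in a sample of 25 would occur less than 5 percent of the time.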
For instance, we found some PMS accounts were for nongrants. We also decided to exclude accounts that lacked a CFDA number, since without this number we could not verify that the account was for a grant or obtain other information used in our analyses. We included grant accounts that had a time limit for spending the funds made available and a zero or positive undisbursed balance. As described by HHS staff, the purpose of the PMS closeout report is to alert awarding agencies to accounts in PMS that remain open after their posted end date. If a grant does not have a defined end date, as with Temporary Assistance for Needy Families (TANF), then HHS staff consider the PMS closeout report merely a reminder to the awarding agency that the account remains open and that PMS continues to charge fees on it. We identified “expired” grants, grants that had defined end dates, by conducting a content analysis of their associated CFDA program descriptions. Through the content analysis we identified 26 grant programs (HHS and non-HHS), and associated PMS grant accounts, where the CFDA program description indicated no time limit on the availability of grant funding, and excluded these grant accounts from our analysis. We included grant accounts that met the previous criteria and also had a readily identifiable CFDA number. We found well over 100 CFDA numbers listed for grant accounts in PMS that did not have a program description in the October 2006 edition of the CFDA. We searched the 1999 to 2005 editions of the CFDA and the catalog’s historical index to find the program descriptions for these CFDA numbers. We excluded an account if we could not find any information on the CFDA number either in the CFDA or in the CFDA Historical Index, or if the CFDA number and program description had been deleted from the catalog before the 1999 edition of the CFDA. 
We excluded these grant accounts associated with very old CFDA numbers because pre-1999 catalogs are not readily available, making it unduly burdensome to obtain program information. We also excluded accounts in the 2003 through 2004 PMS closeout data if their associated CFDA numbers (1) were not in the 1999 through 2006 CFDAs and (2) the accounts associated with the CFDA number did not have a cumulative undisbursed balance of greater than $100,000 for two consecutive quarters. We determined that, where the cumulative amounts of undisbursed funding for the accounts associated with these CFDA numbers were less than 0.1 percent of the quarterly totals and, by 2005, were at or near zero, it was unduly burdensome to collect CFDA program information that was more than 7 years out of date. We also excluded expired accounts for several block grants in keeping with the 2006 independent audit of PMS, which stated that (1) the funds for these block grants continued to be available to the grantees until the obligation/expenditure period expired, and (2) traditional financial reporting requirements do not apply to these programs. Based on this audit, we excluded expired grant accounts associated with the following HHS block grant programs from our analysis: Community Mental Health Services Block Grant, Preventive Health and Health Services Block Grant, Substance Abuse Prevention and Treatment Block Grant, Maternal and Child Health Services Block Grant, Social Services Block Grant, Low Income Home Energy Assistance Block Grant, and Community Services Block Grant. To summarize, our reported results describe accounts listed in PMS’s quarterly closeout reports from 2003 through 2006 that were grants or cooperative agreements; had a time limit for spending; had a zero or positive undisbursed balance; had a readily identifiable CFDA number; had a program description in the 1999 through 2006 CFDAs; and did not have special financial reporting procedures. 
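Taken together, the inclusion criteria summarized above amount to a filter over the closeout records. A minimal sketch follows; the record fields are hypothetical stand-ins for the PMS and CFDA attributes each criterion draws on.

```python
# Hypothetical closeout records; field names are illustrative only.
accounts = [
    {"id": 1, "kind": "grant", "has_time_limit": True, "undisbursed": 50000.0,
     "in_catalog": True, "special_reporting": False},
    {"id": 2, "kind": "nongrant", "has_time_limit": True, "undisbursed": 100.0,
     "in_catalog": True, "special_reporting": False},
    {"id": 3, "kind": "grant", "has_time_limit": False, "undisbursed": 900.0,
     "in_catalog": True, "special_reporting": False},
    {"id": 4, "kind": "grant", "has_time_limit": True, "undisbursed": -5.0,
     "in_catalog": True, "special_reporting": False},
    {"id": 5, "kind": "grant", "has_time_limit": True, "undisbursed": 0.0,
     "in_catalog": True, "special_reporting": True},  # e.g., certain block grants
]

def in_scope(acct) -> bool:
    """Keep grants/cooperative agreements with a spending time limit, a zero
    or positive undisbursed balance, an identifiable CFDA catalog description,
    and no special financial reporting procedures."""
    return (acct["kind"] == "grant"
            and acct["has_time_limit"]
            and acct["undisbursed"] >= 0
            and acct["in_catalog"]
            and not acct["special_reporting"])

kept = [a for a in accounts if in_scope(a)]
print([a["id"] for a in kept])  # [1]
```

Each excluded record above fails exactly one criterion, mirroring the exclusion categories described in the text (nongrants, no defined end date, negative balance, special reporting).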
Having excluded the extraneous accounts, we analyzed the 2003 through 2006 payment closeout data as a complete set of current PMS customers, rather than analyzing specific federal agencies or grant programs. When we analyzed the quarterly PMS closeout data for the 2003 through 2006 period, we identified two sets of expired grant accounts. One set consisted of expired accounts for which all of the funds made available had been disbursed but that still had not been closed. The second set included those expired accounts reported with a positive undisbursed balance. Most of our analysis focused on those expired accounts with undisbursed balances. For each quarter we totaled the amount of undisbursed funding in the expired grant accounts, without adjusting the amounts for inflation. To identify common program characteristics of the expired grants with undisbursed balances, we conducted further analysis by linking data from the PMS closeout reports to selected program information from the CFDA. We identified four program characteristics for analysis: the size of the funding award originally made available to the grantee; whether program funding was awarded based on a formula or on a competitive, project-by-project basis; the grantee organization (entity) receiving the grant; and whether the program required the grantee to make a contribution to support the grant activity. We selected these four characteristics because they are fundamental elements of grant design that could be readily analyzed using the information from the PMS and CFDA data sets. While grant programs have other fundamental design characteristics, such as the purpose of the program (e.g., whether grant funds are to be used for construction or for providing services), these could not be as readily analyzed. PMS grant closeout data provided the data on grant funding size and grantee organization. The CFDA data provided information for the funding award method and the contribution requirement. 
We began our analyses by sorting the expired grants with undisbursed balances into 13 ranges of funding award size. These ranges were used because the average percentage of funds undisbursed was similar from quarter to quarter and minimally overlapped with the average percentage for an adjacent range, reflecting natural breaks in the data. The ranges varied from relatively small grants of under $25,000 to very large grants of up to $1 billion. For each funding range, we identified the quarterly balances of undisbursed funds for each of the 16 quarters from 2003 through 2006. We next sorted the expired grants with undisbursed balances according to the method used to award the funds to grantees. As described earlier, federal awarding agencies typically award their grant funding using a formula or on a project basis, or by using a hybrid of both methods. Our guideline in sorting by funding award method was that if a program description had more than one allocation method, we sorted the grant according to the first allocation method listed in the CFDA program description. Using this information, we found that of the 328 unique grant programs in the 2003 through 2006 PMS closeout data with positive undisbursed balances, 54 were awarded on a formula basis and 274 were awarded on a project basis. Next, we analyzed the expired grants with undisbursed balances according to the type of grantee organization receiving the grant. For the grantee organization characteristic, we collapsed the organization types used in PMS into six types (state, county, city, domestic nonprofit, domestic for-profit, and other). Lastly, we compared the quarterly undisbursed funding balances for those expired grants that required some form of grantee contribution to those that did not. 
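The range-sorting step described above is a binning operation: assign each account to a funding-award range, then total the undisbursed balances per range for each quarter. The sketch below illustrates the idea with a handful of invented figures; the range boundaries shown are illustrative, not the 13 ranges actually used, and boundary values are assumed to fall into the higher range.

```python
from bisect import bisect_right
from collections import defaultdict

# Illustrative upper bounds for a few funding-award ranges; the report
# used 13 ranges, from under $25,000 up to $1 billion.
BOUNDS = [25_000, 100_000, 1_000_000, 100_000_000, 1_000_000_000]
LABELS = ["under $25K", "$25K-$100K", "$100K-$1M", "$1M-$100M", "$100M-$1B"]

def bin_label(award: float) -> str:
    """Assign a funding award to a range (boundary values go to the higher range)."""
    return LABELS[min(bisect_right(BOUNDS, award), len(LABELS) - 1)]

# Total the undisbursed balance per range for one quarter;
# the (award, undisbursed) pairs here are invented.
totals = defaultdict(float)
for award, undisbursed in [(12_000, 3_000.0), (450_000, 90_000.0),
                           (450_000, 10_000.0), (50_000_000, 2_500_000.0)]:
    totals[bin_label(award)] += undisbursed
print(dict(totals))
# {'under $25K': 3000.0, '$100K-$1M': 100000.0, '$1M-$100M': 2500000.0}
```

Repeating this totaling for each of the 16 quarters yields the per-range quarterly balances the analysis compares.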
As described in the CFDA, grant program regulations can require grantees to contribute some form of resources to support grant-related activities, such as providing matching funds, sharing in the costs, or providing in-kind contributions. We sorted grants as having a required contribution if the CFDA program description indicated grantees were required to contribute some such resources. For each program characteristic, we totaled the undisbursed funding according to the various types within each characteristic category. For example, for the method of funding award characteristic there were two types, project- or formula-based. All of the program characteristic results are in comparison to other types of grants in the same characteristic category, such as sizes of grant authorizations or type of funding award method. When we compared the undisbursed balances among the types within each of the four program characteristic categories, we found certain types of grants consistently had the largest quarterly balances. Among the 13 funding award ranges, we found the largest quarterly balances of undisbursed funds in midsize grants that had original funding awards ranging from over $100,000 to $100 million (in nominal dollars) for expired grants from 2003 through 2006. We also found that, between the two funding award methods, grants awarded on a project basis consistently had the largest quarterly balances of undisbursed funding and that, among the six types of grantees, state grantees had the largest quarterly amounts of undisbursed funding, followed distantly by nonprofit organizations. 
When comparing grants requiring a grantee contribution and those grants that did not have this requirement—the fourth characteristic examined—we found that neither type had consistently larger quarterly amounts of undisbursed funding. Our analysis has several limitations. First, each analysis of the quarterly undisbursed funding by program characteristic was an independent assessment of the variation in undisbursed funding among expired grant accounts. Consequently, the results for each program characteristic cannot be combined into a general statement about the four characteristics. Second, the results are limited to the expired grant accounts with undisbursed grant balances listed in the PMS closeout reports from 2003 through 2006. We were not able to compare these results to all closed federal grants or all closed grants in PMS due to the burden of collecting comparable data for all closed federal grants from the eight other federal civilian payment systems or for all closed grants from PMS. Lastly, we did not interview policy experts or agency grant managers to explore why expired grants with different program characteristics might have larger undisbursed balances. In addition to the contact named above, Thomas James, Assistant Director; Patricia Farrell Donahue, Analyst-in-Charge; Carlos Diz; Wesley Dunn; Sharon Hogan; Elizabeth Hosler; Susan Mak; Anna Maria Ortiz; Neill Martin-Rolsky; Minette Richardson; Jay Smale; and William Trancucci made key contributions to this report.

In 2006, the subcommittee concluded there was a need for increased accountability and transparency for unspent funds in federal programs and agencies, and requested that GAO review the status of balances not drawn down by grantees by the time the grants' period of availability had ended. 
GAO was asked to answer these questions: (1) To what extent are there undisbursed grant balances in expired grant accounts, and do they share any program characteristics? (2) Do these expired grants share grant management challenges, and how have federal agencies improved grant closeout and diminished undisbursed balances? To do this, GAO analyzed grant balance data from the largest federal grant payment system; reviewed grant management problems and corrective actions from more than 150 audit reports; and reviewed guidance from the Office of Management and Budget (OMB) and the Code of Federal Regulations. During calendar year 2006, about $1 billion in undisbursed funding remained in expired grant accounts in the largest civilian payment system for grants--the Payment Management System (PMS). PMS is administered by the Department of Health and Human Services and executes payments for 12 federal entities, representing about 70 percent of all federal grant disbursements. Undisbursed funding is funding the federal government has obligated through a grant agreement but which the grantee has not entirely spent. Among all of the expired grant accounts in PMS that remained open, these undisbursed funds typically represented about 1 percent of the total funds originally made available for these grants--meaning grantees had spent most of their available funds. However, when expired grant accounts with no funds remaining were excluded and the focus was narrowed to just expired grant accounts with undisbursed balances, GAO found the amount of undisbursed funding represented, on average, about 26 percent of the original funding made available. The expired but still open grant accounts were associated with thousands of grantees and over 325 different federal programs. GAO also found that expired grant accounts with the largest undisbursed balances in PMS for calendar years 2003 through 2006 shared a few common program characteristics. 
However, the results could not be compared to program characteristics for all closed federal grants or all closed grants using PMS during this period, due to the burden of collecting comparable data for all closed federal grants from eight other federal civilian payment systems or for all closed grants from PMS. Past audits of federal agencies by GAO and Inspectors General, and annual performance reports by at least 8 federal agencies in 2006 and 2007, suggested that grant management challenges, including grant closeouts and undisbursed balances, are a long-standing problem. Closeout procedures ensure that grantees have met all financial requirements and provided final reports, and that unused funds are deobligated. The audits generally attributed the problems to inadequacies in awarding agencies' grant management processes, including closeouts as a low management priority, inconsistent closeout procedures, poorly timed communications with grantees, or insufficient compliance or enforcement. However, when federal agencies, such as the Departments of Health and Human Services and Justice, and the Environmental Protection Agency, took corrective actions, there were improvements in grant closeouts and resolution of undisbursed funding. The actions taken by these three agencies generally focused on making grant closeouts a higher agency management priority, as noted in their recent performance reports, and on improving overall closeout processing. Using federal payment systems to track undisbursed funding in expired grant accounts and including the status of grant closeouts in annual performance reports could raise the visibility of the problem both within the agency and governmentwide, lead to improvements in grant closeouts, and minimize undisbursed balances. OMB circulars do not currently require federal agencies to track and report on undisbursed funding in expired grant accounts. 
NEDCTP’s mission is to deter and detect the introduction of explosive devices into the transportation system. As of June 2014, NEDCTP had deployed 802 of the 985 canine teams for which funding is available across the transportation system. There are four types of LEO teams: aviation, mass transit, maritime, and multimodal; and three types of TSI teams: air cargo, multimodal, and PSC. Each type of team has distinct roles, responsibilities, and costs to TSA. Since our January 2013 report, TSA has taken steps to analyze key data on the performance of its canine teams to better identify program trends, as we recommended. In January 2013, we reported that TSA collected and used key canine program data in its Canine Website System (CWS), a central management database, but that it could better analyze these data to identify program trends. Table 2 highlights some of the key data elements included in the CWS. In January 2013, we found that NEDCTP was using CWS data to track and monitor canine teams’ performance. Specifically, field canine coordinators reviewed CWS data to determine how many training and utilization minutes canine teams conducted on a monthly basis. NEDCTP management used CWS data to determine, for example, how many canine teams were certified in detecting explosive odors, as well as the number of teams that passed short notice assessments. However, in our January 2013 report, we also found that TSA had not fully analyzed the performance data it collected in CWS to identify program trends and areas that were working well or in need of corrective action. For example: Training minutes: TSA tracked the number of training minutes canine teams conducted on a monthly basis, as well as the types of explosives and search areas used when training, to ensure teams maintained their proficiency in detecting explosive training aids.
However, we found that TSA did not analyze training minute data over time (from month to month) and therefore was unable to determine trends related to canine teams’ compliance with the requirement. On the basis of our analysis of TSA’s data, we determined that some canine teams were repeatedly not in compliance with TSA’s 240-minute training requirement, in some cases for 6 months or more in a 1-year time period. Utilization minutes: We found that TSA collected and analyzed data monthly on the amount of cargo TSI air cargo canine teams screened in accordance with the agency’s requirement. However, it was unclear how the agency used this information to identify trends to guide longer-term future program efforts and activities, since our analysis of TSA’s cargo screening data from September 2011 through July 2012 showed that TSI air cargo teams nationwide generally exceeded their monthly requirement. We concluded that TSA could increase the percentage of cargo it required TSI canine teams to screen. Certification rates: We found that TSA tracked the number of certified and decertified canine teams, but was unable to analyze these data to identify trends in certification rates because these data were not consistently tracked and recorded prior to 2011. Specifically, we could not determine what, if any, variances existed in the certification rates among LEO and TSI teams over time because CTES reported it was unable to provide certification rates by type of canine team for calendar years 2008 through 2010. According to CTES, the agency recognized the deficiency and was in the process of implementing procedures to address data collection, tracking, and record-keeping issues. Short notice assessments (covert tests): We found that when TSA was performing short notice assessments (prior to their suspension in May 2012), it was not analyzing the results beyond the pass and fail rates.
We concluded that without conducting the assessments and analyzing the results of these tests to determine whether there were any search areas or types of explosives in which canine teams were more effective compared with others, and what, if any, training may have been needed to mitigate deficiencies, TSA was missing an opportunity to fully utilize the results. Final canine responses: Our analysis of final canine responses and data on corresponding swab samples used to verify the presence of explosives odor revealed that canine teams were not submitting swab samples to NEDCTP’s Canine Explosives Unit (CEU). Specifically, we determined that the number of swab samples sent by canine handlers to CEU for scientific review was far lower than the number of final canine responses recorded in CWS. We concluded that without the swab samples, TSA was not able to more accurately determine the extent to which canine teams were effectively detecting explosive materials in real-world scenarios. In January 2013, we recommended that TSA regularly analyze available data to identify program trends and areas that are working well and those in need of corrective action to guide program resources and activities. These analyses could include, but not be limited to, analyzing and documenting trends in proficiency training minutes, canine utilization, results of short notice assessments and final canine responses, and performance differences between LEO and TSI canine teams, as well as an assessment of the optimum location and number of canine teams that should be deployed to secure the U.S. transportation system. TSA concurred with our recommendation and has taken actions to address it. Specifically, TSA is monitoring canine teams’ training minutes over time by producing annual reports.
TSA also reinstated short notice assessments in July 2013, and in the event a team fails, the field canine coordinator completes a report that includes an analysis of the team’s training records to identify an explanation for the failure. In April 2013, TSA reminded canine handlers of the requirement to submit swab samples of their canines’ final responses, and reported that the number of samples submitted that month increased by 450 percent compared with sample submissions in April 2012. CEU is producing reports on the results of its analysis of the swab samples for the presence of explosives odor. In June 2014, TSA officials told us that in March 2014, NEDCTP stood up a new office, known as the Performance Measurement Section, to perform analyses of canine team data. We believe that these actions address the intent of our recommendation and could better position TSA to identify program trends to better target resources and activities based on what is working well and what may be in need of corrective action. In our January 2013 report, we found that TSA began deploying PSC teams in April 2011, prior to determining the teams’ operational effectiveness. However, in June 2012, the DHS Science and Technology Directorate (S&T) and TSA began conducting effectiveness assessments to help demonstrate the effectiveness of PSC teams. On the basis of these assessments, DHS S&T and TSA’s NEDCTP recommended that the assessment team conduct additional testing and that additional training and guidance be provided to canine teams. See the hyperlink in the note for figure 2 for videos of training exercises at one airport showing instances when PSC teams detected, and failed to detect, explosives odor. In January 2013, we concluded that TSA could have benefited from completing effectiveness assessments of PSCs before deploying them on a nationwide basis to determine whether they are an effective method of screening passengers in the U.S. airport environment.
We also reported in January 2013 that TSA had not completed an assessment to determine where within the airport PSC teams would be most effectively utilized, but rather TSA leadership focused on initially deploying PSC teams to a single location within the airport—the sterile area—because it thought it would be the best way to foster stakeholders’, specifically airport operators’ and law enforcement agencies’, acceptance of the teams. Stakeholders were resistant to the deployment of PSC teams because they have civilian handlers, and TSA’s response resolution protocols do not require the teams to be accompanied by a law enforcement officer. According to TSA’s Assistant Administrator for the Office of Security Operations, to alleviate airport stakeholders’ concerns regarding TSA’s response resolution protocols, the agency initially deployed PSC teams to the sterile areas, thereby enabling TSA to gather data on the value of PSC teams in the airport environment while reducing the likelihood of a final response from a PSC, since an individual has already passed through several layers of screening when entering the sterile area. However, aviation stakeholders we interviewed raised concerns about this deployment strategy, stating that PSC teams would be more effectively utilized in non-sterile areas of the airport, such as curbside or in the lobby areas. TSA subsequently deployed PSC teams to the passenger screening checkpoints. However, DHS S&T did not plan to assess the effectiveness of PSCs on the public side, beyond the checkpoint, since TSA was not planning to deploy PSCs to the public side of the airport when DHS S&T designed its test plan. In January 2013, we concluded that comprehensive effectiveness assessments that include a comparison of PSC teams in both the sterile and public areas of the airport could help TSA determine if it is beneficial to deploy PSCs to the public side of airports, in addition to or in lieu of the sterile area and checkpoint. 
During the June 2012 assessment of PSC teams’ effectiveness, TSA conducted one of the search exercises with three conventional canine teams. Although this assessment was not intended to be included as part of DHS S&T’s and TSA’s formal assessment of PSC effectiveness, the results of the assessment suggested, and TSA officials and DHS S&T’s Canine Explosives Detection Project Manager agreed, that a systematic assessment of PSCs with conventional canines could provide TSA with information to determine whether PSCs provide an enhanced security benefit compared with conventional LEO aviation canine teams that have already been deployed to airport terminals. In January 2013, we concluded that an assessment would help clarify whether additional investments for PSC training are warranted. We also concluded that since PSC teams are trained in both conventional and passenger screening methods, TSA could decide to convert existing PSC teams to conventional canine teams, thereby limiting the additional resource investments associated with training and maintaining the new PSC teams. We recommended that TSA expand and complete testing, in conjunction with DHS S&T, to assess the effectiveness of PSCs and conventional canines in all airport areas deemed appropriate prior to making additional PSC deployments to help (1) determine whether PSCs are effective at screening passengers, and resource expenditures for PSC training are warranted, and (2) inform decisions regarding the type of canine team to deploy and where to optimally deploy such teams within airports. TSA concurred and has taken some actions to address our recommendation, but further action is needed to fully address it. 
Specifically, in June 2014, TSA reported that through its PSC Focused Training and Assessment Initiative, a two-cycle assessment to establish airport-specific optimal working areas, assess team performance, and train teams on best practices, it had assessed PSC teams deployed to 27 airports, culminating in a total of 1,048 tests. On the basis of these tests, TSA determined that PSC teams are effective and should be deployed at the checkpoint queue. In February 2014, TSA launched a third PSC assessment cycle to determine how PSCs’ effectiveness changes over time in order to determine their optimal working duration at the checkpoint queue (i.e., how many minutes they can work and continue to be effective). Although TSA has taken steps to determine whether PSC teams are effective and where in the airport environment to optimally deploy such teams, as of June 2014, TSA has not compared the effectiveness of PSCs and conventional canines in order to determine if the greater cost of training canines in the passenger screening method is warranted. According to TSA, the agency does not plan to include conventional canine teams in PSC assessments because conventional canines have not been through the process used with PSC canines to assess their temperament and behavior when working in proximity to people. While we recognize TSA’s position that half of deployed conventional canines are of a breed not accepted for use in the PSC program, other conventional canines are suitable breeds and have been paired with LEO aviation handlers working in proximity with people, since they patrol airport terminals, including ticket counters and curbside areas. We continue to believe that TSA should conduct an assessment to determine whether conventional canines are as effective at detecting explosives odor on passengers as PSC teams working in the checkpoint queue.
As we reported, since PSC teams are trained in both conventional and passenger screening methods, TSA could decide to convert existing PSC teams to conventional canine teams, thereby limiting the additional resource investments associated with training and maintaining PSC teams. In our January 2013 report, we found that TSA’s 2012 Strategic Framework calls for the deployment of PSC teams based on risk; however, airport stakeholder concerns about the appropriateness of TSA’s response resolution protocols for these teams resulted in PSC teams not being deployed to the highest-risk airports. TSA officials stated that PSC teams were not deployed to the highest-risk airports for various reasons, including concerns from an airport law enforcement association about TSA’s decision to deploy PSC teams with civilian TSI handlers and the appropriateness of TSA’s response resolution protocols. These protocols require the canine handler to be accompanied by two additional personnel who may, but do not always, include a law enforcement officer. According to representatives from an airport law enforcement association, these protocols are not appropriate for a suicide bombing attempt requiring an immediate law enforcement response. TSA’s decision to deploy PSC teams only to airports where they would be willingly accepted by stakeholders resulted in PSC teams not being deployed to the highest-risk airports on its high-risk list. Moreover, PSC teams that were deployed to high-risk airports, specifically two airports we visited, were not being used for passenger screening because TSA and the local law enforcement agencies had not reached agreement on the PSC response resolution protocols.
We recommended that if PSCs are determined to provide an enhanced security benefit, TSA should coordinate with airport stakeholders to deploy future PSC teams to the highest-risk airports, and ensure that deployed PSC teams are utilized as intended, consistent with its statutory authority to provide for the screening of passengers and their property. TSA concurred with our recommendation and has taken action to address it. Specifically, as of June 2014, the PSC teams for which TSA had funding and that had not already been deployed to a specific airport at the time our report was issued have been deployed, or allocated, to the highest-risk airports. According to TSA, it was successful in deploying PSC teams to airports where they were previously declined by aviation stakeholders for various reasons. For example, TSA officials explained that stakeholders have realized that PSCs are an effective means for detecting explosives odor, and no checkpoints have closed because of a nonproductive response. PSCs also help reduce wait times at airport checkpoints because PSC teams are one method by which TSA can operate Managed Inclusion, a tool that allows passengers who have not enrolled in TSA Pre™, for example, to access Pre™ screening lanes. According to TSA, PSC teams provide an added layer of security, making it possible for TSA to provide expedited screening to passengers who have not enrolled in TSA Pre™ and therefore have not had a background check. In November 2013, TSA also reported it was making progress in working with stakeholders to allow PSC teams to work at checkpoints at airports where PSC teams were not previously performing passenger screening, but rather were training and screening air cargo. In June 2014, TSA officials reported that of all the airports where PSC teams had been deployed, all but one agreed to allow TSA to conduct screening of individuals at passenger screening checkpoint queues.
We believe that these actions address the intent of our recommendation, contingent upon TSA comparing PSC teams with conventional canine teams. Chairman Hudson, Ranking Member Richmond, and members of the subcommittee, this completes my prepared statement. I would be happy to respond to any questions you may have at this time. For questions about this statement, please contact Jennifer Grover at (202) 512-7141 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this statement include Chris Ferencik (Assistant Director), Chuck Bausell, Lisa Canini, Josh Diosomito, Michele Fejfar, Eric Hauswirth, Richard Hung, Thomas Lombardi, Jessica Orr, and Michelle Woods. Key contributors to the previous work that this testimony is based on are listed in the report. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

TSA has implemented a multilayered system composed of people, processes, and technology to protect the nation’s transportation system. One of TSA’s security layers is NEDCTP, composed of over 800 deployed explosives detection canine teams, including PSC teams trained to detect explosives on passengers. This testimony addresses the extent to which TSA has (1) regularly analyzed data to identify program trends and areas working well or in need of corrective action, and (2) comprehensively assessed the effectiveness of PSCs, and coordinated with stakeholders to deploy PSC teams to the highest-risk airports and utilize them as intended.
This statement is based on a report GAO issued in January 2013 and selected updates obtained from October 2013 through June 2014. For the selected updates, GAO reviewed TSA documentation, including the results of PSC effectiveness assessments, and interviewed agency officials on the status of implementing GAO's recommendations. In January 2013, GAO reported that the Transportation Security Administration (TSA) collected and used key canine program data in support of its National Explosives Detection Canine Team Program (NEDCTP), but could better analyze these data to identify program trends. For example, GAO found that in reviewing short notice assessments (covert tests), TSA did not analyze the results beyond the pass and fail rates. Therefore, TSA was missing an opportunity to determine if there were any search areas or types of explosives in which canine teams were more effective compared with others, and what, if any, training may be needed to mitigate deficiencies. GAO recommended that TSA regularly analyze available data to identify program trends and areas that are working well and those in need of corrective action to guide program resources and activities. TSA concurred and has taken actions that address the intent of GAO's recommendation. For example, in the event a team fails a short notice assessment, TSA now requires that canine team supervisors complete an analysis of the team's training records to identify an explanation for the failure. In January 2013, GAO found that TSA began deploying passenger screening canine (PSC) teams—teams of canines trained to detect explosives being carried or worn on a person—in April 2011, prior to determining the teams' operational effectiveness and where within an airport PSC teams would be most effectively utilized.
GAO recommended that TSA expand and complete testing to assess the effectiveness of PSCs and conventional canines (trained to detect explosives in stationary objects) in all airport areas deemed appropriate prior to making additional PSC deployments. This would help (1) determine whether PSCs are effective at screening passengers, and resource expenditures for PSC training are warranted, and (2) inform decisions regarding the type of canine team to deploy and where to optimally deploy such teams. TSA concurred and has taken steps to address the recommendation, but additional action is needed. Specifically, TSA launched a PSC training and assessment initiative and determined PSCs to be most effective when working at the airport checkpoint, but TSA does not plan to conduct a comparison of PSC teams with conventional canine teams as GAO recommended. In January 2013, GAO also found that TSA's 2012 Strategic Framework calls for the deployment of PSC teams based on risk; however, airport stakeholder concerns related to the composition and capabilities of PSC teams resulted in the teams not being deployed to the highest-risk airports. GAO recommended that if PSCs are determined to provide an enhanced security benefit compared with conventional canine teams, TSA should coordinate with airport stakeholders to deploy future PSC teams to the highest-risk airports. TSA concurred and has taken steps to address the recommendation. Specifically, the PSC teams for which TSA had funding and that had not already been deployed to a specific airport at the time GAO's report was issued have been deployed, or allocated, to the highest-risk airports. GAO is making no new recommendations in this statement.
This background discusses (1) the distribution network for natural gas pipelines, (2) the key federal environmental laws that may be involved in the permitting process for these pipelines, and (3) the key stakeholders that may be involved in the permitting process. A compressor station is a facility that helps move natural gas from one location to another. Natural gas must be repressurized at certain distance intervals as it is transported through a pipeline. The compressor station compresses the natural gas, thereby providing the energy to move the gas through the pipeline. The gas in compressor stations is normally pressurized by special turbines, motors, and engines. Pipeline companies install compressor stations along a pipeline route, typically every 40 to 100 miles along transmission pipelines. PHMSA estimates that there are roughly 2 million miles of distribution pipelines, most of which are intrastate pipelines, in the United States. These pipelines are considered outside of FERC’s jurisdictional responsibilities. Several federal environmental laws and agencies may come into play in the permitting process for natural gas pipelines, depending on the proposed route for the pipeline. The principal laws involved include the National Environmental Policy Act (NEPA), the Clean Water Act, the Endangered Species Act, and the National Historic Preservation Act. Pub. L. No. 91-190, 83 Stat. 852 (1970), codified as amended at 42 U.S.C. §§ 4321-4347 (2011). Environmental impact statement (EIS). When a proposed project may significantly affect the environment, the lead agency publishes a Notice of Intent in the Federal Register. The Notice of Intent acts as the formal announcement of the project to the public and interested federal, state, tribal, and local agencies. The lead agency is then required to determine the scope of the project.
During this scoping process, the lead agency consults with resource agencies—such as the Corps or the Department of the Interior’s Fish and Wildlife Service (FWS)—to identify issues and alternatives to be analyzed in the EIS and allocate assignments for assistance in preparing the EIS. The lead agency will also identify other environmental review and consultation requirements under state, tribal, or local laws. Next, the lead agency prepares a draft EIS and solicits comments from the public; incorporates these comments into a final EIS; and issues a Record of Decision. Among other things, the Record of Decision—which is the final step for agencies in the EIS process—identifies (1) the decision made; (2) the alternatives considered during the development of the EIS, including the environmentally preferred alternative; and (3) plans to mitigate environmental impacts. Environmental assessment (EA). The lead agency prepares an EA when it is not clear whether a proposed project will have significant environmental impacts. An EA is intended to be a concise analysis that, among other things, briefly provides sufficient evidence and analysis for determining whether to prepare an EIS. If during the development of an EA, the lead agency determines that the proposed project will cause significant environmental impacts, the lead agency will stop producing the EA and, instead, produce an EIS. However, an EA typically results in a finding of no significant impact, and this finding is reported in a document that presents the reasons for the agency’s conclusion that no significant environmental impacts will occur when the proposed project is implemented. This finding is typically based on the use of mitigation measures. Categorical exclusion. The proposed pipeline project is classified as a categorical exclusion if a federal agency determines that the project falls within a category of activities that has already been determined to have no significant environmental impact. 
Under a categorical exclusion, the agency generally does not need to prepare an EIS or EA. NEPA regulations require federal agencies to make diligent efforts to involve the public in the preparation and implementation of NEPA documents. Under these regulations, agencies must provide a public comment period for a draft EIS; there is no corresponding requirement for an EA, but agencies may provide a public comment period. Clean Water Act. Pipeline projects may also be subject to requirements of the Clean Water Act, one goal of which is to eliminate the addition of pollutants to waters of the United States. Section 404 of the Clean Water Act requires, among other things, that projects involving the discharge of dredged or fill material into waters of the United States obtain a permit; this permit is typically issued by the Corps. Gas pipelines may involve such discharges when, for example, they are constructed within a riverbed, stream, or wetland. Additionally, pipeline construction may be subject to Section 402 of the Clean Water Act, which prohibits the discharge of pollutants into waters of the United States without a National Pollutant Discharge Elimination System (NPDES) permit. Pipeline construction is also subject to Section 401 of the Clean Water Act, which requires anyone seeking a permit for a project that may affect water quality to seek approval from the relevant state water quality agency. Endangered Species Act. Pipeline projects may also be subject to the Endangered Species Act, which requires federal agencies to ensure that any action they authorize, fund, or carry out is not likely to jeopardize the continued existence of a species listed as threatened or endangered under the act, or destroy or adversely modify its critical habitat. To fulfill this responsibility, the agencies must, under some circumstances, formally consult with FWS or the Department of Commerce’s National Marine Fisheries Service (NMFS) when the actions they authorize may affect listed species or designated critical habitat.
Formal consultations generally result in the issuance of biological opinions by FWS or NMFS. The biological opinions contain a detailed discussion of the effects of the action on listed species or critical habitat and FWS’s and NMFS’s opinions on whether the pipeline company has ensured that its action is not likely to jeopardize the continued existence of the species or adversely modify critical habitat. In cases where a pipeline project as proposed is likely to either jeopardize the species or cause the destruction or adverse modification of its critical habitat, the opinions are to provide a “reasonable and prudent alternative” to avoid jeopardy or adverse modification that FWS or NMFS believes the pipeline company could take in implementing the action. Pub. L. 93-205, 87 Stat. 884 (1973), codified at 16 U.S.C. §§ 1531-1544 (2011). National Historic Preservation Act. Section 106 of the National Historic Preservation Act (NHPA) requires federal agencies to take into account the project’s effect on any historic site, building, structure, or other object that is listed on the National Register of Historic Places. The Advisory Council on Historic Preservation oversees implementation of the Section 106 authority. In general, the advisory council delegates much of its authority under NHPA to state historic preservation offices. These offices identify historic properties and assess and resolve adverse effects on them under NHPA. Rivers and Harbors Act of 1899. Under the Rivers and Harbors Act of 1899, projects such as pipelines that could affect navigable waters of the United States must receive authorization from the Corps. Specifically, the Corps regulates any work or structures in, over, or under navigable waters or any work that may affect the course, location, or condition of those waters. Pub. L. No. 69-560, 44 Stat. 1010; Pub. L. No. 71-520, 46 Stat. 918.
FERC. FERC’s role is to facilitate the timely development of pipeline projects. FERC approves the construction of interstate pipelines by issuing a certificate of public convenience and necessity, which includes conditions that the pipeline company receive all required federal authorizations before beginning construction, if it has not already done so. FERC does not become involved in the permitting process for intrastate pipelines. Federal resource agencies. Federal resource agencies are responsible for managing and protecting natural and cultural resources such as wetlands, forests, wildlife, and historic properties. Virtually all applications for pipeline projects require some level of coordination with one or more of the following federal agencies, as well as others, to satisfy requirements for environmental review: The Advisory Council on Historic Preservation seeks to promote the preservation, enhancement, and sustainable use of the nation’s historic resources. For proposed natural gas pipeline projects, the Advisory Council on Historic Preservation reviews and provides comments on those pipeline projects that may affect properties listed or eligible to be listed on the National Register of Historic Places pursuant to the NHPA. The Bureau of Indian Affairs is responsible for, among other things, approving rights of way across lands held in trust for an Indian or Indian tribe. In addition, the Bureau of Indian Affairs must consult and coordinate with any affected tribe. The Bureau of Land Management (BLM) is principally responsible for issuing right-of-way permits authorizing natural gas pipelines to cross federal lands. When pipelines cross the lands of another federal agency, such as National Forest System lands, as well as BLM lands, BLM is responsible for issuing an authorization. The Corps has the authority to issue permits for the discharge of dredged or fill material into waters of the United States under Section 404 of the Clean Water Act.
The Corps also has jurisdiction over structures or work in navigable waters of the United States under Section 10 of the Rivers and Harbors Act. If any activity could affect a federal project, such as a levee, dam, or navigation channel, permission from the Corps is required in accordance with Section 14 of the Rivers and Harbors Act of 1899. EPA is responsible for administering a wide variety of environmental laws. EPA’s responsibilities for the pipeline permitting process include commenting on EISs under the Clean Air Act; it also has the authority to participate in the Section 404 permit process. FWS is generally responsible for implementing the Endangered Species Act, among other laws, for freshwater and terrestrial species that may be affected by a pipeline construction project. The Forest Service is responsible for managing 193 million acres of National Forest System lands, through which many thousands of miles of natural gas pipelines pass. If a proposed pipeline crosses more than one federal agency’s lands, BLM issues a right-of-way permit. In cases where the pipeline crosses only National Forest System lands, the Forest Service issues a special-use authorization. NMFS implements, among other things, the Marine Mammal Protection Act and the Endangered Species Act for most marine species and anadromous fish (i.e., fish that spend portions of their life cycle in both fresh and salt water). State resource agencies. State-level agencies are generally responsible for managing and protecting a state’s natural and cultural resources. State resource agencies, such as state environmental or water quality agencies, like their federal counterparts, participate in and review assessments of environmental impacts in accordance with their responsibilities under federal or state laws. In some cases, federal agencies have delegated authority to state resource agencies for carrying out federal laws.
Additionally, state historic preservation offices advise and consult with federal and other state agencies to identify historic properties and to assess and resolve adverse effects on those properties under the NHPA.

Tribal governments. As part of the planning and review process for pipeline projects, federal agencies engage in government-to-government consultation with American Indian Tribes and Alaska Native Corporations. Consultation is a deliberative process that aims to create effective collaboration and informed federal decision making. Tribal consultations can be a factor in the overall pipeline project schedule.

Local governments. Local governments involved in natural gas pipeline projects may include counties or municipalities that are empowered by state law or constitution to carry out provisions to protect the environment or the safety of local citizens. This may include requiring soil and erosion plans or enforcing zoning laws.

Public interest groups. Public interest groups, such as Earthjustice, Delaware Riverkeeper, and the Pipeline Safety Trust, advocate for a number of issues, including the environment and public safety. They may comment on a proposed pipeline project during, for example, the NEPA process or any state processes that include public comment periods.

Private citizens. Private citizens can provide comments and opinions in venues like public hearings. Like public interest groups, private citizens may comment on a proposed pipeline project during, for example, the NEPA process or any state processes that include public comment periods.

Both the interstate and intrastate pipeline permitting processes are complex in that they can involve multiple federal, state, and local agencies, as well as public interest groups and citizens, and include multiple steps. The interstate permitting process involves three key phases: a voluntary pre-filing phase, an application phase, and a post-authorization phase, each with multiple steps.
According to stakeholders we spoke with, the interstate process is consistent because FERC acts as a lead agency in coordinating multiple stakeholders. The intrastate process can also include multiple stakeholders and steps. However, those stakeholders and steps vary from state to state, and most states do not have a lead agency coordinating the process. We identified three key phases in the interstate permitting process for natural gas pipelines: pre-filing, application, and post-authorization. During these phases, federal, state, and local agencies, as well as public interest groups and citizens, may play a role in approving or commenting on the application for a permit to construct interstate pipelines. According to some industry representatives we spoke with, the interstate permitting process can be time-consuming, depending on the size and complexity of a proposed project, but it is consistent because FERC, as the lead agency, assists in coordinating with other stakeholders on the NEPA environmental analysis. In 2002, FERC established a pre-filing phase to facilitate and expedite the review of natural gas pipeline projects through early coordination with FERC and cooperating agencies (see fig. 1). The intent of this phase is to involve stakeholders sooner so that potential issues can be identified and resolved earlier, thereby taking less time overall. Use of this phase is voluntary, and FERC must approve a company’s request for pre-filing. For those projects that are less complex, such as those that do not involve federal lands, endangered species, or crossings of waters of the United States, applicants may choose not to use the pre-filing phase. According to FERC officials, in 2012, 67 percent of applicants for major interstate pipeline construction projects chose to use this phase. 
In the pre-filing phase, FERC and the applicant focus on gathering the necessary information for the environmental analysis, which may involve numerous federal, state, and local agencies and is typically the most complex and time-consuming step of the permitting process. Once FERC approves a company’s request to use the pre-filing phase for a project, agency staff notify other potential cooperating agencies that FERC has approved the use of the pre-filing phase and hold a planning or information meeting with the applicant and the agencies to discuss land and resource issues and concerns. FERC and the agencies also discuss the agencies’ ability to commit to an environmental review schedule. FERC will then work with the applicant and those agencies that are to have a role in the permitting process to initiate the NEPA scoping process—that is, the process of defining and refining the scope of an EIS or EA and the alternatives to be investigated—and begin the environmental analysis. Applicants are to hold “open house” meetings in the vicinity of the proposed project to share information about the project with the public. FERC staff often attends these meetings to answer any questions about the FERC permitting process and to invite the public to participate in the process at future dates. According to FERC’s website, applicants may incorporate proposed mitigation measures into the project design from comments received during these meetings. After these meetings, FERC will issue a Notice of Intent in the Federal Register for the preparation of an EA or EIS and seek additional public comments. FERC staff may also hold public scoping meetings for major projects that require an EIS or EA. Information given by the public during scoping meetings can help the company prepare environmental mitigation measures. 
According to industry representatives we spoke with, FERC’s pre-filing process was helpful at resolving potential problems earlier in the process, but other stakeholders said the pre-filing process is confusing and may limit public input. For example, one natural gas industry representative noted that the pre-filing phase has made the overall process less complicated. Another stated that it has resolved potential project “derailers,” such as issues with routing the pipeline through areas with endangered species, and has saved time for obtaining a permit. In addition, another industry representative said that early identification of stakeholders also increases coverage of potential resource impact issues so that appropriate surveys, mitigation practices, coordination with local and state requirements, and planning for habitat management or conservation can be coordinated with proposed project construction timelines. On the other hand, some state officials and representatives of public interest groups were more skeptical of the pre-filing phase. One representative of an environmental group said the public is unaware of the pre-filing phase and suggested that FERC and other stakeholders specifically reach out to environmental groups during the pre-filing phase if they want to resolve potential issues early in the process. However, another representative from an environmental group commended FERC for establishing an e-mail notification system that enables the public to sign up for e-mails on the progress of a specific project. Once pre-filing activities are completed or, if the applicant chooses to forgo the pre-filing phase, the applicant submits an application for a certificate of public convenience and necessity to FERC (see fig. 2 for steps in the application phase). 
FERC issues a Notice of Application, which includes the following: the unique number assigned to the project; the ways in which stakeholders, including the public, can become involved in the proceedings; and the methods for filing comments with FERC. There are several factors taken into account when FERC establishes a schedule for the environmental review, including the scope and complexity of the project, the requirements of any cooperating agencies, and the requested time frame of the applicant. Schedules may be adjusted if new concerns are identified, new information is introduced, or the number of comments received is greater than anticipated. However, FERC has no authority to enforce that schedule with cooperating agencies. FERC then analyzes the information in the application and begins the scoping process for those proposed projects that did not use the pre-filing phase or continues the scoping process for those proposed projects that did use the pre-filing phase. If a company did not use the pre-filing phase, FERC will begin the scoping process and consult with cooperating agencies to gather information. Next, FERC will issue a Notice of Intent to prepare either an EA or EIS. FERC, along with any cooperating agencies, will prepare either an EA or a draft EIS, depending on the potential environmental effects of the project. Cooperating agencies are responsible for assisting FERC in the preparation of the EA or EIS for those issues that fall within their jurisdiction. For example, if a project impacts waters of the United States, the Corps is likely to participate in the development of the EA or EIS because it is responsible for the regulation of activities in jurisdictional waters of the United States and would need to evaluate proposed impacts to those waters to inform a permit decision pursuant to its authorities under Section 404 of the Clean Water Act and/or Section 10 of the Rivers and Harbors Act of 1899. 
The environmental analysis incorporates the necessary information from all federal agencies in one document. While FERC may issue the certificate of public convenience and necessity before all federal permits, certificates, or authorizations are complete, it will not grant the authority to construct a pipeline without these federal authorizations. Pipeline companies must coordinate with the relevant agencies to ensure that these permits, certifications, and authorizations are completed. This may happen during the application phase or after FERC issues its certificate. Some states have developed written agreements with federal agencies that establish a process for carrying out their roles in consultation, review, and compliance with one or more federal laws. In some cases, state agencies have received the authority from federal agencies to implement federal laws and regulations. For example, the Clean Air Act gives EPA the authority to limit emissions of air pollutants, such as nitrogen oxides and methane, that result from constructing and operating natural gas compressor stations and pipelines. Such emission limits are established through a preconstruction permit issued by EPA, or, in some cases, by a state or local agency that has received authority from EPA to issue Clean Air Act permits in its jurisdiction. According to EPA, at least 75 percent of preconstruction permits are issued by state and local agencies, and EPA’s regional offices issue the remaining preconstruction permits. In areas where the state agency issues the clean air permits under the rules of their state implementation plan, EPA provides minimal oversight because the state is the permitting authority and therefore has primacy over decision making. In addition, state agencies may have delegated authority to process and issue federal Water Quality Certifications, required under Section 401 of the Clean Water Act, and Consistency Concurrences, under the Coastal Zone Management Act. 
Environmental permits issued by federal agencies can also vary by state or by region. For example, the Corps issues two types of permits to authorize activities under Section 404 of the Clean Water Act and Section 10 of the Rivers and Harbors Act of 1899: (1) individual permits, and (2) general permits. The type of permit used depends on the type and extent of proposed impacts on aquatic resources and whether a general permit is available to authorize such impacts. The Corps issues individual permits for specific projects that may have more than minimal impacts on aquatic resources, either individually or cumulatively, or are not otherwise authorized by general permits. The Corps issues general permits for activities resulting in no more than minimal adverse effects on the aquatic environment. The following three types of general permits are used for natural gas pipeline construction projects that require the discharge of dredged or fill material into waters of the United States and/or work or structures affecting the course, location, or condition of navigable waters: Nationwide permit. This type of general permit is intended to streamline and expedite the evaluation and approval process throughout the nation for certain types of activities that have only minimal impacts, both individually and cumulatively, on the aquatic environment. Activities that meet the terms and conditions of this type of permit, such as natural gas pipeline construction projects, are already authorized by the Corps. The Corps district verifies that the project meets the conditions outlined in the applicable nationwide permit. Corps headquarters, rather than one of the 38 district offices, issues these permits. However, one of the Corps’ eight division offices may add regional conditions to these permits in order to protect local aquatic ecosystems or to minimize adverse effects on ecologically critical areas or other valuable resources. Regional general permits. 
This type of permit authorizes activities that commonly occur in a particular region and that are expected to have a minimal impact on waters of the United States, but that do not warrant national authorization. Corps district offices issue this type of permit. Programmatic general permits. This type of general permit is established in those states or localities where there is a similar existing state, local, or other federal agency regulatory program. It is designed to avoid regulatory duplication. These types of permits may allow activities, including work in waters of the United States associated with pipeline projects, to have greater impact on waters than the nationwide general permits, provided there is still no more than minimal adverse effect on the environment. The programmatic general permit will identify those impacts that may be verified by the state or other entity with no review by the Corps, as well as any activities that may require notification to the Corps before verification is provided. Once the programmatic general permit is issued, the state or local agencies review proposed projects to verify that the proposed activities meet the terms and conditions of the permit, coordinating with the Corps’ district offices as necessary. Corps district offices receive annual reports from state and local agencies regarding the use of the programmatic general permits. Districts also retain the right to review any proposed project they determine may not meet the terms and conditions of the programmatic general permit. Most Corps districts primarily use nationwide permits to authorize work in waters of the United States in association with pipeline construction activities. Eight districts have developed regional general permits for certain activities, that may include pipeline construction, and six districts have developed state programmatic general permits. 
According to a Corps headquarters official, Corps districts may use different permitting mechanisms in different states to evaluate work in waters of the United States in association with pipeline projects. The regulations allow for this flexibility to account for regional differences in the aquatic environment, endangered species, historic sites, state regulations, or other factors. For example: In Pennsylvania, Corps district offices will generally rely on the Pennsylvania State Programmatic General Permit-4, under which the Pennsylvania Department of Environmental Protection verifies certain impacts that may occur in waters of the United States from the construction of some pipelines if the project meets certain criteria.According to Corps district officials, the Corps does not use a nationwide permit for these types of impacts because doing so would duplicate a similar state permit. Officials in the Corps’ Fort Worth district office said they typically use a nationwide permit to authorize work in waters of the United States in association with pipeline construction. Officials said they have not considered the use of a programmatic general permit because there are no similar permitting programs or authorizations required by the state of Texas. In Florida, Corps district officials issue both nationwide permits and regional general permits for work in waters of the United States in association with pipeline construction. Headquarters officials said the use of a programmatic general permit has not been considered because state regulatory processes are not similar enough to develop such a permit. In addition to coordinating with federal agencies on the environmental analysis, FERC may work with state resource agencies and local governments during the permitting of a natural gas pipeline. 
For example, an interstate natural gas pipeline project that runs through Pennsylvania would require several federal, state, and local permits, licenses, approvals, and certifications, as shown in table 1. However, some state and local actions are preempted—that is, they are superseded or overridden by federal law—because the actions conflict with federal law. For example, state certificates of necessity and convenience, which otherwise may be issued by state public utility commissions or other state agencies, are preempted because FERC’s certificate of public convenience and necessity supersedes the state’s. The process differs slightly depending on whether an EA or EIS is prepared, but in either case, FERC, acting as the lead agency, issues either a draft EIS or EA, and obtains public comments on the environmental analysis that was completed. FERC will respond to those comments, and issue its order either approving or denying the certificate of public convenience and necessity. According to representatives of one environmental group we spoke with, the public is not given sufficient time to intervene in the pipeline permitting process and often must hire attorneys to help them raise a motion with the agency because the process is complicated. According to representatives from several interest groups we spoke with, citizens are often unable to take these additional steps. However, FERC officials said that, while the agency establishes a deadline for timely motions to intervene, a motion to intervene can still be considered once the deadline has passed. Officials also said that an entity would be well-advised to file a motion to intervene as soon as possible. State officials we spoke with said that citizens are not well informed of the complicated interstate pipeline permitting process. 
Once FERC has issued a certificate of public convenience and necessity or denied an application, the applicant or the party to the proceeding can request that FERC rehear the case or take FERC to court over the outcome of the case. Otherwise, in order to proceed, the pipeline company must file an implementation plan with FERC including, but not limited to, how the company will implement any environmental mitigation actions identified in the environmental analysis, the number of environmental inspectors the company will assign to the project to ensure that mitigation measures are implemented, and procedures the company will follow if noncompliance occurs. FERC must give written authorization before construction can begin. Following that authorization, the pipeline company must file weekly status reports with FERC documenting inspection and compliance until all construction activities are completed. In addition, FERC is to regularly inspect the construction. Section 7 of the Natural Gas Act grants the right of eminent domain when FERC issues a certificate of public convenience and necessity; the pipeline company therefore has the right to acquire the property for that project by eminent domain if it cannot acquire the necessary land by agreement or if it cannot agree with the landowner on the compensation to be paid for the land. If a new intrastate natural gas pipeline construction project does not cross a state border, then the responsibility for approval of pipeline routes falls to the individual states, and FERC does not play a role in siting the pipeline. The permitting process for these pipelines varies from state to state and may involve many federal, state, and local stakeholders. Unlike the interstate process, the intrastate process in most of the states we reviewed does not use a lead agency to authorize and coordinate siting and environmental reviews. 
As is the case with the interstate permitting process, pipeline companies must consider two issues when planning an intrastate natural gas pipeline: land acquisition and the need to identify the siting authority that oversees the location and route for that pipeline. To acquire rights to the land necessary to build the pipeline, pipeline companies will generally attempt to negotiate right-of-way agreements with individual landowners along the intended route. If negotiations fail, the companies may seek to acquire the land through eminent domain proceedings. There is no uniform standard for right-of-way agreements and eminent domain authority, and procedures vary by state. However, BLM will process permits for intrastate natural gas pipelines located on federal lands administered by the Bureau. Of the 11 states we reviewed, 5 have agencies charged with siting intrastate natural gas pipelines. These 5 states require advance approval of the location and the route of the pipeline. The remaining 6 do not have siting agencies that require advance approval of location and route. Table 2 shows these differences among the states we examined. As the table shows, the requirements of the application process differ from Florida—which generally requires state certification before constructing certain intrastate natural gas pipelines—to Texas, which does not require pipeline companies to obtain a permit to construct an intrastate pipeline and which gives natural gas utility pipeline companies statutory right of eminent domain without any prior state approval. According to public interest and industry group representatives we spoke with, the intrastate process for permitting and siting pipelines needs to be more transparent. In many states, it is difficult to determine the process for pipeline siting and whether the state has an agency with siting authority. 
They also told us that the intrastate process is challenging to navigate without an agency that takes the lead on siting and coordinating the environmental review, as FERC does at the interstate level. Additionally, representatives from two public interest groups we spoke with explained that it is more difficult for the public to comment on proposals for intrastate pipelines because the state processes are not transparent, and the public may not learn about pipelines until after they have been approved. The availability of eminent domain authority can also change how companies deal with land owners and, as a result, can change land owners’ perspective on the process as a whole, according to the public interest group representatives. Federal agencies become involved in the intrastate natural gas pipeline permitting process if federally protected resources have the potential to be affected by a project. For example, the Corps becomes involved when a proposed pipeline will be constructed in aquatic resources over which it has jurisdiction and FWS becomes involved if the route crosses an area with a plant or habitat on the federal list of threatened or endangered species. State environmental laws and regulations are applicable to intrastate pipelines. However, in 10 of the 11 states we reviewed, no single entity is responsible for coordinating all of the environmental reviews, including federal and state authorizations, during the intrastate permitting process. For example, in Rhode Island, the Energy Facility Siting Board is the authority for approving the siting and construction of natural gas pipelines; the pipeline company is responsible for obtaining all necessary permits, including all permitting and licensing under the jurisdiction of the state’s Department of Environmental Management. Conversely, the New York State Public Service Commission is the lead agency for the siting of intrastate natural gas pipelines. 
This department coordinates with other affected state agencies and local governments on the permitting process—one stop licensing. For interstate pipelines, FERC’s public record information system contains documents that provide dates associated with the phases of the permitting process; however, FERC does not track the time it takes to complete the process. FERC officials said data on processing time frames is of limited use when planning a project because the variability among projects can make them incomparable. Using the information available on interstate natural gas pipeline projects certified from January 1, 2010, to October 24, 2012, we determined that the average processing time from pre-filing to certification for interstate natural gas pipeline projects was 558 days, and the processing times ranged from 370 to 886 days. These projects varied in size and function and included pipelines, pipeline expansions, compressor stations, and other pipeline facilities. For projects that begin in the application phase, the average processing time from formal filing to certification was 225 days for this period. The processing times for these projects, which tended to be for compressor stations and smaller pipeline expansions, ranged from 63 to 455 days. For intrastate pipelines, because the permitting process varies by state, the time frames for those processes may also vary. As is the case with interstate pipelines, time frames associated with permitting of intrastate pipelines may also vary because of differences in stakeholders, siting, and environmental factors and range in the amount of time to complete the permitting process. Some state agencies gave us estimates of time frames for specific parts of the process, but we found little comprehensive data on the intrastate permitting process in the states we reviewed. 
Comprehensive data are probably not available because most states do not have a lead agency that coordinates all the reviews necessary to complete the permitting process. For example, North Dakota state officials estimated that the siting part of the permitting process for intrastate pipelines takes just over 3 months; however, these 3 months do not include the time associated with any federal or state environmental reviews that may be necessary for pipeline projects. A New York state official estimated that the entire intrastate permitting process, including siting and all environmental reviews, takes 60 to 90 days for small pipelines, 3 to 6 months for medium pipelines, and 12 to 18 months for large pipelines. However, according to the official, these time frames vary depending on the complexity of the project and public opposition. The following factors can further affect the time frame for an interstate or intrastate pipeline project’s permitting process, as our stakeholders explained: Corps Section 404 Clean Water Act and Section 10 Rivers and Harbors Act permitting. The Corps does not have statutory deadlines or time frames for evaluating applications for natural gas pipelines or other types of regulated activities. However, the Corps has two performance measures specific to the timing of permit decisions. For standard individual permits, the Corps has a goal of completing its reviews and making permit decisions for 50 percent of permit applications within 120 days from receiving complete applications. In fiscal year 2011, the Corps reported that it had issued a decision on 71 percent of these applications within 120 days. The Corps has a goal of processing 75 percent of general permits within 60 days from receiving a complete request. In fiscal year 2011, the Corps reported that it had acted on 90 percent of these requests within 60 days. 
However, a headquarters official explained that the Corps collects information on time frames for reviewing applications and issuing decisions for all utility projects under Section 404 of the Clean Water Act and Section 10 of the Rivers and Harbors Act and does not separate data specific to natural gas pipelines from its reviews of other utility projects. According to Corps officials, application review can take longer for a number of reasons, such as the time it takes to receive all necessary information from the applicant and the time it takes for other agencies to complete decisions necessary for the Corps to finalize its review. For example, according to a Corps district official and Pennsylvania state officials, the Pennsylvania Department of Environmental Protection had, in recent years, a backlog of applications that delayed the transfer of applications to the Corps, but that backlog has been cleared. Pennsylvania officials said this backlog had probably occurred because the number of pipeline applications doubled since hydraulic fracturing of Marcellus Shale began in Pennsylvania. FWS and NMFS review under the Endangered Species Act. Federal reviews required under the Endangered Species Act can also affect time frames for the evaluation of natural gas pipeline projects. These projects can be permitted under the act in two ways. First, under section 7 of the act, federal agencies must ensure that any action they carry out (or actions of a nonfederal party that require a federal agency’s approval, permit, or funding) is unlikely to jeopardize the continued existence of a listed species or destroy or adversely modify its critical habitat. To fulfill this responsibility, federal agencies must consult with either FWS or NMFS (whichever agency has jurisdiction) when their actions may affect listed species or critical habitat. 
Formal consultations generally result in the issuance by FWS or NMFS of reports known as “biological opinions,” which discuss in detail the effects of proposed actions on listed species and their critical habitat, as well as that agency’s opinion on whether a proposed action is likely to jeopardize a species’ continued existence or destroy or adversely modify its critical habitat. The opinion also determines the quantity or extent of anticipated “incidental take”—that is, take that is not intentional but occurs nonetheless as a result of carrying out an agency action. FERC consults with FWS or NMFS under section 7 of the Endangered Species Act for the construction of interstate natural gas pipelines. For actions without a federal nexus (i.e., no federal funding, permit, or license), section 10 of the Endangered Species Act provides an avenue for entities to obtain permits for activities—such as the construction of a natural gas pipeline or a highway—that may result in the take of a listed species. An applicant for a permit is to submit a habitat conservation plan that shows the likely impact of the planned action; steps taken to minimize and mitigate the impact; funding for the mitigation; alternatives considered and rejected; and any other measures FWS or NMFS may require. According to representatives of an industry association we spoke with, their members report successful coordination of consultations under section 7 of the act because a federal agency, such as FERC for interstate pipelines and BLM for some intrastate pipelines, can assist the pipeline company in establishing long-term mitigation plans and other requirements for section 7 approval. The section 10 review process is less preferable, according to representatives, because the pipeline company is responsible for coordinating the relevant federal and state agency reviews and permits before the section 10 review is completed, which takes more time than a section 7 consultation. 
Delays in state and local government reviews. State and local permitting and review processes can take time and affect federal decision-making time frames because some federal agencies cannot issue their permits until state and local governments have completed their own permitting processes. For example, permits for federal programs delegated to states, such as section 401 of the Clean Water Act, can take time for state agencies to review and are needed for the Corps to issue an individual permit or verify a general permit. According to a Corps official and state officials, some states experience delays in completing these reviews. Overlap of federal, state, and local environmental processes. According to representatives of an industry association we spoke with, jurisdictional overlaps between federal, state, and local agencies force pipeline companies to obtain environmental permits or approvals from more than one level of government for the same activity. In some cases, the pipeline company must coordinate the pipeline route with the requirements for permits and reviews required by up to four different authorities at the federal, state, county, and municipal levels. For example, these representatives stated, EPA’s regional office serving Alabama requires that ordinances be adopted to create a local construction storm water permitting program to regulate the same construction sites that the Alabama Department of Environmental Management already regulates under its statewide program. According to these representatives, natural gas pipeline projects throughout the state of Alabama are required to comply with the state-issued general permit as well as overlapping permits for the same activities in any of the 67 counties and hundreds of small towns that their projects may pass through. 
These industry representatives reported project delays and resource allocation constraints because several layers of reviews and permits involving various federal, state, and local stakeholders often take place to address the same environmental issues for the same natural gas project. However, according to representatives of public interest groups we spoke with, efforts to combine federal, state, and local processes can undermine the opportunity for public comment. Incomplete applications. Officials in all of the Corps district offices that we spoke with reported that incomplete applications may delay their review because applicants need time to revise their information. Applications are considered incomplete for a variety of reasons. For example, the application may be missing jurisdictional information (i.e., where the waters of the United States are located relative to the project) or the applicant may miscalculate impacts. Officials from a state resource agency told us that environmental consultants, hired and given processing deadlines by pipeline companies, may submit incomplete applications in order to meet those deadlines. According to a Corps headquarters official, if applicants do not submit all of the appropriate documentation, the permit process may be delayed. Project opposition. Public opposition and litigation can lengthen the time needed to review a pipeline project or even lead to the cancellation of a project. For example, public interest groups can work with the public to request extended comment periods and public hearings for proposed natural gas pipeline projects that may adversely affect the environmental resources in the area. According to officials from federal and state agencies and representatives from industry and public interest groups we interviewed, several management practices could be implemented to help overcome some of the challenges of a complex permitting process identified by these stakeholders. 
These practices could make the permitting process more efficient and improve opportunities for public comment on pipeline projects. In this regard, in March 2012, the president signed Executive Order 13604, which aims to institutionalize best practices and reduce the amount of time required to make permitting and review decisions for infrastructure projects, including pipelines. Stakeholders we spoke with and the administration, in its plan for implementing the executive order, identified the following management practices as effective, among others: Ensuring a lead agency is coordinating the efforts of federal, state, and local permitting processes for intrastate pipelines. Representatives from industry and public interest groups we interviewed noted that the interstate process is better coordinated than intrastate processes because FERC is designated as the lead agency for the environmental review of a pipeline project, but there is no similar lead agency in the intrastate permitting process. Representatives of a public interest group noted that the absence of a lead agency also makes it difficult for the public to become involved in the permitting process because citizens often do not know which agency to contact about a pipeline project. In that regard, in July 2001, the Interstate Oil and Gas Compact Commission and the National Association of Regulatory Utility Commissioners’ pipeline siting working group recommended that each state establish a coordinating effort within the governor’s office to monitor and assist in expediting the permitting process, while eliminating duplication of activities among state and local permitting entities. They further recommended that states identify all participants in the permitting process, consider naming a lead agency to monitor processing schedules within existing regulatory requirements, and determine information that needs to be communicated to the public. 
Ensuring effective collaboration of the numerous stakeholders. Stakeholders we interviewed emphasized the importance of collaboration among the numerous stakeholders involved in the permitting process. Some federal officials noted that delays occur in the permitting process when stakeholders do not collaborate effectively. For example, a federal agency’s permitting process may be delayed if it receives insufficient information from a cooperating agency. The federal plan for implementation of Executive Order 13604 identified several examples of best practices to enhance interagency coordination. Some federal agencies have memorandums of understanding or agreements with other agencies to establish collaborative relationships that relate to the permitting process. For example, as described earlier, FERC and nine other agencies signed an interagency agreement for early coordination of required environmental and historic preservation reviews to encourage the timely development of pipeline projects. FERC and FWS also have a memorandum of understanding that focuses on avoiding or minimizing adverse impacts on migratory birds and strengthening migratory bird conservation through enhanced collaboration. Providing planning tools to help companies plan routes for pipelines and avoid sensitive environmental resources. Industry representatives we spoke with noted that there is a need for technology tools that can aid in the proper routing of pipelines when companies are planning a project. Such tools should involve mapping software and best practices for specific areas of the country so that agencies do not need to reassess environmental impacts each time a company plans a project. These tools would also allow the project to be routed with the fewest environmental impacts at an early stage in the pipeline company’s design process. 
Without such tools, it is difficult for pipeline companies to route a project given the various federal, state, and local requirements that are not available in a single location. For example, FWS is currently developing such a tool—the Information, Planning and Conservation (IPaC) System—that is expected to let companies determine whether there are any endangered and threatened species in a potential project area and obtain information about the measures the companies can take to help protect and conserve those species when designing and constructing a project. This system is expected to help companies make better routing decisions early on, eliminating the need to modify project plans later in the permitting process. The federal plan to implement Executive Order 13604 selected IPaC as an example of a best practice to “reduce surprises and help project proponents make better informed design decisions early, when there is more flexibility to make minor modification with minimum disruption of the project goals.” Another planning tool that industry representatives said makes the process more efficient is the Pennsylvania Natural Diversity Inventory Environmental Review Tool, which screens proposed projects to identify, avoid, or mitigate impacts on federal or state-identified threatened or endangered species. Industry representatives said this tool has been helpful in determining potential adverse impacts and planning mitigation. In addition, BLM designates pipeline corridors as part of its land use planning process. According to BLM officials, corridors reduce environmental impacts by allowing projects to share access roads and use previously disturbed areas. They also reduce the need for new data collection and land use plan amendments. Offering industry the option to fund contractors or agency staff to expedite the permitting process. 
Industry representatives said that many pipeline companies are willing to fund contractors or agency staff to speed up their application review process, which has slowed because of increasing numbers of energy projects and fewer agency resources. For example, stakeholders cited FERC’s practice of allowing applicants to fund a third-party contractor to review applications and assist the agency in preparing NEPA environmental documents. The third-party contractor is selected by and works under the supervision of FERC officials but is paid by the pipeline company. Other federal agencies have similar practices that allow applicants to offer funding assistance during the permitting process. An FWS official said this outside support is essential for agencies given the heavy workload and short time frames associated with pipeline projects. However, not all agencies have congressional authority to accept funds. For instance, according to Corps officials, the agency cannot accept funds from private entities and can only accept funds from nonfederal public entities under specific circumstances. Increasing the opportunities for public comments. According to representatives of some public interest groups and some state officials we interviewed, the public needs to have more opportunities to comment on a proposed pipeline project during the permitting process. A representative from one group observed that, while the typical NEPA process for public input allows the public to comment throughout the environmental review, FERC offers only a brief period for formal public comments. Representatives of other groups mentioned that, because the pipeline permitting process is complicated, it is difficult for the public to know when and how to comment and that additional information from the applicant, FERC, and states would be helpful. The implementation plan for Executive Order 13604 includes multiple best practice examples to improve outreach and education of the public. 
For example, the Department of the Interior is developing a web-based clearinghouse for environmental information on energy resource development. This clearinghouse is to provide environmental best practices, methods for conducting environmental assessments to aid in decision making, links to applicable federal and state laws related to energy development, and information on the various impacts of energy development projects. We provided a draft of this report for review and comment to the Departments of Agriculture, Defense, and the Interior; EPA; and FERC. The Department of Agriculture provided written comments in which it generally agreed with the overall findings of the report. The written comments are presented in appendix II of this report. The Department of Defense generally agreed with the overall findings of the report and provided technical or clarifying comments, which we incorporated as appropriate. The Department of the Interior and FERC provided technical or clarifying comments, which we incorporated as appropriate. EPA indicated that it had no comments on the report. We are sending copies of this report to the appropriate congressional committees; the Secretaries of Agriculture, Defense, and the Interior; the Administrator of EPA; the Chairman of FERC; and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. 
Our objectives for this review were to determine (1) the processes necessary for pipeline companies to acquire permits to construct interstate and intrastate natural gas pipelines; (2) information available on the time frames associated with the natural gas pipeline permitting process; and (3) stakeholder-identified management practices, if any, that may improve the permitting process. For purposes of this report, we consider the permitting process to involve steps companies need to take to obtain a permit, authorization, certificate, or approval from a federal, state, or local entity in order to construct a natural gas pipeline. To understand processes and permits required to construct natural gas pipelines at the federal level, we reviewed relevant federal laws and regulations, as well as agency documentation, such as the interagency agreement between the Federal Energy Regulatory Commission (FERC) and nine other federal agencies regarding their coordination during the review process for the National Environmental Policy Act and efforts to facilitate the development of natural gas pipeline projects. In addition, we reviewed literature on natural gas pipeline permitting issues and previous relevant GAO reports. We interviewed officials with regulatory responsibilities at FERC, the Army Corps of Engineers (Corps), the departments of Agriculture and of the Interior, and the Environmental Protection Agency. We also interviewed a range of other knowledgeable individuals—including representatives of public interest groups, such as the Pipeline Safety Trust and Delaware Riverkeeper Network; and representatives of industry groups, such as the American Gas Association and the American Petroleum Institute—whom we identified as having expertise related to the permitting of natural gas pipelines. To determine the processes for obtaining permits to construct natural gas pipelines at the state level, we selected a nonprobability sample of states for further review. 
We developed the following list of criteria to use as a tool for determining which states to include in our review: size of pipeline network (miles of pipe); amount of natural gas production (trillion British thermal units); amount of natural gas consumption (trillion British thermal units); natural gas inflow capacity (million cubic feet per day); natural gas outflow capacity (million cubic feet per day); and recommendations from federal agency officials and other knowledgeable individuals. Because we anticipated that states differ in their pipeline permitting processes, it was important to include states that ranked high and large on the selection criteria, as well as states that ranked low and small. We selected states by identifying the top five and the bottom five of each selection factor. For example, in considering the size of the pipeline network, we identified the five states with the most miles of pipeline and the five states with the fewest miles of pipeline. We also identified the states that were of congressional interest or recommended by a federal agency and/or other knowledgeable individuals we spoke with. The states selected for our review are those that were most frequently recommended and/or identified in our ranking process; specifically, we included states that were recommended and/or identified at least four times. Twelve states met this threshold—California, Colorado, Delaware, Florida, Louisiana, New York, North Dakota, Oklahoma, Pennsylvania, Rhode Island, Texas, and Vermont. Louisiana was later omitted from our review because of limited response from the state. For our selected states, we reviewed relevant documentation and conducted interviews with state agency officials and officials at Corps district offices in California, Florida, Pennsylvania, and Texas. 
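The tally-and-threshold selection described above can be sketched in code; this is purely an illustration of the logic, not GAO's actual data or tooling, and the criterion names, figures, and `select_states` function are invented for the example.

```python
# Illustrative sketch (not GAO's actual tooling) of the state-selection
# approach described above: for each quantitative criterion, flag the
# states ranked in the top five and bottom five, add one tally per
# recommendation received, and keep states flagged at least four times.
from collections import Counter

def select_states(criteria, recommendations, top_n=5, threshold=4):
    """criteria maps a factor name to {state: value}; recommendations is
    a list of state names, one entry per recommendation received."""
    tally = Counter()
    for values in criteria.values():
        ranked = sorted(values, key=values.get)   # ascending by value
        tally.update(ranked[:top_n])              # lowest-ranked states
        tally.update(ranked[-top_n:])             # highest-ranked states
    tally.update(recommendations)
    return sorted(state for state, n in tally.items() if n >= threshold)
```

With toy data and smaller parameters, a state flagged by two factors, or by one factor plus a recommendation, would be selected.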
Because our sample was a nonprobability sample, the information we obtained is not generalizable to all states but provides illustrative information. To identify the information available on the time frames associated with the natural gas pipeline permitting process, we conducted interviews with federal officials, industry associations, and public interest groups. In addition, we reviewed documents contained in FERC’s eLibrary, which is a record information system of electronic versions of documents issued and received by FERC on natural gas pipeline projects. FERC provided us with information on projects certified from January 1, 2009, to October 24, 2012. Owing to time and resource constraints, we limited our review to projects certified since January 1, 2010, and used eLibrary to access documents that contained information on the pre-filing date, traditional filing date, and certification date of these projects. In addition, we conducted interviews with FERC officials to determine the completeness of the documents contained in the system. We conducted this performance audit from May 2012 to February 2013, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the individual named above, key contributors to this report included Karla Springer, Assistant Director; Pedro Almoguera; Cheryl Arvidson; Cindy Gilbert; Griffin Glatt-Dowd; Holly Sasso; Carol Herrnstadt Shulman; Barbara Timmerman; and Jeremy Williams.

Recent growth in domestic natural gas production, particularly due to increased production from shale, is resulting in an increase in the pipelines needed to transport that gas. 
Constructing natural gas pipelines requires clearing and maintaining rights-of-way, which may disturb habitat and historical and cultural resources. These resources are protected under a variety of federal, state, and local regulations implemented by multiple agencies. The laws, regulations, and stakeholders involved in the permitting process depend on where the pipeline is constructed. FERC is the lead federal agency in approving interstate pipelines, coordinating with federal, state, and local agencies, but FERC is not involved in the approval of intrastate pipelines. In response to the Pipeline Safety, Regulatory Certainty, and Job Creation Act of 2011, GAO determined (1) the processes necessary to acquire permits to construct interstate and intrastate natural gas pipelines, (2) information available on the time frames associated with the natural gas pipeline permitting process, and (3) stakeholder-identified management practices that may improve the permitting process. GAO reviewed relevant laws and regulations and interviewed federal officials, state officials from a nonprobability sample of 11 states, and representatives from natural gas industry associations and public interest groups. GAO makes no recommendations in this report. The Departments of Agriculture and Defense generally agreed with the findings, and the other agencies had no comments.

Both the interstate and intrastate natural gas pipeline permitting processes are complex and can involve multiple federal, state, and local agencies, as well as public interest groups and citizens, and include multiple steps. The interstate process involves a voluntary pre-filing phase, an application phase, and a post-authorization phase with multiple steps that stakeholders reported to be consistent among projects because the process is led by the Federal Energy Regulatory Commission (FERC). 
FERC coordinates with federal, state, and local agencies that have statutory and regulatory authority over various environmental laws and regulations. For example, if a proposed pipeline may affect endangered species, FERC coordinates with the U.S. Fish and Wildlife Service, which reviews the impacts on such species. The intrastate process can also involve multiple stakeholders and steps, but, unlike in the interstate process, GAO found that the stakeholders and steps vary by state. For example, of the 11 states GAO reviewed, 5 have agencies charged with approving the route of natural gas pipelines and require advance approval of the location and route, and the remaining 6 do not. Pipeline companies must also comply with various federal and state environmental laws and regulations; however, in most of the 11 states, no one agency is charged with coordinating the implementation of these laws and regulations as FERC is for the interstate process. Time frames associated with the interstate and intrastate permitting processes vary because of multiple factors, according to stakeholders. For the interstate process, FERC does not track time frames, citing the limited usefulness of such data. GAO analyzed public records and found that, for those projects that were approved from January 2010 to October 2012, the average time from pre-filing to certification was 558 days; the average time for those projects that began at the application phase was 225 days. For the intrastate process, because processes vary by state, the time frames of those processes may also vary. GAO found little comprehensive data on the intrastate process. According to GAO's discussions with stakeholders, several factors can affect the time frame for the permitting process of a given project, including different types of federal permits or authorizations, delays in the reviews needed by governmental stakeholders, and incomplete applications. 
For example, state and local permitting and review processes can affect federal decision-making time frames because some federal agencies will not issue their permits until state and local governments have completed their own permitting processes, according to some stakeholders. Officials from federal and state agencies and representatives from industry and public interest groups told GAO that several management practices could help overcome challenges to implementing an efficient permitting process and obtaining public input: (1) ensure a lead agency is coordinating the efforts of federal, state, and local permitting processes for intrastate pipelines, (2) ensure effective collaboration of the numerous stakeholders involved in the permitting process, (3) provide planning tools to assist companies in routing pipelines and avoiding sensitive environmental resources, (4) offer industry the option to fund contractors or agency staff to expedite the permitting process, and (5) increase the opportunities for public comments.
Data collected from the states and reported by EPA indicate that EPA and states have made progress in cleaning up releases from underground storage tanks over the past decade and a half. According to EPA, of the more than 447,000 releases confirmed as of the end of 2004, cleanups had been initiated for about 92 percent, and about 71 percent of these cleanups had been completed. Figure 1 shows confirmed releases from underground storage tanks, cleanups initiated, and cleanups completed annually from fiscal years 1997 through 2004. As this figure indicates, the number of new releases confirmed annually declined from about 12,000 in 2003 to fewer than 8,000 in 2004, a drop of about 35 percent. However, while figure 1 shows a decline in the number of releases confirmed annually over the period, it also shows a decrease in the number of cleanups initiated and completed. According to EPA, the number of cleanups completed each year has generally decreased over recent years and fell by 23 percent, from more than 18,000 in fiscal year 2003 to just over 14,000 in fiscal year 2004. Furthermore, a national backlog of almost 130,000 cleanups remains. EPA’s UST Program is primarily implemented by the states. EPA has become directly involved in program implementation only in Indian country and when states have been unwilling or unable to establish effective underground storage tank programs or to address contamination at specific sites. Instead, EPA’s primary role has been to establish standards and regulations to assist the states in implementing their programs. While all EPA-approved underground storage tank programs must be no less stringent than the federal program, individual aspects of each state program differ. For example, state time frames for conducting inspections vary widely. Also, while some states use only state environmental personnel to conduct inspections, others use state-certified private inspectors, or both. 
Furthermore, state program requirements and standards are sometimes more stringent and inclusive than those under the federal program. For example, states often regulate home heating fuel tanks, tanks on farms, and above-ground tanks that RCRA generally excludes from the federal program. EPA’s UST Program receives approximately $70 million each year from the LUST Trust Fund, about 80 percent of which is used for administering, overseeing, and cleaning up sites. The remaining money has been used by EPA for negotiating and overseeing cooperative agreements, implementing programs on Indian lands, and supporting regional and state offices. EPA spends about $6 million annually from the LUST Trust Fund on the agency’s program implementation, management, and oversight activities. Amounts distributed to the states from the fund each year vary depending primarily on whether they have an EPA-approved program, the total number of each state’s tanks, and the number of releases from those tanks. Until recently, states could use these funds only for cleanup and related administrative and enforcement activities, and EPA awarded each state about $187,000 annually from the agency’s State and Tribal Assistance Grant account to help administer their programs and cover inspection and enforcement costs. Historically, states have used about one-third of their LUST Trust Fund money for administration, one-third for oversight and state-lead enforcement activities, and one-third for cleanups, according to EPA. The Energy Policy Act of 2005, enacted in August 2005, includes a number of provisions addressing issues relating to training, tank inspections, prohibitions on fuel deliveries to problem tanks, and funding tank inspections and enforcement, among others. 
With regard to training, the act requires EPA to publish guidelines specifying training requirements for tank operation and maintenance personnel and authorized EPA to award up to $200,000 to states that develop and implement training programs consistent with these guidelines. In addition, the act requires EPA and any state receiving federal UST funding to inspect all regulated tanks not inspected since December 22, 1998, within 2 years of the date of enactment. After these inspections are completed, EPA or the state must generally inspect regulated tanks once every 3 years. The act allows EPA to extend the first 3-year period for up to 1 additional year if an authorized state demonstrates that it has insufficient resources to complete all inspections within the first 3-year period. Furthermore, beginning in 2007, the act prohibits deliveries to underground storage tanks that are not in compliance with applicable regulations and requires EPA and states to publish guidelines for implementing the delivery prohibition that would, among other things, identify the criteria for determining which tanks are ineligible for delivery. Finally, the act authorizes substantial appropriations from the trust fund during fiscal years 2005 through 2009 for a variety of activities, including release prevention, compliance, training, inspections, and enforcement. EPA collects data on the total number of underground storage tanks and the status of cleanup activities relating to these tanks from all states, and reports this information semiannually. Table 1 shows key tank-related data reported by EPA as of March 31, 2005. EPA’s semiannual reports also include, among other data, the number of emergency response actions taken by an implementing agency, such as the state, to mitigate imminent threats to human health and the environment from an underground storage tank system. EPA, however, does not require states to provide specific data on all known abandoned underground storage tanks. 
While abandoned tanks are included in the data reported to EPA, they are generally aggregated with the other data and cannot be separately identified. However, all 5 states we contacted compile some limited data on abandoned tanks and report this information separately to the EPA regional office that manages each state’s LUST Trust Fund cooperative agreement. In this regard, all 5 states separately report the number of initiated and completed cleanups of abandoned tanks using trust fund money. However, these data do not include separate information on cleanups of known abandoned tanks using state funds or any known abandoned tanks where cleanup has not yet been initiated. EPA officials believe that the data the agency currently obtains from states are sufficient for general program oversight, identifying program trends, and determining the progress of individual states’ programs. However, because states generally do not provide separate data on all abandoned tanks, EPA has limited ability to assess and track states’ progress in cleaning up contamination from these tanks. In addition, although one of the primary purposes of the LUST Trust Fund is to provide money for cleaning up abandoned tank sites, EPA lacks information—such as the number of releases from known abandoned tanks in each state and how many of these releases have been or are being cleaned up—to help it determine how to most efficiently and effectively allocate funds to the states for this purpose. EPA allocates amounts from the fund to each state based, in part, on the data each currently provides, but these allocation decisions do not now take into account the specific number of the state’s abandoned sites that may require cleanup funds. While tank owners and operators are primarily responsible for cleaning up contamination from leaks in their underground storage tanks, some states assist them through financial assurance or indemnification funds. 
These funds also sometimes pay for cleanups of abandoned tank sites. However, not all states have indemnification funds and, in 10 of the 40 states that have such funds, claims for cleanup cost reimbursements exceeded fund balances in fiscal year 2004. Consequently, EPA is monitoring the states’ funds to determine their viability as financial assurance mechanisms. EPA, through the LUST Trust Fund, provides some limited support to states for cleaning up abandoned sites as well as for administering, overseeing, and enforcing their cleanup programs. The 5 states we contacted—California, Maryland, Michigan, North Carolina, and Pennsylvania—use differing approaches to ensure funding to clean up contamination from tank leaks. Three of these states—Maryland, Michigan, and North Carolina—are experiencing difficulties in funding cleanups of abandoned tank sites and officials of 2 of these states told us that available resources will be insufficient to clean up all of the abandoned tanks in their state. Owners and operators are primarily responsible for cleaning up contamination from leaks in their underground storage tanks. However, according to the director of EPA’s UST Program, many of these owners/operators, most of which are small, independent businesses, do not have the financial capacity to pay for expensive cleanups. EPA estimates that the average remediation cost per site has been about $125,000, but costs sometimes have exceeded $1 million. Under RCRA, tank owners and operators must maintain evidence of financial responsibility for carrying out cleanup actions, using one or more of a variety of mechanisms, including commercial insurance, corporate guarantee, letter of credit, qualification as a self-insurer, or an EPA-approved state financial assurance fund. 
For commercial insurance, the owner/operator usually pays premiums as well as a deductible amount and/or co-payments before the policy begins to cover remediation costs up to some limit of coverage per leak incident. To assist owners/operators in funding cleanups, as of November 2004, 40 states had established state assurance or indemnification funds. State indemnification funds typically have deductible and co-payment requirements similar to those for commercial insurance, but these funds are managed by the state. Indemnification funds are usually capitalized through gasoline and diesel fuel taxes or fees paid by owners/operators registering or obtaining permits for underground storage tanks, as required. Any cleanup costs above the maximum coverage provided by insurance or the indemnification fund are borne by the tank owner/operator. A state fund qualifies as a financial assurance mechanism if EPA has approved it for that purpose. In deciding whether to approve a fund, EPA considers the certainty of the availability of funds for cleanup, the amount of funds that will be made available, the types of costs covered, and other relevant factors. The 5 states we contacted vary in their approaches to ensuring that contaminated tank sites are cleaned up and that tank owners/operators, to the extent possible, pay the remediation costs. Three of the 5 states—California, North Carolina, and Pennsylvania—currently have financial assurance funds that reimburse owners/operators for cleanup costs under varying conditions. Maryland and Michigan have no such funds, and, instead, tank owners/operators rely primarily on commercial insurance to pay cleanup costs.

California: California's Underground Storage Tank program includes a state financial assurance program—the Underground Storage Tank Cleanup Fund—to assist tank owners/operators in funding site cleanups.
The fund, established in 1989, is the state’s primary mechanism for reimbursing owners/operators for their costs of cleaning up leaking underground storage tanks incurred after January 1, 1988. The fund is available to most owners/operators of tanks subject to EPA’s Underground Storage Tank Program, as well as owners of certain small home heating oil tanks. The California State Water Resources Control Board administers the fund, which is primarily capitalized through a storage fee—paid by owners of regulated and permitted underground storage tanks—for each gallon of petroleum placed in the tanks. According to board officials, the fund collects about $240 million annually and, except for $200,000 per year that is used for enforcement, monies from the fund are all used for tank cleanups, including such activities as direct cleanup by responsible parties, agency oversight, and replacement of drinking water wells. In fiscal year 2004, California spent approximately $208 million to reimburse responsible parties for direct expenses incurred in cleaning up leaking underground storage tanks. The fund reimburses tank owners/operators for cleanup costs up to $1.5 million per incident for “reasonable and necessary” remediation costs. Claimants are divided into four classes: class "A" claimants do not have to pay a deductible before costs are reimbursed by the fund; class “B” and “C” claimants must pay the first $5,000 in eligible corrective action costs; and class “D” claimants are responsible for the first $10,000. An Underground Storage Tank Petroleum Contamination Orphan Site Cleanup subaccount was established as part of the fund in September 2004, capitalized with $30 million ($10 million per year for 2005 through 2007) transferred from the fund to reimburse cleanup costs incurred in cleaning up abandoned contaminated urban brownfield sites. 
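The claimant classes and per-incident cap described above imply a straightforward reimbursement calculation. The sketch below is purely illustrative, using only the figures cited in this report; actual claimant class determinations and cost-eligibility rules are more detailed in practice:

```python
# Illustrative only: reimbursement arithmetic implied by the California
# Underground Storage Tank Cleanup Fund figures cited in this report.
# Actual claimant classes and cost-eligibility rules are more detailed.

DEDUCTIBLES = {"A": 0, "B": 5_000, "C": 5_000, "D": 10_000}
PER_INCIDENT_CAP = 1_500_000  # maximum fund payout per leak incident

def fund_reimbursement(claimant_class: str, eligible_costs: int) -> int:
    """Return the amount the fund would reimburse for one incident."""
    deductible = DEDUCTIBLES[claimant_class]
    return min(max(eligible_costs - deductible, 0), PER_INCIDENT_CAP)

# A class "D" claimant with $60,000 in eligible costs pays the first
# $10,000; the fund reimburses the remaining $50,000.
print(fund_reimbursement("D", 60_000))      # 50000
# Costs above the cap remain with the owner/operator, as in the
# Santa Monica MTBE example discussed later in this report.
print(fund_reimbursement("A", 50_000_000))  # 1500000
```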
In addition to reimbursing owners/operators, state officials said that $5 million a year is transferred from the fund to a subaccount to address emergency, abandoned, and recalcitrant tank site cleanups. Board officials we interviewed told us that the fund is adequately capitalized and that they do not always spend all available funds each year. Nevertheless, these officials also said that the state is interested in ways to minimize program costs and is experimenting with pay-for-performance remediation contracts, which are now being used at 20 cleanup sites in the state.

Maryland: Although Maryland has 2 trust funds that have financed certain cleanup activities, it does not have a fund that EPA has approved for use as a financial assurance mechanism. According to state officials, owners and operators primarily use commercial insurance to demonstrate financial responsibility. The state’s Oil Contaminated Site Environmental Cleanup Fund has provided limited cleanup assistance to owners/operators of federally regulated underground storage tanks, among others. The fund provides funding of up to $125,000 per leak occurrence from underground storage tanks—subject to deductibles from $7,500 to $20,000—and is primarily capitalized by a fee of 1.75 cents per barrel of oil imposed at the first point of transfer into the state. However, the program stopped accepting applications for reimbursement from owners and operators of federally regulated underground storage tanks on June 30, 2005. In addition, the Maryland Oil Disaster Containment, Cleanup, and Contingency Fund finances, among other things, state cleanup costs for abandoned sites. Revenues for this fund, according to the fund’s fiscal year 2004 annual report, are generated by a fee of 2 cents per barrel of oil transferred into the state. From July 1, 2003, to June 30, 2004, this fund paid out about $3.5 million.
Michigan: Michigan's state financial indemnification program for underground storage tanks was terminated in June 1995 because it had insufficient funds to pay existing and future claims. Since that time, tank owners/operators have been required to annually show proof of financial assurance to cover cleanup costs in order to operate in Michigan. Small owners/operators usually provide this proof by obtaining commercial insurance. The state has used a number of sources to fund limited cleanup work at underground storage tank sites, including the Cleanup and Redevelopment Fund, the Clean Michigan Initiative Bond Fund, the Environmental Protection Fund, State General Funds, and the Environmental Protection Bond Fund. Appropriations from these funds address soil, groundwater, and sediment contamination from all sources, including leaking tanks. Most of these funds are no longer available for new projects. According to state officials, in the fall of 2004, state legislators voted to establish a Refined Petroleum Fund that will be capitalized by a 7/8 cent-per-gallon fee on refined petroleum products to be collected through 2010. This fund is expected to accrue approximately $60 million each year, a portion of which is expected to be used to clean up underground storage tank sites. A Refined Petroleum Cleanup Advisory Council was also established to provide the governor and legislature with recommendations on how to spend the fund’s revenues. State officials told us that the council is expected to recommend an increase in the 7/8 cent fee to implement its other recommendations.

North Carolina: North Carolina has a state fund that acts as a financial assurance mechanism and that reimburses owners/operators for most of the costs for site assessments, cleanups, and damages related to leaking underground storage tanks. This fund applies to leaks discovered after June 30, 1988, from commercial underground tanks containing petroleum.
The fund is primarily capitalized by a 0.297 cent-per-gallon excise tax on motor fuel sales; a small part of the state inspection tax on motor fuel and kerosene; and annual tank operating fees. Under provisions of the fund, owners/operators of tanks that have upgraded corrosion, leak, and overfill protection pay the first $20,000 of assessment and cleanup costs and the first $100,000 in third-party liability costs. The fund then pays all other cleanup costs deemed reasonable and necessary up to $1 million; the next $500,000 in costs is shared, with the owner/operator making a 20 percent co-payment; and the owner/operator pays any remaining amount. The state paid approximately $21 million in reimbursements for tank assessment and cleanup costs from this fund in fiscal year 2004. Because the balance of the fund was not sufficient to cover all obligations, in June 2002, the fund began operating from month to month, paying out funds on a first-come, first-paid basis. This action resulted in a significant backlog of claims with pending payments, according to the fund’s annual report. Consequently, EPA is currently monitoring North Carolina’s fund to determine its viability as a financial assurance mechanism. To address concerns about the viability of the trust fund, North Carolina officials are considering requiring tank owners/operators to use other forms of financial assurance, such as commercial insurance.

Pennsylvania: Pennsylvania’s Underground Storage Tank Indemnification Fund was created by the state Storage Tank and Spill Prevention Act of 1989, as amended, and is administered by the State Insurance Department, according to the fund’s 2004 annual report. The fund reimburses tank owners/operators for reasonable and necessary cleanup costs for leaks that occur in regulated tanks on or after February 1, 1994, the date it began operation.
The maximum amount of coverage under the fund is currently $1.5 million; however, for claims reported prior to January 1, 2002, the limit was $1 million, according to state officials. The aggregate limit is $1.5 million for owners of 100 or fewer tanks and $3 million for owners of 101 or more tanks. The fund also covers bodily injury and property damage claims that arise from a leak, indemnifies certified tank installers, and provides loans to owners/operators for upgrading their facilities. According to the fund’s 2004 annual report, a claimant for reimbursement from the fund must be an owner or operator of a tank registered with the Pennsylvania Department of Environmental Protection, and must report the claim to the fund within 60 days of the discovery of the release. Claimants must also pay the first $5,000 per tank of allowable cleanup costs and $5,000 per tank of third-party liability claims. State program officials told us that Pennsylvania law requires that the fund be managed on an actuarial basis and that the fee structure be reviewed yearly to maintain solvency. They also said that the fund's objective is to have positive cash flow and invested assets for a projected period of at least 5 years. The fund is primarily capitalized by (1) a 1.1 cents-per-gallon fee (for 2004) on substances such as gasoline, new motor oil, and aviation fuel, (2) investment income generated from fund balances, and (3) a capacity fee of 8.25 cents per gallon for substances such as diesel, kerosene, and used motor oil. While the fund does not directly cover costs for remediating abandoned tank sites, it is authorized to provide allocations to the Pennsylvania Department of Environmental Protection—which manages the cleanup of contamination from these tanks—up to a maximum of $12 million annually: $5.5 million for general environmental cleanup, $5.5 million for catastrophic release, and $1 million for pollution prevention.
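The per-claim coverage terms above lend themselves to a simple calculation. The sketch below is purely illustrative and reflects only the figures cited in this report; the aggregate limits, third-party liability coverage, and the fund's detailed eligibility rules are omitted:

```python
# Illustrative sketch of the Pennsylvania Underground Storage Tank
# Indemnification Fund's per-claim coverage terms cited in this report;
# actual claims processing involves many additional eligibility rules.
from datetime import date

PER_TANK_DEDUCTIBLE = 5_000  # claimant pays first $5,000 per tank

def coverage_limit(claim_date: date) -> int:
    """Per-claim limit: $1 million before Jan. 1, 2002; $1.5 million after."""
    return 1_000_000 if claim_date < date(2002, 1, 1) else 1_500_000

def fund_payment(claim_date: date, tanks: int, allowable_costs: int) -> int:
    """Amount the fund would pay after deductibles, up to the limit."""
    deductible = PER_TANK_DEDUCTIBLE * tanks
    return min(max(allowable_costs - deductible, 0), coverage_limit(claim_date))

# Two leaking tanks, $250,000 in allowable costs, claim made in 2004:
# the owner pays $10,000 in deductibles; the fund pays $240,000.
print(fund_payment(date(2004, 6, 1), tanks=2, allowable_costs=250_000))  # 240000
```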
Department officials told us that the department must request funding from the fund’s board each year. The board then allocates funds to the department based on the fund’s ability to pay tank owners’ claims. State fund officials told us that the department has not requested the maximum allocation amount for the past several years, and in some years it has not spent the full amount it requested. According to state officials, the fund collected $68 million in 2004 and paid out $64 million. These state officials told us that the fund is fully capitalized and is working effectively with a balance of $215 million as of May 2005.

In addition to the states’ funding sources and mechanisms, the LUST Trust Fund assists states in (1) overseeing and enforcing corrective actions taken by tank owners/operators and (2) cleaning up leaking abandoned tanks or tanks that require an emergency action. EPA allocates amounts from the trust fund to each state based on a number of criteria, such as the total number of tanks in the state, the number of confirmed releases, and whether EPA has approved the state’s program, among other factors. However, these criteria do not include the number and cleanup status of a state’s abandoned tanks. According to EPA program officials, states historically have used about two-thirds of the federal trust fund money allocated to them each year to oversee and support the cleanups paid for by state funds, tank owners/operators, and other financial assurance mechanisms, while the states have used the remaining one-third to directly pay for cleanups of abandoned tanks that are not covered by the other funding sources. As Table 2 shows, for the 5 states we contacted, the amount of funds that EPA awards from the fund and the portions of these funds the states allocate for cleaning up tank sites vary, as do the amounts of their own funds that they spend on leak cleanups.
The LUST Trust Fund’s contribution to state cleanup efforts is generally small compared to amounts paid by tank owners/operators, state indemnification programs, and other state mechanisms for cleaning up sites each year. For example, in fiscal year 2004, EPA awarded $61.7 million in trust funds to assist states’ leaking tank cleanup efforts. However, according to EPA, states, on average, spend a total of about $1 billion to $1.5 billion each year on tank site cleanups. To illustrate, EPA program officials told us that for every federal dollar spent to clean up tank sites, states spend as much as $18 of their own funds. If cleanup costs paid by owners/operators were included, the actual ratio of dollars spent by other sources to federal dollars could be significantly higher than the 18 to 1 calculation provided by EPA. However, tank owners’/operators’ costs to remediate a site are difficult to determine since they are not always captured in state and federal records. While state records may include deductible and co-payment amounts paid by owners/operators under state programs, they do not typically include any costs these parties pay that are disallowed by the state. Furthermore, amounts that owners/operators pay in excess of program limits are not captured in state and federal data. For example, California program officials told us that a leaking tank site in Santa Monica contaminated the public water supply with MTBE, which is typically very expensive to clean up. While the owner/operator estimated that it may require $50 million to clean up the site, the state indemnification fund limits reimbursements for cleanup costs to a maximum of $1.5 million per tank per leak incident. As a result, the approximately $48.5 million in additional non-reimbursable costs paid by the owner would not be reflected in program records. Some states’ indemnification funds and other resources may be insufficient to clean up all of the leaking abandoned tanks in their state. 
For example, according to a survey of states conducted for the Association of State and Territorial Solid Waste Management Officials in early 2005, claims for the reimbursement of cleanup costs exceeded the fund balances in 10 states. Of the 5 states we contacted, officials of 3—Maryland, Michigan, and North Carolina—told us that their states are experiencing difficulties in funding cleanups at abandoned tank sites. Furthermore, officials of 2 of these states—Maryland and Michigan—said that available resources will be insufficient to clean up all of them and that additional resource allocations from the LUST Trust Fund would help address these funding shortfalls and enhance the states’ ability to clean up leaking tank sites.

Because of funding constraints, Maryland is now prioritizing and deferring cleanups of its abandoned tank sites. The state requires tank owners/operators to demonstrate financial responsibility to pay for cleanup costs, which they generally do by obtaining commercial insurance to fund cleanups of the state's nonabandoned tank sites. However, state officials are concerned that commercial insurance may not provide a dependable source of funding for tank site cleanups, because insurers have sometimes been reluctant to pay cleanup costs when leaks occur. For example, files for the Henry Fruhling Food Store site in Harford County, Maryland (see app. I), indicated that the site's owner/operator experienced problems in getting the insurance company to pay for cleanup costs because he could not prove that the leak occurred during the period of coverage. The absence of insurance funds to pay cleanup costs may lead to more abandoned sites—sites where the owners/operators are unable to pay the cleanup costs themselves—which will require the state to fund cleanup with its own funds or seek federal resources.
State officials told us that, in the absence of increased allocations of federal trust funds, they asked the state legislature to approve an increase in Maryland’s special oil transfer fee to fund the state's tank cleanup needs. The state legislature subsequently approved the fee increase.

Since Michigan’s indemnification fund was terminated in June 1995 because of insufficient funds to pay existing and future claims, tank owners/operators have been required to show proof of financial assurance to cover cleanup costs in order to operate in Michigan. Small owners/operators usually provide this proof by obtaining commercial insurance. However, state LUST program officials cited anecdotal evidence showing that insurance claims for remediation costs are frequently denied because it is often difficult to prove that the release occurred during the period of coverage. If the owner/operator cannot pay the costs or is unwilling to do so, and these costs are not covered by insurance or some other form of financial assurance, the burden for cleaning up a site will fall on the state. In addition, Michigan program officials told us that the state’s causation standard further exacerbates the funding problem for abandoned tanks because it requires that the state prove that the present owner/operator is responsible for a site's contamination before it can be held responsible for cleanup. Proving responsibility becomes difficult in cases where releases have occurred at some point in the past and ownership of the property has changed. If responsibility cannot be established, the state must then fund any cleanup of the site.
In addition, state officials said that underground storage tank owners/operators acquiring properties after March 6, 1996, can limit their liability for pre-existing contamination by performing a baseline environmental assessment of the property—any contamination found at that point becomes the responsibility of the owner/operator who caused it or, if a responsible party cannot be identified, of the state. According to program officials, Michigan now has a backlog of 9,000 confirmed releases from leaking underground storage tanks, an estimated 4,200 of which are at abandoned sites. State program officials estimate that it will require about $1.7 billion in public funds to remediate these 4,200 releases alone. However, according to these officials, resources available from all state sources are not adequate to remediate these releases.

North Carolina’s commercial trust fund can be used to assist owners/operators with cleanup costs and to assist landowners in cleaning up abandoned sites where the tank owner/operator cannot be located or is unwilling to perform the cleanup. In recent months, according to program officials, claims against the fund have exceeded revenues, causing timeframes for paying reimbursements to stretch out over a year. As a result, the state is now prioritizing sites based on relative risk and directing work only to emergency releases and those leaks that pose the highest risks that can be funded with available resources.

While neither California nor Pennsylvania is experiencing significant problems funding cleanups of leaking tank sites, officials in both states said that they could use more federal funding for leak prevention initiatives and welcome the flexibility to use federal trust funds for that purpose, as provided by the Energy Policy Act of 2005. In an ongoing review, we are examining the scope and magnitude of states’ workload and funding needs for cleaning up contamination from leaking underground tanks.
Specifically, for each of the 50 states, the District of Columbia, and 5 U.S. Territories, we are examining (1) how much funding is currently available for cleaning up contamination from leaking tanks, (2) the extent to which tank cleanup funds have been used for purposes other than cleanups, if at all, and (3) what future revenues will be available to clean up contamination from leaking tanks. States become aware of leaking underground storage tanks through a variety of methods, including owner/operator reports, complaints by local residents, incidental discovery during land redevelopment or removal of tanks for upgrading or replacement, and compliance inspections. Regular and frequent tank inspections also can detect new leaks—and potentially prevent future ones—before they can lead to serious environmental or health damage, and lessen or avoid the need for costly cleanups. Once contamination from leaking tanks is detected and confirmed, the 5 states we contacted generally use risk-based systems to prioritize sites for cleanup according to the immediate threat they pose. Whether funded by the tank owners/operators, state indemnification or other funds, or other means, states generally direct and oversee site remediation. However, in circumstances where a site presents an imminent threat, has no viable responsible party, does not qualify for funding under a state plan, or for which the magnitude of the cost and cleanup work is beyond state resources, the state may ask EPA to assume oversight responsibility. Tank owners/operators are primarily responsible for identifying, confirming, and reporting any leaks that occur in their underground storage tanks and dispensing systems. EPA and the states have established a number of requirements that tank owners/operators must follow to ensure and facilitate the early detection of possible leaks. In this regard, in 1988, EPA issued regulations governing leak detection, among other things. 
Under these requirements, tank owners/operators must notify the designated state or local authority when they discover a release or when leak detection equipment indicates that a leak may have occurred. This notification must generally occur within 24 hours. Tanks must generally be monitored for leaks at least once every 30 days. Despite these requirements, leaks can remain undetected and/or unreported. According to state officials, owners/operators sometimes do not conduct proper inventory checks or leak detection procedures and may intentionally disconnect leak detection equipment. Also, tank tightness tests are imprecise and tanks can lose small amounts of pressure or vacuum during the test and still pass. Such small pressure leaks can result in large releases of the tank’s contents over time. In some cases, tightness tests have failed to detect significant leaks altogether. For example, during the investigation of the Tranguch Tire Service site in Pennsylvania, the state Department of Environmental Protection requested tank tightness test results for that facility as well as 3 nearby tank operating facilities. Even though test results showed that 3 of these facilities had passed their tests, 2 of them were ultimately found to have leaking tanks, including all 6 tanks at the Tranguch facility (see app. I). In addition to problems in detecting leaks, some owners/operators fail to report suspected or actual leaks once they are discovered. For example, the tank owner/operator of the fourth facility in the Tranguch investigation did not provide tightness test results as requested but admitted that a leak had occurred at the site several months earlier that he had not reported. While owners/operators identify many leaks through established testing and monitoring procedures, EPA and officials of the 5 states we contacted told us that many leaks are discovered only when tanks are removed for replacement or closure.
When tanks are replaced or facilities closed, in some states—such as California, Maryland, and Pennsylvania—a state-certified or state-licensed environmental consultant or contractor removes the tanks, sometimes with state or local agency oversight. Other states, such as Michigan and North Carolina, do not require the contractor to be certified. In Michigan, however, any person who removes or installs a tank must have $1 million in pollution liability insurance, according to state officials. As part of this process, soil samples generally are taken from the excavation and tested to determine whether contamination is present. However, leaks are often readily apparent because of the presence of liquid product (gasoline or diesel fuel) and/or strong fumes; state or local environmental or health agencies may discover leaking tanks when investigating homeowner complaints about such odors in their residences or gasoline contamination in their well water. Frequently, unknown and abandoned tanks are discovered when land is being excavated during property redevelopment. In these cases, states generally follow the same process of sampling and testing described above to assess contamination at the site. However, if contamination is found, the responsibility for cleaning up these sites differs from state to state. For example, in Pennsylvania, the new owner of the contaminated property would be responsible for cleaning it up, according to state officials. However, Michigan state officials told us that Michigan law limits the cleanup responsibility to those who actually caused the contamination. Therefore, the state would have to pay for the cleanup unless it could identify the party or parties who caused the contamination, which can be difficult. In general, cleanup costs for abandoned tanks where no owners or operators can be found become the state’s responsibility.
In addition to other methods for discovering leaking tanks, state or local environmental agencies may detect leaking tanks or indications of possible leaks while inspecting facilities for compliance with regulatory requirements. EPA recommended that states conduct tank inspections at least once every 3 years. However, of the 5 states we contacted, as of mid-2005, only 2 regularly inspected their tanks as frequently as EPA recommended, according to state officials. State officials told us that California requires annual inspections of all tanks; Maryland inspected its tanks every 3 years; Michigan generally inspected every 3 years, depending upon the location of the tanks and state inspection staffing levels; North Carolina inspected once every 4 or 5 years, due to funding limits; and Pennsylvania inspected at least once every 5 years. EPA reported that, as of September 2004, about 35 percent of the nation’s underground storage tanks were not in “significant operational compliance” with the applicable release detection and prevention requirements, indicating a need for greater emphasis on inspections. EPA and state officials agreed that regular inspections of underground storage tanks provide the opportunity to detect new leaks before serious environmental or health damage can occur and potentially prevent future leaks. Even if performed on a regular basis, infrequent inspections may allow violations of leak prevention and other tank requirements to go undetected long enough for leaks to occur and contamination to spread, potentially resulting in environmental and health consequences and the need for costly cleanups. While more frequent inspections potentially could enhance preventive efforts, state officials in 4 of the states we contacted told us that increasing the frequency of inspections would require additional resources.
Although EPA recommended inspections at least once every 3 years, EPA program officials recognized both the value of increased inspections and some states’ need for additional resources to conduct more frequent inspections, and supported providing more flexibility in the use of LUST trust funds for these purposes. In 2001, after reviewing EPA's and states' efforts to enforce UST Program regulations, we recommended that EPA negotiate with each state to reach a minimum frequency for physical inspections of all its tanks and present to the Congress an estimate of the total additional resources the agency and states would need to conduct the inspection, training, and enforcement actions necessary to ensure tank compliance with federal requirements. In addition, to strengthen EPA’s and the states’ ability to inspect tanks and enforce federal requirements, we suggested that the Congress consider (1) authorizing EPA to establish a federal requirement for the physical inspections of all tanks on a periodic basis and (2) increasing the resources available to the UST Program, based on a consideration of EPA’s estimate of resource needs. We noted that one way to do this would be to increase the amount of funds the Congress provides from the trust fund and to authorize states to spend a limited portion of these amounts on inspection, training, and enforcement activities to detect and prevent leaks, as long as this did not interfere with tank cleanup progress. Generally consistent with our recommendations, the Energy Policy Act of 2005, among other things, generally requires inspections once every 3 years, increases amounts authorized to be appropriated from the fund, and authorizes these funds to be used for inspections, training, and other enforcement and prevention activities. 
The 5 states we contacted all use risk-based systems to prioritize leaking underground storage tank sites for cleanup according to the immediate threat they pose to human health, safety, and/or the environment.

California prioritizes cleanup sites based on risk, with the highest risk sites remediated first. California uses many of the same procedures employed under the American Society for Testing and Materials’ (ASTM) risk-based corrective action process. This process uses 3 tiers, with accompanying tables, to determine priority rankings. The highest priority is assigned to sites that pose a threat to human health and the next highest to those posing an environmental threat. Under this system, immediate threats are abated first and then sites with the likelihood of future impact are addressed.

Maryland uses a risk-based determination to prioritize both abandoned and nonabandoned leaking tank sites. For example, if contaminated well water is the primary threat involved, well samples are drawn and tested and the levels of the various compounds found are compared to EPA safe drinking water standards. Abandoned sites whose cleanup will have to be paid for by the state are remediated if they pose an immediate threat to public health. The cleanup of nonabandoned sites is paid for by the tank owners and operators and begins immediately regardless of threat level.

Michigan uses a modified ASTM four-tier classification system to prioritize sites according to their threat. The classification system ranges from class 1—an immediate threat to the public or environment—to class 4—no demonstrable long-term threat. Michigan also uses a risk-based assessment and corrective action process, based on the ASTM process, which allows contamination to remain on site as long as it is possible to demonstrate that human health and the environment are adequately protected.

North Carolina prioritizes leaking tank sites according to three levels of risk: high, intermediate, and low.
High-risk sites are those that pose an immediate threat to human health and the environment because, for example, a leak presents an explosion hazard from petroleum vapors or a release is within 1,000 feet of a drinking water well. Intermediate-risk sites include those that contaminate or potentially could contaminate surface water, a wellhead protection area, or an area that recharges drinking water aquifers, or have groundwater contamination levels high enough that natural attenuation may be impeded. Low-risk sites involve releases that do not fall into the other 2 categories or that pose no significant risk to human health or the environment. The state is now addressing only the highest-risk sites and emergency releases, with the goal of moving them to the intermediate-risk level. North Carolina also uses a risk-based assessment and corrective action process wherein more contamination can remain on site as long as adequate protection of human health and the environment can be demonstrated. For instance, the state groundwater standard for benzene is 1 part per billion, but cleanup to 5,000 parts per billion may be allowed if it can be shown that the remaining pollution poses no threat to human health and the environment. The Pennsylvania Department of Environmental Protection uses a modified ASTM system that classifies abandoned tank sites based on 4 priority levels. Priority 1 sites are those that pose an immediate threat to human health, safety, or sensitive environmental receptors; Priority 2 sites pose short-term (up to 2 years) threats; Priority 3 sites pose long-term (greater than 2 years) threats; and Priority 4 sites present no such demonstrable long-term threats. Pennsylvania does not generally prioritize responsible party lead cleanup sites addressed under the state’s indemnification fund—all eligible sites receive funding for cleanup and are required to follow the corrective action regulations, according to state officials. 
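The tiered schemes the states describe can be made concrete with a minimal sketch, loosely following North Carolina's high/intermediate/low scheme as summarized above. The function name, inputs, and decision order are illustrative assumptions, not an official state tool; actual state determinations weigh many more factors.

```python
# Illustrative sketch of risk-based site prioritization, modeled loosely on
# North Carolina's three-level scheme described in the report. All names and
# thresholds here are assumptions drawn only from the report text.

def classify_site(explosion_hazard: bool,
                  feet_to_drinking_well: float,
                  threatens_surface_or_recharge: bool) -> str:
    """Assign a leaking-tank site to a risk tier.

    high:         immediate threat (explosion hazard, or release within
                  1,000 feet of a drinking water well)
    intermediate: actual or potential contamination of surface water, a
                  wellhead protection area, or a recharge area
    low:          everything else
    """
    if explosion_hazard or feet_to_drinking_well < 1000:
        return "high"
    if threatens_surface_or_recharge:
        return "intermediate"
    return "low"

# Sites are then addressed highest risk first.
sites = [
    ("Site A", classify_site(False, 850, False)),   # near a drinking well
    ("Site B", classify_site(False, 2500, True)),   # recharge-area threat
    ("Site C", classify_site(False, 2500, False)),  # no elevated threat
]
order = {"high": 0, "intermediate": 1, "low": 2}
queue = sorted(sites, key=lambda s: order[s[1]])
print(queue)
```

The same shape accommodates the other states' variants (Michigan's and Pennsylvania's 4-tier ASTM-based systems) by adding tiers and predicates.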
Under RCRA regulations, tank owners/operators must notify the designated state or local authority when they discover a release or when leak detection equipment indicates that a release may have occurred. Owners and operators must then undertake appropriate cleanup action in accordance with the regulations. Environmental consultants, in collaboration with the state or local environmental agency, usually perform the site assessment, determine the technology and approach needed to contain and remediate the contamination, and implement and complete site cleanup. The method of cleanup selected is tailored to the specific characteristics of the site, including the probable pathways the contamination will follow to threaten the soil, groundwater, and/or the health of surrounding residents. Depending on whether the contamination has reached the local groundwater, treatment methods can range from the removal and on-site treatment of contaminated soil to expensive on-site pump-and-treat and vapor extraction systems, activated carbon filtration systems for municipal water systems, and vapor extraction and water treatment units for nearby impacted or threatened homes and businesses, among others. EPA seldom becomes directly involved in this process unless the site is located at federal facilities or on Indian reservations. However, states may ask EPA to lead or support the cleanup at sites that present an imminent threat, have no viable responsible party, do not qualify for funding under a state plan, or for which the magnitude of the cost and cleanup work is beyond state resources. For example, state officials asked EPA to assume the lead on cleaning up the Tranguch Tire Service site in Pennsylvania after they determined that site remediation costs would far exceed the state’s resources (see app. I). This leak involved the release of an estimated 25,000 to 50,000 gallons of gasoline. 
The leaking fuel reached the aquifer and the contamination plume migrated off-site into the sewer system of the surrounding residential neighborhood. According to an EPA Region 3 official, gasoline and gasoline fumes seeped into the basements of 20 to 30 homes through the sewer system as well as into a nearby creek. After conducting an environmental investigation of the area, the Pennsylvania Department of Environmental Resources required the owner/operator of Tranguch to begin site characterization and cleanup work. However, in 1995, the owner of the Tranguch facility declared bankruptcy and the state assumed responsibility for characterizing the site and mitigating vapors in area homes. By March 1996, the state had spent $2 million on the site and, given the potential magnitude of the cleanup, lacked the funds that would be needed to complete it. According to state officials, because of the emergency nature of the situation and funding problems, the state asked EPA to take over as the lead agency for remediating the site, which EPA did in late August 1996. To date, in addition to the amounts paid by the owner/operator and the state, the Tranguch site remediation has required over $25 million in federal funding, primarily from the Oil Spill Liability Trust Fund. While the data that states report to EPA on underground storage tanks provides the agency with information it can use to determine the overall trends and status of the UST Program, the lack of specific and complete data on known abandoned tanks limits EPA’s program oversight and its ability to efficiently and effectively allocate LUST Trust Fund resources. Without such information, neither EPA nor the Congress can readily determine the number of abandoned tanks requiring cleanup nationwide, whether this number is growing, whether states are initiating and completing or deferring work, and what the potential impacts on state resources and, ultimately, the LUST Trust Fund may be. 
Furthermore, although one of the primary purposes of the fund is to help states clean up releases from abandoned tanks, EPA currently allocates resources to the states without taking into account how many abandoned tanks each state has, how many are leaking, or how many are being cleaned up. All 5 of the states we contacted provide data to EPA on their abandoned tanks aggregated with other tank data and separately identify and report some limited information on abandoned tanks to EPA regional offices. Asking the states to separately identify information on all known abandoned tanks in the reports they currently provide to EPA should not pose an additional burden. In any case, we believe that requiring states to specifically report information on all known abandoned tanks would provide EPA useful data for overseeing the UST Program and more efficiently and effectively allocating LUST Trust Fund resources. While the extent to which this situation exists nationwide is unknown, officials in 2 of the 5 states told us that their states’ present resources are inadequate to cover cleanup efforts. At the same time, the LUST Trust Fund has continued to grow through a continuing inflow of fuel tax revenue and accrued interest—reaching a balance of about $2.2 billion at the end of 2004—with only about $70 million to $76 million (less than 4 percent of the total fund balance) allocated annually to support state programs. Furthermore, the EPA and state officials we contacted believed that greater emphasis on leak prevention activities, such as tank inspections, is necessary to detect compliance problems that can lead to future leaks and uncover physical evidence of leaking tanks so that states can respond more quickly, if warranted, to prevent or limit the potential health and environmental impacts on nearby communities. Moreover, the cost of taking measures to prevent a release is generally much less than the cost of cleaning up a release after it occurs. 
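The fund figures cited above can be checked with simple arithmetic: annual allocations of $70 million to $76 million against a $2.2 billion balance come to roughly 3.2 to 3.5 percent, consistent with the "less than 4 percent" figure in the report.

```python
# Quick arithmetic check of the trust fund figures cited in the report.
# The dollar amounts are the report's; this sketch is illustrative only.
balance = 2.2e9                        # LUST Trust Fund balance, end of 2004
annual_low, annual_high = 70e6, 76e6   # annual allocations to state programs

share_low = annual_low / balance
share_high = annual_high / balance

# Both ends of the range fall under the 4 percent the report cites.
print(f"annual allocation is {share_low:.1%} to {share_high:.1%} of the fund balance")
```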
The underground storage tank provisions of the Energy Policy Act of 2005 may lead to increased resources for cleanups of leaking tanks and stronger enforcement efforts that could prevent leaks and lead to the early detection of existing leaks, thereby reducing the need for costly cleanups. To improve EPA's oversight of the leaking underground storage tank program and its ability to determine how to most efficiently and effectively allocate LUST Trust Fund dollars to the states, we recommend that the Administrator of EPA require that states separately identify, in their reports to the agency, information on the number and cleanup status of all known abandoned underground storage tanks within their boundaries. We provided a draft of this report to EPA and the states of California, Maryland, Michigan, North Carolina, and Pennsylvania for their review and comment. In commenting on the draft report, EPA stated that, in general, the agency thinks that the report’s findings and conclusions have merit, and that it will assess the feasibility of implementing our recommendation. EPA agrees that the UST Program could benefit from more specific information about abandoned tank sites. However, EPA notes that the process that states must conduct to establish that a tank is abandoned—that its owner is unknown or unwilling or unable to pay for leak cleanups—may involve ownership searches to identify the potentially responsible party and an assessment of that party’s financial ability and willingness to pay for cleanup. With this in mind, EPA is concerned about placing an undue burden on states by requiring them to provide specific data on abandoned tanks. Therefore, EPA stated that, in consultation with the states, the agency will consider how best to incorporate our recommendation. 
We share EPA’s concern about placing an additional burden on states by asking them to determine whether a given tank is abandoned by undertaking a potentially labor-intensive and costly effort to establish who owns the tank and whether this owner is financially able and/or willing to pay for cleaning up a leak. However, we are not suggesting that states should make an effort to identify unknown abandoned tanks; rather we are recommending that they report to EPA separately the information they currently have on tanks that they know are abandoned, and, as new abandoned tanks are identified in the normal course of program operations, report this information to EPA as well. This should place no additional burden on the states. States currently provide EPA data on known abandoned tanks aggregated with all other tanks in the state. We are simply recommending that the states break out the data on their abandoned tanks from total tank data. Because a limited portion of these data—information on abandoned tanks being cleaned up using LUST Trust Fund resources—is currently broken out and provided to EPA’s regions, the UST Program could easily utilize these existing data and EPA would only have to require states to break out the remaining data on the number and cleanup status of their known abandoned sites. Having more complete data on abandoned tanks would allow EPA to better determine the potential scope of the problem and the progress that states are making towards addressing it. It would also permit EPA to take this information specifically into account in allocating LUST Trust Fund resources. Given EPA’s concerns, we have clarified our recommendation by explicitly stating that EPA should require states to provide separate data on the number and cleanup status of all their known abandoned tanks. EPA also provided technical comments, which we have incorporated into this report as appropriate. 
Appendix III contains the full text of the agency’s comments in a letter dated November 2, 2005. Officials from the state of Maryland said that they had no comments on the draft report. California, Michigan, North Carolina, and Pennsylvania officials provided a number of technical comments, which have been incorporated into the report where appropriate. In addition, Pennsylvania officials expressed concerns similar to those raised by EPA relating to the additional burden on states of identifying unknown abandoned tanks. As noted, we have clarified our recommendation to address these concerns. We will send copies of this report to the appropriate congressional committees and to the Administrator of EPA. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions on this report, please contact me at (202) 512-3841 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who contributed to this report are listed in appendix IV. The following are summaries of the major events surrounding the discovery and cleanup of contamination from leaking underground storage tanks at 5 sites: (1) Coca-Cola Enterprises in Yuba County, California; (2) Henry Fruhling Food Store in Harford County, Maryland; (3) Bob’s Marathon in Grand Ledge, Michigan; (4) R.C. Anderson Trust in Nash County, North Carolina; and (5) Tranguch Tire Service, Incorporated, in Luzerne County, Pennsylvania. 
The summaries include a chronology of significant site occurrences as well as additional information on the amount of leaked fuel, the contaminants involved, the impacts of the leak on the surrounding environment, the costs of remediating the site, the extent, if any, of EPA involvement in the cleanup, communication between state agencies and the affected public, and litigation relating to the site. The Coca-Cola distribution warehouse on this site was built around 1970, and was originally used by another business to build mobile home units. The 5,000 gallon underground storage tank system from which the leak at the site occurred was reportedly installed on the east side of the building when the facility was first constructed. The property and surrounding area are zoned for commercial use. Yuba County Municipal Airport is located south of the site and a public drinking water supply well operated by the City of Olivehurst is located about 850 feet east of where the tank system was formerly located. June 1989 – August 1989: Coca-Cola Enterprises had the 5,000 gallon unleaded gasoline underground storage tank, piping, and dispensing system excavated and removed. The tank and piping appeared to be in good condition, but the soil exhibited a slight petroleum odor. According to the case file, soil samples were collected from beneath the system during removal, as required by California law. Field observations and analysis of soil samples determined that gasoline was present in the soil in concentrations that required remediation. The Yuba County Office of Emergency Services filed a report of an unauthorized leak. December 1989 – May 1990: The site was assessed for contamination, which included a soil gas survey and the drilling and installation of borings, groundwater monitoring wells, and vapor extraction test wells. Groundwater samples were collected as monitoring wells were installed and an analysis indicated that the groundwater at the site was contaminated. 
A municipal drinking water supply well for the City of Olivehurst was discovered 850 feet east of the tank system’s former location. However, analyses of water samples from that well did not indicate any contamination. While the contaminant plume extended 120 feet east of the release site, it was confined within the south and east boundaries of the property. May 1990 – December 1990: The Yuba County Air Pollution Control Office authorized the on-site aeration of contaminated soils that had accumulated during drilling activities. The soil aeration began in June 1990 and was successfully completed in September 1990. A quarterly groundwater monitoring and sampling program also began in June 1990. In August, a vapor extraction pilot test was conducted at the site. Petroleum was found in a monitoring well at the site in September and about 7.5 gallons of gasoline were removed from the well from September through November. By December, approximately 0.08 feet of gasoline remained in the well. February 1991 – August 1991: The environmental consultant for Coca-Cola Enterprises submitted its contamination assessment report, which assessed the extent of vertical and lateral petroleum contamination at the site during the underground storage tank removal operations. The consultant also completed a remedial action plan that presented a conceptual system design for remediating hydrocarbon contamination in soil and groundwater at the site. The plan proposed an integrated remediation system incorporating (1) a groundwater pump and treat system with an air stripper system to remove volatile compounds from groundwater, (2) a vapor extraction system to remove vapors from contaminated soils in the unsaturated zone, and (3) a thermal oxidizer to burn off the vapors. The consultant estimated that remediating the site would take 3 years, with one additional year for monitoring and closure. 
Cleanup oversight was transferred from the Yuba County Office of Emergency Services to the Regional Board in February 1991, and in March 1991, the Board approved the contamination assessment report. April 1993 – September 1998: The recommended remediation system began operation in April 1993. In August 1996, an oxygen release compound was installed in three perimeter vapor extraction wells to increase the dissolved oxygen concentrations in groundwater to enhance the natural bio-attenuation of petroleum hydrocarbons. The pump and treat system was operated intermittently until mid-November 1997, when the system was shut down to prepare for site closure following approval from the Central Valley Regional Water Quality Control Board. Groundwater monitoring and sampling continued on a quarterly basis. In September 1998, the Board approved initiation of closure activities. October 1998 – Early 2000: Monitoring indicated that the contamination plume might have migrated southward in late 1999 and early 2000 and that further remediation and changes in the sampling and analysis program were warranted. The remediation system was restarted in December 1999, with upgrades to increase flow rates and optimize efficiency that were completed in January 2000. The revised sampling and analysis program was initiated shortly thereafter. A survey conducted by the site’s environmental contractor to identify water wells within 2,000 feet of the leak site identified 2 municipal wells, with the nearest approximately 850 feet east of the former tank location. However, sampling and analysis conducted by the Olivehurst Public Utility District revealed that the closer of the 2 wells had not been affected by the contamination plume. April 2002 – March 2004: Because monitoring and sampling showed no hydrocarbon concentrations in the groundwater monitoring wells, the system was again shut down in April 2002. 
Quarterly groundwater monitoring after the system was shut down showed that the contamination plume had stabilized and, in July 2003, the site’s owner requested the Central Valley Regional Water Quality Control Board’s approval to close the monitoring wells. The wells were abandoned by late December 2003, and the Board closed the remediation case in March 2004. Status as of August 2005: Cleanup was completed. Contaminants and compounds of concern: Gasoline (total petroleum hydrocarbons as gasoline—TPHG), benzene, toluene, ethylbenzene, and xylenes (BTEX). Size of leak: Unknown. Impacts of contamination: Soil and groundwater contamination occurred at the site but was contained within the property. Remediation cost: The California underground storage tank fund spent $1,202,745 to reimburse site owners/operators for site remediation costs. Additional amounts that may have been paid by Coca-Cola Enterprises that were not reimbursed are unknown. U.S. Environmental Protection Agency involvement: None. Communication between responsible agencies and the public: No evidence of public meetings appears in the case files. Litigation: Case files show no evidence that any lawsuits were filed relating to this site. The Henry Fruhling Food Store was a single family dwelling with an attached small grocery store. The store had two 1,000-gallon underground storage tanks and distribution systems for gasoline, which were installed around 1966, and a 500-gallon underground tank and distribution system for kerosene. Sometime prior to 1966, 2 similar underground gasoline tanks and distribution systems were located on the site but were removed by the previous owner. 1970: A nearby resident complained to the owners of the store about an odor or taste of gasoline in his well water. The tank maintenance company performed a pressure test on the tanks and distribution systems but found no leaks. 
June 1980 - September 1980: Water samples taken by the Harford County Health Department in response to another nearby resident’s complaints about gasoline in his well water indicated petroleum contamination. That resident was warned not to consume water from his well and the matter was referred to the Maryland Department of Natural Resources (the predecessor of the Maryland Department of Environment) for action. That resident also filed a complaint with the department concerning the presence of oil in his well, which was referred to the Environmental Health Administration and the Harford County Health Department for follow-up actions. October 1980: One of the store owners was badly burned—and later died—when gasoline fumes ignited in his basement. November 1980: Water samples obtained by the Harford County Health Department identified gasoline in the store owner’s well and the case was assigned to the Department of Natural Resources for enforcement action. January 1981 – February 1981: In mid-January, the Department of Natural Resources notified the company that had maintained the tanks and pumps since 1976 of their determination that a pollution violation had occurred. The Department ordered the company to (1) stop discharging petroleum products into state waters, (2) test the tightness of the tanks and supply systems, and (3) initiate actions to recover petroleum from groundwater at the site. In late January, tightness tests were performed on both tanks and supply systems and no leaks were found. In early February, the Department took auger probes at various locations throughout the Fruhling property and found explosive vapors in the soil in the vicinity of the tanks and pumps. 
In a letter dated February 27, the Department told the owners and the tank maintenance company that (1) they had achieved substantial compliance with their order and that no gasoline was then leaking into the groundwater, (2) based on information they had provided the Department, there was a “strong probability” that, at some time during the past several years, repairs had been made to the gasoline pump that may have eliminated a leak in the system, and (3) the results of the recent boring tests and a survey of the area indicated “a very low probability” that the gasoline in the groundwater was coming from any source other than the system at Fruhling’s store. Furthermore, the letter stated that the Department’s only concern was the removal of any recoverable gasoline from the groundwater at the site. July 1981 – August 1981: A gasoline recovery and separator system was installed at the site and well-pumping operations began. The accumulated effluent from the separation process was sampled by the Harford County Health Department on a periodic basis and was spread back on the ground with the knowledge and approval of the County Health Department. This process continued through January 1988, at which time a Department of Environment official ordered the owner to stop the discharge. November 1981 – December 1981: The Department of Natural Resources notified both the store owner and the tank maintenance company that they had satisfactorily removed the gasoline from the well and complied with their January 1981 order. The Department also advised them that the remaining unrecoverable gasoline in the groundwater was a pollution problem that would be referred to the Harford County and state health departments for appropriate action. Because laboratory results of samples taken continued to show unacceptable levels of “aromatic hydrocarbons”, the County required the owner to continue well pumping until the residuals were reduced to an acceptable level. 
May 1983: A nearby resident raised concerns regarding contamination of his well. Water samples taken over approximately the preceding three-year period showed minimal contamination from petroleum products. October 1987 – December 1987: Testing indicated that gasoline contamination was migrating into a new well at the site. January 1988: In late January, an official from the Maryland Department of the Environment (formerly the Department of Natural Resources) investigated a complaint that effluent runoff from the site was flowing onto a neighbor’s property. The investigating official informed the store owner that spreading effluent on the ground was no longer allowed because this process allowed pollution to migrate back into the soil and groundwater. The owner was issued a site complaint and was directed to shut down well pumping operations until further notice. At this point, pumping and discharge of effluent onto the ground had gone on for over 6 years; the gasoline separation unit had been removed, and water had been pumped directly onto the owner’s yard for the last 2 years. June 1988: The state took over cleanup operations at the site after the LUST Trust Fund was established in 1988. Well-pumping from the old well at the store site continued intermittently until mid-1988. October 1988: The Maryland Department of the Environment oversaw the removal of the underground gasoline and kerosene tanks and distribution systems, which was funded from the LUST Division account. Inspection revealed no gasoline storage tank perforations, but soil beneath one gasoline storage tank showed explosive readings. According to a state official who visited the site in early December 1987, the store owner stated that the tank maintenance company had pumped the tanks dry prior to going bankrupt, but she did not recall the exact date. 
January 1989 – April 1989: The Maryland Department of the Environment contracted with an environmental consulting company to perform 2 soil gas surveys at the site to delineate the extent of subsurface gasoline vapor contamination. Analysis of the samples revealed the presence of elevated gasoline vapor levels at the site, with the highest concentrations detected near the former pump island. Late 1989 to early 1990: The state installed charcoal filtration units on the store owner’s water system and that of a nearby neighbor. March 1993 – November 1993: The Maryland Department of the Environment retained a consultant to review ongoing remediation activities at the site and determine the adequacy of activities to control and abate contamination. The consultant concluded that the contamination plume appeared to be getting larger and that vapor recovery efforts were inadequate. In November 1993, the Department had the consultant test a combined air sparge and soil vapor extraction system. Test results indicated that this system could significantly enhance the existing recovery system. March 1997: The Maryland Department of the Environment continued to operate the pump-and-treat system at the Fruhling residence. The system had treated over 2 million gallons of water. The former Fruhling domestic well, shallow monitoring wells, and deep monitoring wells all continued to show elevated levels of dissolved gasoline constituents. As a result, the Department periodically operated a soil venting system to keep the wells within discharge guidelines. Periodic sampling of the surrounding residences identified no additional contaminated domestic wells. 1999: The groundwater recovery system at the site was shut down. September 2001: The Maryland Department of the Environment maintained a granular activated carbon treatment system at two residences and performed quarterly sampling at six residences. 
Petroleum contamination was still present at the Fruhling property and MTBE levels were detected at the residence across the street from the Fruhling property. Dissolved petroleum contamination levels had decreased at all sampling locations. Status as of August 2005: The site recovery system remained on site, but was turned off. The site was being monitored in operation and maintenance status, with sampling performed every three months. No significant contamination had been detected in the residential wells from which samples had been taken since July 2001. However, well monitoring still showed some signs of low-level contamination. Contaminants and compounds of concern: Benzene, toluene, ethylbenzene, total xylenes, and MTBE. Size of leak: Unknown; very little liquid product (gasoline) was recovered. Impacts of contamination: One death occurred from a leak-related explosion and fire. Groundwater and residential drinking water wells were contaminated with petroleum products. Real estate development and sales were impeded. Residents were granted a reduction in property taxes. Remediation cost: Maryland spent about $708,595 in state funds to remediate the site. Prior amounts that might have been spent by the company that installed the tanks and the store owner are not included. U.S. Environmental Protection Agency involvement: None. Communication between responsible agencies and the public: In December 1989, officials of the Maryland Department of the Environment met with affected parties to discuss contamination at the Fruhling site. In May 1990, the Department held a follow-up public meeting to present the results of its initial investigation of the groundwater contamination. Litigation: In late July 1981, one of the affected residents filed a lawsuit against the company that installed the tanks on the site, the tank maintenance company, and the store owner. 
Consequently, the tank maintenance company’s insurer retained a consulting company to investigate the alleged contamination of groundwater by the producer’s petroleum products. In May 1982, the consultant for the insurance company concluded that the products of the oil company represented by both the tank installer and the tank maintenance company were not responsible for the groundwater contamination in the affected resident’s well. In March 1988, the lawsuit was settled out of court for $25,000, with the defendants expressly denying liability. Prior to the settlement, the insurance company for the store owner settled with the affected resident for $7,500. In 1990, the owner’s insurance company denied responsibility for any claim under the owner’s policy. Nevertheless, in early 1991, the state of Maryland sued the owner to recover cleanup costs. In October 1990, a neighbor in the area of the site sued various parties involved in the purchase of his property, including the real estate company, the real estate agent, and the former owners of his house for not disclosing the groundwater contamination at the time of purchase. Bob’s Marathon is a gasoline service station and automobile repair shop bordered by mixed-use commercial and residential properties in the city of Grand Ledge, Michigan. Two reported releases occurred at the facility and the released gasoline migrated toward a municipal water supply well located directly down-gradient and very close (approximately 800 feet) to the site. MTBE, benzene, and other gasoline components from this spill potentially impacted the city’s water supply for about 8,300 people. April 1986: Bob’s Marathon registered all three of its underground storage tanks with the state. December 1991 – February 1992: Two of the three tanks failed their tightness tests. One of the owners reported to the Michigan State Police Fire Marshal Division that she had discovered a leak during a routine tank gauging inventory check. 
The leak involved the loss of approximately 4,500 gallons of gasoline from a 6,000 gallon tank. An environmental consultant retained by the owners sent the Michigan Department of Natural Resources (MDNR) the required 20-day report of initial abatement measures and installed eleven monitoring wells at the site in December. The monitoring wells were used to determine the directional flow of groundwater at the site and intercept the gasoline plume. In January, the consultant installed a product skimming system in six monitoring wells and a passive recovery system in three wells that reportedly contained product sheen on the water table, and installed additional monitoring wells. Also in January, the consultant submitted a site investigation work plan, site characterization report, free product removal report, and interim corrective action plan to MDNR. MDNR concluded that the interim corrective action plan was unacceptable and provided the consultant with a list of concerns in a deficiency letter. The environmental consultant estimated that, by late February, 280,300 gallons of contaminated groundwater and 1,200 gallons of gasoline were removed by the skimming system. March 1992: Because the consultant’s response to the MDNR deficiency letter was not adequate, MDNR did not approve the work plan. The consultant then recommended that a second consulting firm with a greater capacity to more cost-effectively manage long-term projects take over the work at the site. The new consultant submitted an interim corrective action plan and a site investigation work plan to MDNR for approval. MDNR approved the second consultant’s interim corrective action plan. In an interoffice communication, an MDNR official recommended that the first consultant be denied payment for work conducted at the site and that MDNR consider the consultant a potentially responsible party because of its failure to take timely action to abate the situation at the site.
Three new underground storage tanks were installed on the west side of the service station building—two 6,000 gallon tanks and one 15,000 gallon tank. Approximately 400 cubic yards of soil were removed and disposed of during excavation for these tanks. April 1992: MDNR tentatively approved the site investigation work plan. The groundwater remediation system began operating. June 1992: An MDNR official stated that the department approved an interim groundwater treatment system at the site because of the close proximity of municipal wells and that this action was necessary because of the first consultant’s failure to take timely action to abate the spread of the contamination. January 1993: The company operating the groundwater treatment system decided to no longer operate and maintain it because of uncertainty regarding reimbursement from the Michigan Underground Storage Tank Financial Assurance program for future work. February 1993: The site owner replaced the second consultant with a third after a dispute over the need to purchase the remediation equipment and other issues. This third environmental consultant made modifications to the existing groundwater treatment system. May 1993: MDNR informed the owners of Bob’s Marathon that they had failed to define the full nature and extent of the groundwater contamination. While the groundwater plume was advancing toward the Grand Ledge municipal well field, the leading edge of the plume had not yet been defined. Furthermore, MDNR said that the groundwater treatment system was ineffective and the contamination plume continued to migrate, impacting additional groundwater. As a result, MDNR requested that the owners provide all information on the releases and investigations of the releases, including all soil and groundwater response actions and investigations. MDNR conditionally approved the third consultant’s amended site investigation work plan.
Approximately $850,000 of state financial assurance program funding had been spent at the site. June 1993: An oil/water separator was added to the groundwater treatment system. July 1993 – August 1993: The site’s third environmental consultant informed MDNR that, due to delays in payment from the state financial assurance program, it was unable to continue site investigation activities at the site. In light of this development, MDNR reminded the site owners of their obligation to conduct all appropriate corrective actions to remedy the environmental problems caused by the release of contamination at the site, including eliminating any impacts to the Grand Ledge municipal well field. The owners’ attorney informed MDNR that the owners were unable to proceed with site investigation and remediation activities without the assurance of funding. In response, MDNR said that if all current claims were approved and paid by the state financial assurance program, the one million dollar limit for reimbursement under the program would have been reached at the site and the owners would be responsible for financing the remaining corrective actions, including a final remedy. The Michigan Department of Public Health notified the city of Grand Ledge of MTBE contamination of a municipal water supply well. October 1993: MDNR notified the owners that the site would be listed in the “Proposed List of Michigan Sites of Contamination” for fiscal year 1995. November 1993: A second leak of 400 to 800 gallons of gasoline was discovered and reported to the Michigan State Police. December 1993: According to the third consultant’s initial abatement report, the second leak was discovered by the owner/operator when he noticed a strong petroleum odor in the site treatment building. When he opened the cover of equipment used as part of the cleanup system, he observed approximately one foot of gasoline.
The owner then inspected the underground storage tank system and found a mixture of water and gasoline in the area of one tank, due to a pin-hole leak in a gasoline supply line. The leak detection equipment installed on the system—which should have detected the leak, sounded an alarm, and automatically shut off the system—was not functioning. The tank system was taken out of service until the perforated line could be replaced. January 1994: The groundwater treatment building was damaged by fire. According to the fourth consultant’s investigation report, the owner had discovered gasoline in the equipment and a fire started, damaging the equipment, before he could remove it. (As of June 2005, the equipment had not been restored to service.) March 1994: MDNR advised the owners of their responsibility to repair the fire-damaged system and conduct hydrogeological studies related to both leaks at the site. The owners responded that they could not continue the remediation work required to clean up the site and they terminated the services of the consultant at the site. MDNR assumed control of the investigation and cleanup of the leak at the site. April 1994: MDNR obtained emergency funds to complete the groundwater investigation and develop a corrective action plan. MDNR hired a consultant to update and collect additional information for the site with the overall goal of protecting the Grand Ledge municipal water supply from contamination originating from the site. December 1994: The city of Grand Ledge expressed concern that levels of benzene in a municipal well continued to increase, indicating continuing migration of contamination. The city asked MDNR for monitoring well test results and a report on the current status of the remediation by the end of the month.
February 1995: The mayor of Grand Ledge asked a state representative to intercede with state agencies to facilitate the issuance of all permits and release of state funds needed to allow the design and construction of soil vapor extraction and groundwater extraction and treatment systems to proceed immediately. July 1995: Grand Ledge allowed access to the city well field to construct and maintain a water treatment system for the contaminated municipal well and a groundwater blocking well. December 1995: The system to treat well water contaminated with volatile organic compounds began operating and a barrier well was installed to prevent the plume from continuing to reach the municipal well. Status as of August 2005: Treatment facilities were operating and cleanup was ongoing. Michigan Department of Environmental Quality officials told us that the air sparge and soil vapor extraction systems were recently turned off to conduct performance monitoring but carbon treatment of the impacted municipal well was ongoing. According to these officials, they expected to complete site cleanup between 2007 and 2010. Contaminants and compounds of concern: Benzene, toluene, ethyl benzene, xylenes, and MTBE. Size of leak: The first release was approximately 4,500 gallons of gasoline; a second release was 400 to 800 gallons of gasoline. Impacts of contamination: The leak impacted the water supply for the city of Grand Ledge and the city had to provide potable water to about 8,300 residents. Remediation cost: As of about March 2005, approximately $2,150,000 had been spent to clean up contamination from the site. Approximately $950,000 of this amount came from the Michigan Underground Storage Tank Financial Assurance Fund to reimburse costs incurred by the owner prior to the state taking over site remediation.
According to Michigan Department of Environmental Quality officials, continuing efforts to clean up the site through 2007 to 2010 will involve additional costs of up to approximately $500,000. U.S. Environmental Protection Agency involvement: None. Communication between responsible agencies and the public: None identified. Litigation: A number of lawsuits have been filed relating to the leak at Bob’s Marathon, according to Michigan environmental officials. An apartment complex east of Bob’s Marathon filed a lawsuit against the facility’s owners. In addition, the owners are involved in litigation with the first two consultants and have filed a lawsuit against the facility’s gasoline supplier, which is, in turn, suing the manufacturer of the hose that caused the site’s second leak. The R.C. Anderson Trust site was owned by R.C. Anderson from 1949 to his death in 1984, when the property was passed on to his heirs and was managed as a trust by a bank. Three businesses were located on the site—a gasoline station (abandoned), a tractor dealership (subsequently a furniture store), and an automobile repair garage. Contamination was first reported to the North Carolina Department of Environment, Health, and Natural Resources (DEHNR) in July 1992, during removal and closure of the underground and above ground storage tanks and excavation of the soil around the tanks and pump island. Land uses in the vicinity are commercial, agricultural, and single family residential. July 1992 – September 1992: In July 1992, three underground gasoline storage tanks and one above ground diesel tank were removed. Evidence of a gasoline release from a 3,000 gallon tank was discovered and reported to DEHNR. An August 1992 closure report also concluded that there was “high potential” of petroleum contamination on-site. Accordingly, in September 1992, DEHNR notified the bank that was acting as trustee for the R.C. 
Anderson property that it was in violation of pollution control rules and regulations and must take action to comply with corrective action rules. December 1992: The environmental consulting company completed a Comprehensive Site Assessment/Corrective Action Plan to assess the surrounding conditions and risks to area populations from the remaining contamination at the site. This report was filed to satisfy the requirements of the North Carolina law pertaining to investigations for soil and water cleanup. May 1993: DEHNR reviewed the Comprehensive Site Assessment report, determined it to be inadequate, and required the bank to submit (1) a more complete report which adequately identified the full vertical and horizontal extent of the contamination plume(s) and (2) a corrective action plan. Both reports were to be submitted by July 15, 1993. October 1993: The environmental consultant resubmitted the Comprehensive Site Assessment report to DEHNR for review. The report identified several sources of pollution affecting soils and groundwater at the site. One was a 3,000 gallon underground storage tank that had two large rust holes. Another source was the fuel pump island where stained soil was found when it was removed. In addition, the soil below the 10,000 gallon above ground storage tank (which had held diesel oil) was stained at the fill area from apparent spills during filling operations. The above ground tank itself, however, showed no evidence of leaks. Last, used motor oil was drained onto the ground near the gasoline station building when it was being used as a truck repair center. The area surrounding the site was surveyed for water wells, public water supply intakes, and off-site monitoring wells for potential receptors and migration pathways. Sixteen private water supply wells and another 32 “suspected” water wells were observed within a 1,500-foot radius of the site.
There were no public water supply intakes identified within one-half mile of the site and no off-site monitoring wells were found within 1,000 feet of the site. November – December 1993: Excavation and treatment began in 1993. A smaller (530 gallon) underground gasoline storage tank was discovered during excavation of contaminated soil and removed. While the tank was located in an already contaminated area of the site and was rusted, it showed no sign of leaks, according to the consultant. A closure report on this tank dated the end of December was filed with the Raleigh Office of DEHNR. January – December 1994: A second round of soil excavation with on-site bioremediation procedures was performed in 1994. In early March, the environmental consultant completed a Corrective Action Plan that included the excavation and treatment of contaminated soils and an air sparging and pump and treat facility for groundwater remediation. Under the plan (1) soil treatment was to be completed by August 1994; (2) a groundwater treatment system was to be installed by January 1995, and operated for 5 to 15 years; and (3) system shut-down and project completion dates would be based upon monitoring test results and state approvals. In late March, DEHNR approved the Comprehensive Site Assessment. In July, the Raleigh Office of DEHNR issued a soil contaminant and treatment permit. During 1994, soil excavation and on-site bioremediation procedures were performed on approximately 6,400 tons of contaminated soil at the site. An estimated total of 11,792 tons of soil were excavated and treated on-site. January – December 1995: Following completion of the soil treatment, the environmental consultant monitored groundwater quality at the site and reported results on an approximate quarterly basis. The samples taken in October continued to show groundwater contamination. The contamination plume was estimated at approximately 230 feet by 210 feet and expected to migrate slowly to the northeast.
The reports continued to recommend the design and installation of a groundwater remediation system to remove the contamination present. January 1996 – January 2003: Groundwater monitoring continued on a semi-annual basis. In 1996, North Carolina enacted a law temporarily suspending remediation work for low-priority underground storage tank release sites. The R.C. Anderson Trust site was initially given a low-priority ranking and, therefore, remediation work at the site stopped. However, primarily because of the site’s threat to uncontaminated private domestic water supply wells, its ranking was changed to high priority in July 1997 and remediation activities resumed. The Containment and Treatment of Contaminated Soil permit originally issued in July 1994 by the Raleigh Office of DEHNR was renewed in 1998. A groundwater remediation system using pump and treat with air sparge technologies was installed and began operation in late May 2002. Groundwater at the site was sampled 10 times through January 2003. According to the consultant’s Groundwater Monitoring Report dated September 12, 2003, as of January 2003, benzene, lead, MTBE, and 1,2-dichloroethane were still present at on- and off-site monitoring wells in concentrations above North Carolina groundwater quality standards. However, according to DEHNR officials, benzene was not detected in any off-site monitoring well during that monitoring event and lead was detected in an off-site monitoring well but not above the North Carolina groundwater standards. Status as of August 2005: According to state officials, North Carolina State Session Law 2004-124 suspended further work on most high risk sites due to constrained state funds. As a result, the state is now prioritizing sites based on relative risk and directing work only to emergency releases and those releases that pose the highest risks that can be funded with available resources.
Initial treatment work was completed at the Anderson Trust site and the recovery system remains on-site but is currently shut down. Contaminants and compounds of concern: Benzene, toluene, ethyl benzene, xylenes, MTBE, naphthalene, and lead. Size of leak: Unknown. Impact of contamination: One residential drinking water well adjacent to the site was contaminated and abandoned. A potential threat of contamination exists for 17 additional residential wells within 1,500 feet of the site. Remediation cost: According to state officials, the North Carolina Commercial Fund has reimbursed the owner (including consultants) for $943,407.93 of reasonable and necessary expenses incurred to remediate the site. In addition, total reimbursable expenses to complete cleanup and close the site are estimated by these officials at $1.1 million. However, this estimate does not include deductible amounts and other expenses not approved by the fund or otherwise deemed ineligible for reimbursement, such as remediation of contamination from the above ground tank or used motor oil, which were paid by the R.C. Anderson Trust. U.S. Environmental Protection Agency involvement: None. Communication between responsible agencies and the public: No evidence of formal public meetings was identified. Litigation: No information on lawsuits was found in the case files. The Tranguch site was a gasoline and tire retreading service station in a mixed commercial and residential area of northeastern Pennsylvania that was abandoned in 1995. Several operating gasoline service stations as well as numerous abandoned or removed underground storage tank systems lie within the vicinity of the site. A residential neighborhood, part of which is built over an abandoned coal mine, surrounds the site. While it is unknown exactly when the underground storage tanks at the site began to leak, residents’ complaints of gasoline odors in their homes suggest that leaks may have begun sometime in the late 1980s to early 1990s.
By 1993, the Pennsylvania Department of Environmental Resources (PADER) determined that gasoline vapors from the sewer system were affecting homes. Although PADER found contamination at four other facilities in the vicinity of the Tranguch site, the department determined that the Tranguch facility was primarily responsible for the leak impacting the residential area. EPA estimated that the facility had released 25,000 to 50,000 gallons of gasoline. The resulting gasoline plume contaminated soil and groundwater, and spread generally northeastward through the adjoining community to encompass about a 70-acre area, including 11 businesses, two doctors’ offices, two churches, two parks, 26 vacant lots, and 359 residential properties, and reportedly impacted the lives of up to 1,500 neighborhood residents. Prior to 1990: Neighborhood residents near the Tranguch facility had complained of an odor from the facility that smelled like automobile or truck emissions as early as August 1976. However, an investigation conducted at the Tranguch facility at that time did not reveal any problems. Case files do not contain any additional complaints about this site until February 1990. February 1990 – April 1993: Over the three-year period, PADER investigated complaints of gasoline or other odors in residences in the area of the Tranguch facility, including one home on three separate occasions. In March 1993, a local Department of Public Safety environmental protection specialist performed an investigation at this residence and verified the presence of strong gasoline odors. Because of the saturated condition of the soil, he was able to trace gasoline residue to a nearby abandoned underground storage tank facility. The environmental specialist referred the matter to PADER for follow-up.
In April 1993, PADER took a water sample from the basement sump of this residence that tested positive for the presence of gasoline, and began efforts to determine whether nearby underground gasoline storage tanks were the source of the contamination. Out of eleven commercial locations in the vicinity, PADER identified four operating facilities—including the Tranguch facility—and one abandoned underground storage tank facility as potential contamination sources. May 1993 – June 1993: PADER directed the owner of the abandoned facility to register and either properly close (remove) or upgrade the tanks at that site. At this time, PADER also asked the owners of the Tranguch facility and the other two operating facilities to either provide the department with proof that their tanks passed tank tightness tests or conduct the tests. The abandoned site owner notified PADER that he intended to close the site and PADER directed him to submit and implement a site characterization plan. Because none of the three operating facilities responded to the PADER request, it asked for this information a second time. July 1993: The owner of one of the three operating facilities submitted a report to PADER showing that, in June 1993, tanks at the site had passed a tightness test. However, PADER discovered that the owner had installed this facility’s current tanks in 1991 to replace older tanks and, at that time, 1,042 tons of fuel-contaminated soil had been removed from the site. The older unregistered tanks remained in operation until they were replaced in 1991. The owner filed a closure report for these tanks in October 1993. PADER documentation indicates that the report lacked some of the required sampling and soil analysis information. Also in July 1993, the owner of another of the three operating facilities admitted to having had an unreported release in April 1993. Tanks at this site were subsequently removed. Three of these tanks had never been registered.
Soil contamination was evident during excavation, and during the removal process an abandoned heating oil underground storage tank was also discovered at the site. Also, when the owner of the facility removed his tanks, PADER observed contamination at the site and the owner arranged for a site investigation/characterization. August 1993: PADER requested a third time that the owner of the Tranguch facility provide tank tightness test information and submit and implement site assessment and remedial action plans. PADER surveyed leak detection methods used at the Tranguch site and the owner stated that the facility was performing these methods. These methods were required to be performed under the state’s applicable rules and regulations in force at that time. PADER also requested that the owners of the other two operating facilities and the abandoned facility submit and implement site assessment and remedial action plans. September 1993: The owner of the Tranguch facility submitted information to PADER indicating that tanks at the site had passed a tightness test. PADER again requested that the Tranguch owner perform a site characterization and report the results to the department within 14 days. Also, the local fire department received two additional complaints of gasoline-like fumes in area residences. PADER Emergency Response Program personnel also began regularly monitoring vapor levels in area homes. October - November 1993: Beginning in October, the PADER Emergency Response Program started to install interim remedial systems designed to prevent gasoline vapors from entering affected homes and, by mid-May 1994, had installed thirteen. Also in October, PADER identified a fourth operating facility as a potential contributor to the area’s contamination and asked the owner of that facility to perform a site characterization. 
Through late November 1993, the owner of this fourth facility took no action in this regard, but requested to review any PADER documentation indicating that this facility had contributed to pollution in the area, as well as files regarding the other facilities in the area. Although PADER arranged for access to these files, the owner never reviewed them. PADER confirmed the presence of gasoline-like fumes in six area residences and arranged to have a preliminary subsurface investigation performed on the area impacted by the contamination. The investigation revealed that gasoline had contaminated the groundwater at several locations down-slope from the Tranguch site and that gasoline contamination had spread to the neighborhood sewer system. The gasoline contamination was also found to stop up-slope of the location of the Tranguch tanks, within the property boundary. PADER officials, representatives of the city of Hazleton, and the city fire chief met with the owner of the Tranguch site and requested that he immediately remove all gasoline from his tanks. The following day, the owner informed PADER that he had discovered that 375 gallons of gasoline had been lost from his tanks early in November. The owner admitted to PADER officials that—contrary to what he had told PADER in August 1993—he had just started to comply with the required leak detection methods the previous day. Accordingly, PADER issued a second compliance order to the Tranguch site owner, requiring him to, within 24 hours, remove all gasoline from the underground storage tanks on-site, begin cleaning up the leaked gasoline, and take steps to monitor and mitigate vapors in area homes. Although the owner appealed this order, he had his tanks and lines drained and began monitoring wells to recover gasoline. Furthermore, although the Tranguch site owner made inquiries regarding homes impacted by vapors, he took no action to monitor or mitigate vapors in area homes. 
City personnel began venting the neighborhood sewer system. December 1993 – February 1994: PADER held a public meeting with area residents, city and township representatives, and state legislators regarding the spill. School district officials, city and township representatives, and a state legislator were also contacted during subsequent site activities. In December, PADER issued a compliance order to the owner of the fourth operating facility suspected of contributing to the contamination. The order required a complete site investigation, including tank system tightness testing, a review of leak detection and inventory records, and sub-surface sampling and analysis of soil and groundwater at the site. The owner’s attorney responded with a letter stating his client’s intent to appeal the order, and included attachments with information in defense of his decision. From the information presented in the attachments, PADER determined that the owner had not employed any type of automatic leak detection devices on the two underground gasoline storage tanks installed at his site in 1962, and that he might not have complied with applicable federal and state leak detection regulations. In January 1994, the legal counsel for PADER informed the owner that the information he had submitted was incomplete and did not satisfy the compliance order’s requirements. Later in January, the owner formally appealed the December compliance order. PADER documents indicate that problems with gasoline-like vapors entering neighborhood homes and commercial establishments grew progressively worse through the end of 1993 and, by early 1994, 28 residences and 1 commercial building had been impacted. The Hazleton City Health Officer determined that a neighborhood home was unfit for human habitation and the owner temporarily relocated because of the presence of potentially harmful gasoline vapors and the explosion potential from the collected gasoline vapors in the basement of his home. 
(This homeowner had previously reported gasoline-like odors in his residence in February 1990, March 1992, August 1992, and March 1993). In January 1994, all product recovery efforts at the Tranguch site ceased because of the owner’s failure to pay his environmental consultant, but resumed after he was able to secure a loan. March – April 1994: Local residents formed an organization called the Group Against Gas (GAG) at about this time and announced plans for a class action lawsuit against parties responsible for the contamination. PADER received from the owner’s consultant a proposed Tranguch site characterization plan that included monitoring well locations. Upon review, PADER requested that the consultant perform additional site characterization work, including the installation of additional monitoring wells. Also, PADER notified the owner of the fourth operating facility of a June 1994 hearing date and the owner wrote to his state senator in an unsuccessful attempt to stay the PADER compliance order. May – August 1994: In May, PADER held a second public meeting with area residents, city and township representatives, and state legislators regarding the spill. Also in May, to avoid compliance proceedings, the owner of the fourth operating facility conducted a site investigation and found gasoline in the groundwater near his underground storage tank systems. He withdrew his appeal of the compliance order, voluntarily removed all fuel from his underground gasoline storage tank systems, and sent a summary report of the investigation to PADER. In August, this owner notified PADER of his intent to conduct and complete a site characterization by the end of September 1994. An August court order required the Tranguch facility owner to take interim remedial actions to recover leaked fuel, monitor and mitigate vapors in area homes, and complete and report on site characterization by mid-October 1994. 
PADER obtained funds from the state Leaking Underground Storage Tank fund to conduct an extensive characterization study of the impacted area. February 1995 – September 1995: The owner had all six tanks at the Tranguch facility removed. Tanks were found to be perforated with “fist-sized holes,” and gasoline contamination was evident at the site. The owner of the Tranguch facility declared bankruptcy. In addition, in May, the owner of the fourth operating facility had two tanks removed. Both tanks were found to be deeply pitted and the fill end of one tank had pinholes and corrosion that extended through the steel. Evidence of gasoline contamination was observed during the excavation. November 1995 – December 1995: The Pennsylvania Department of Environmental Protection (PADEP, formerly PADER) issued another compliance order to the owner of the Tranguch facility, who again appealed it. March – July 1996: PADEP’s environmental consultant issued a report on, among other things, the sources of contamination in the area. While the consultant found contamination at all five sites, it determined that the Tranguch facility was primarily responsible for the leak impacting the residential area. According to the consultant’s report, the contamination at the other four facilities did not significantly contribute to the contamination plume that was affecting the area residences. Although PADEP continued to monitor conditions in the area, it no longer had the funds necessary to mitigate the contamination threat. Therefore, PADEP asked EPA to lead the remediation of the Tranguch site and the impacted area. EPA took over as lead agency for the site, while PADEP continued to work with EPA by providing technical support. EPA entered into an Interagency Agreement with the U.S. Army Corps of Engineers (USACE) for contracting services for the site.
The first phase of a two-phase remedial action plan developed for the site included soil vapor extraction of the source area and passive oil skimmers for collection of petroleum products. August – October 1996: EPA confirmed PADEP’s findings regarding the contamination plume. The EPA On-Scene Coordinator (OSC) determined that (1) gasoline contamination at the Tranguch site impacted surface waterways as well as groundwater at the site, (2) site conditions met or exceeded removal criteria described in the National Oil and Hazardous Substances Pollution Contingency Plan (NCP), and (3) the site posed an imminent and substantial threat to the public health of residents in the area of the plume because of the threat of fire, explosion, and direct inhalation of benzene. The OSC estimated that over 900,000 gallons of gasoline leaked from the tanks. EPA requested remediation funds under the Oil Pollution Act of 1990 and received an initial $180,000 to begin removal actions, including installation and maintenance of two underflow dams on Black Creek to reduce the gasoline contamination that was entering the creek. EPA sampled and tested air quality in 53 of 362 homes in the area. The gasoline vapors detected in 52 of the 53 homes were below EPA’s benzene action level of 21.5 µg/m³ (micrograms per cubic meter); the state action level at that time was 32 µg/m³, and the remaining home was below 32 µg/m³. January – February 1997: A passive basement air filtration system in a home was changed to an active system because of a health threat. October 1997: EPA received approval to discharge treated groundwater from the Tranguch site remediation plant into the Hazleton sewer system. November – December 1999: USACE constructed a soil vapor extraction system on the Tranguch site. EPA identified the Tranguch facility and three other businesses as parties responsible for the contamination based on a USACE groundwater flow model developed to predict the flow of spilled gasoline. 
According to the model, while much of the spilled material would have come from the Tranguch facility, some material from three other operating facilities would have mingled with the plume from the Tranguch leak and subsequently been transported to Black Creek. Furthermore, for part of the year, spilled material from one of the operating facilities would flow directly to the creek. However, EPA’s Region 3 General Counsel recommended that the agency not issue removal enforcement orders to these parties because it considered them to be “de minimis” (small volume) contributors to the contamination. In addition, since the Tranguch facility was in bankruptcy, EPA believed that Tranguch would not be able to comply with the order and, therefore, did not issue one. Consequently, Oil Pollution Act funds continued to be used to clean up the site. February 2000 – September 2000: The Pennsylvania Department of Health, concerned over air sample results, asked EPA to install ventilation and continue testing air quality in 9 homes. EPA began installing sewer vents at homes. EPA, USACE, state agencies, and a state representative held public meetings with area residents in July and August to discuss what corrective actions EPA would take at the Tranguch facility. An additional 72 residents (32 in July and 40 in August) requested sampling of the air in their homes. October 2000: A public meeting was held to discuss site sampling, health issues, and site history for the Tranguch site and affected area. Officials from the (1) Agency for Toxic Substances and Disease Registry (ATSDR), (2) PADEP, (3) Pennsylvania Department of Health, (4) U.S. Coast Guard, (5) USACE, and (6) EPA were present, among others. More than 150 Tranguch area residents were also present. The USACE official stated that the site was complex because of the existence of an underground coal mine. 
During the meeting, residents voiced concerns about health and property values. ATSDR stated that long-term exposure to benzene (a gasoline component) had been connected to cases of leukemia, and officials agreed to look into the impacts of the contamination on property values in the area. The EPA representative noted that potential buyers of homes in the immediate area had to be informed about the contamination. Also, based on leak data from Tranguch’s tank tightness tests, he stated that 50,000 gallons or less of gasoline had leaked at the Tranguch site rather than the 900,000 gallons originally estimated. December 2000: The EPA Office of Inspector General received a hotline complaint alleging EPA mismanagement of the Tranguch site cleanup. January 2001: Throughout Pennsylvania, the action level for benzene was 32 µg/m³. However, according to EPA Region 3 officials, citing concerns over the limited information on the extent of the contamination, PADOH set a more conservative level for the Tranguch site of “non-detect”. In response, EPA identified 8.3 µg/m³ as the site-specific action level. EPA continued to sample air in area homes and, as of February 2001, had sampled 308 of approximately 350 homes within the contaminated area. Luzerne County Commissioners adopted a resolution urging the Board of Assessment Appeals to eliminate property taxes for two years for properties determined by the federal government and appropriate agencies to be eligible for relief. According to a local newspaper article, one of the state’s U.S. senators wrote to the EPA Administrator and asked the agency to buy the homes of the area’s residents, and the other U.S. senator agreed to meet with the Administrator about the spill. March 2001: EPA implemented weekly “Unified Command” meetings to keep all interested parties up to date and to provide EPA an opportunity to address issues and/or questions any of the attending representatives might have. 
While the primary members of these meetings were federal, state, and local officials, representatives from the Group Against Gas participated in the meetings as ex officio members. EPA requested and received an additional $11,500,000 in funding from the U.S. Coast Guard to install groundwater collection and soil gas extraction systems, as well as groundwater and soil gas treatment systems. This funding brought the Tranguch site cleanup ceiling to $25,698,188. The mayor of Hazleton, Pennsylvania, declared the area affected by the Tranguch leak a local disaster emergency, and Luzerne County Commissioners declared the area in a state of emergency. In addition, the Pennsylvania Emergency Management Agency was asked to determine if the area met criteria for being declared a disaster area. April 2001: On behalf of EPA, the USACE completed a remediation plan for the area sewer system and began work on a groundwater collection system, a soil vapor extraction/biovent system, and new storm and sanitary sewer lines. A public meeting was held with affected Tranguch site residents. Officials from EPA, ATSDR, the Pennsylvania Department of Health, and PADEP attended the meeting. A state elected official introduced a resolution in the state House of Representatives to declare the affected area a national disaster area and to purchase the homes of citizens within the affected area. The resolution was unanimously approved. In a letter to the Governor of Pennsylvania, the Pennsylvania Emergency Management Agency (PEMA) stated that the impacted area did not meet the eligibility criteria to obtain disaster assistance from the Federal Emergency Management Agency (FEMA). PEMA recommended that the Governor not certify that a major disaster or emergency existed in order to request assistance from FEMA. Further, PEMA determined that the hazard-mitigation funding needed to “buy out” (purchase the homes of) affected residents would be inadequate, based on eligible costs, to address this situation. 
Based on PEMA’s review, the governor denied a state buyout for residents. May 2001: The local board of supervisors again declared a state of emergency for the area affected by the leak and granted property tax relief for the affected property owners. PADEP approved EPA’s permit to discharge treated contaminated groundwater into Black Creek. However, some residents questioned EPA’s remediation strategy. EPA and city, township, and school district attorneys met concerning the Tranguch site cleanup. The Pennsylvania Department of Health began conducting a health study of area residents impacted by the leak. June 2001: An environmental consulting company retained by EPA completed a report on subsurface airflow modeling for the soil vapor extraction/biovent system. EPA oversaw the installation of 288 sewer vents in area homes and arranged for residents to use a suite and hotel amenities, such as the pool, at a local lodge to get away from the site construction noise. August 2001: EPA’s Inspector General reported that (1) EPA managed the cleanup of contamination from the Tranguch leak adequately, but the agency could have better communicated with the local community and the Pennsylvania Department of Health; (2) a federal buyout was not warranted, and residents’ desire for a buyout was based on an inaccurate perception of the threat posed by the leak; and (3) about $2.8 million in remediation costs might not have been warranted. The University of Pittsburgh Graduate School of Public Health completed a “Preliminary Findings” report that examined whether Hazle Township residents were at increased risk for cancer compared to Luzerne County residents and residents of Pennsylvania as a whole. The authors of the study stated that “these findings suggest that the incidence of leukemia and prostate cancer in the Hazle Township is increased compared to Luzerne County and the state of Pennsylvania”. 
While prostate cancer has been linked to such factors as age, race, family history, and high intake of dietary fat, research literature has linked leukemia—in particular, acute myelogenous leukemia—to benzene exposure. However, the study authors could not definitively identify the gasoline leak as the source of the excess leukemia. December 2001: EPA completed replacement/repair of the sewer lines in and around the plume of contamination. PADOH issued its health study report, which provided its recommendations for determining when indoor air monitoring would no longer be necessary. In effect, it reset the site-specific action level for benzene of 8.3 µg/m³ back to the statewide action level of 32 µg/m³. A public meeting was held to discuss investigations of the spill and its potential impacts. In response to a question on how long residents with carbon filters in their homes to purify their indoor air should run them, the EPA representative said “…based on the sampling that we’ve done throughout the community, there’s no reason to run those filters”. Following the meeting, according to a newspaper account, the EPA representative said that as long as environmental officials are capable of eliminating potential chemical exposure for residents, federal officials would not consider options to relocate affected area residents. The University of Pittsburgh completed data collection efforts for its health study, which was funded by a $100,000 grant from the Pennsylvania Department of Community and Economic Development. August 2002: The Luzerne County Board of Commissioners unanimously approved a resolution requesting the Luzerne County Board of Assessment Appeals to approve requests to reduce to zero value, for January 1, 2003 to December 31, 2003, the real estate assessments of those properties that were adversely affected by the Tranguch gasoline leak, as determined by the federal government. September 2002: EPA requested and received an additional $600,000 in funding from the U.S. Coast Guard. 
This funding brought the Tranguch site cleanup cost ceiling to $26,298,188. November 2002: A PADEP report on an evaluation of the abandoned mine under the area concluded that it had no significant environmental impact on the community. January 2003: Under contract with EPA, USACE found small amounts of petroleum contamination in one tunnel of the abandoned coal mine. PADEP discontinued support for the air filters in residences. The local school district sent a letter to EPA asking for $44,000 in compensation for the economic loss resulting from not having use of the athletic field. The University of Pittsburgh Graduate School of Public Health staff presented their preliminary findings of the Hazleton Health Effects study to the mayor of the city and the community at a public meeting. According to a local newspaper article, this study (1) included more individuals (451 compared to 207) and more households (190 compared to 84) than in the earlier “Preliminary Findings” study; and (2) found no statistically significant increase in overall cancer or leukemia incidence for the Laurel Gardens community of Hazleton residents in the area of the Tranguch site compared to the county and state populations. The study team, however, did stress that further investigation was warranted for both thyroid and brain cancer, according to the newspaper account. February 2003 – March 2003: The local Board of Supervisors extended the state of emergency for the area impacted by the leak through March 10, 2003, and supported a “buyout” of affected homes by the federal government. The Luzerne County Board of Commissioners also declared that a state of emergency continued to exist. GAG wrote to the governor asking that PEMA reevaluate the designation of the area as a disaster area, which had been denied earlier. 
April 2003: Local and national elected officials representing the area sent letters to the new governor, requesting a reevaluation of the previous governor’s determination on the area’s eligibility for being declared a disaster area. The Hazleton city council gave a property tax break to area residents impacted by the leak for the third consecutive year. According to EPA officials, the local school district verbally requested that an athletic field in the affected area be restored. In addition, the local school district sent a letter to EPA requesting a meeting in May with the EPA on-site coordinator and USACE to discuss (1) compensation for loss of use of the athletic field and (2) a lack of communication between EPA and the school district over the issue. May 2003: The federal government agreed to pay the local school district $120,000 for the restoration of the athletic field. EPA held a public meeting with attendees expressing concerns about the cleanup. July 2003 – September 2003: Citing health and property concerns, more than 250 Hazleton and Hazle Township residents petitioned for a congressional hearing into EPA’s response to and management of the leak impacting their community. October 2003: PADOH completed a study showing that, of the twenty-two types of cancers and total cancers considered, only the incidences of leukemia and all cancers were significantly higher in the affected community than would be expected. However, according to the PADOH study, the relationship of leukemia incidence to the environment was unclear, only in rare circumstances can an occurrence be causally linked to a specific agent with certainty, and the mechanisms for the induction of cancer from benzene exposure are not clear. The University of Pittsburgh Graduate School of Public Health completed a health study providing its final “Summary of Primary Findings” on area residents’ increased risk of developing cancer. 
This report summarized the university’s two previous studies that separately examined residents in the affected areas of Hazle Township and the City of Hazleton. In addition, the study examined the total population of affected area residents by combining the Township and the City. The study also examined affected residents living in both locations classified into three potential exposure categories (high, medium, and low) based on the proximity of their residence to the underground gasoline plume. Most notable was the high exposure category of those living directly over or adjacent to the projected contamination plume. The study investigators concluded that while the combined population did not experience an excess of all cancers, a statistically significant excess of leukemia was observed. For the high exposure category, the study investigators concluded that the number of observed versus expected cases of leukemia was statistically significant but that, because of the small size of this subgroup, these results should be interpreted with some caution. The study investigators made a number of recommendations, including long-term systematic surveillance and screening for members of the potentially higher risk population. The University of Pittsburgh Graduate School of Public Health staff completed their final report of the Hazleton Health Effects Study 1990-2000. The study’s findings suggest that, for the period 1990 through 2000, there was no statistically significant increase in overall cancer or leukemia incidence in the affected area of Hazleton compared to the county and state populations, with the exception of brain cancer in white males. December 2003: According to a local newspaper article, Laurel Garden residents impacted by the gasoline leak asked the Luzerne County District Attorney to investigate the case. 
In the request letter, the residents said that they believe they were “needlessly and recklessly endangered” by the owners of fuel stations in the vicinity of the impacted area, EPA, and PADEP. March 2004: In a letter to local officials, EPA stated that no further action would be taken to address contamination from the abandoned mine based on three factors: (1) the contamination did not appear to be from the Tranguch property, (2) the contamination did not have a pathway to surface waters, and (3) due to the small amount of contamination present, the vapors were not migrating from the mine location and therefore did not threaten nearby residents. July 2004: EPA remained the lead agency responsible for the Tranguch site, while PADEP agreed to provide operation and maintenance services for the groundwater and soil vapor extraction treatment systems. Status as of September 2005: All leaking underground storage tanks had been removed from the Tranguch site and only residual contamination required remediation. In addition, cleanup efforts at all affected residential homes had been completed and well over 95 percent of total costs to clean up and monitor the site had been expended. Remediation activities to remove residual contamination are expected to continue for another 4 to 5 years, costing about $100,000 per year. The remediation system will be shut down periodically to monitor its effectiveness and determine whether mitigation goals for groundwater and soil contamination have been reached. This monitoring is expected to cost about $30,000 to $40,000 per year. The remediation system might need to be shut down a few times before the contamination threat can be considered mitigated and the removal project completed. 
However, once this determination is made, groundwater and soil gas (vapor) monitoring will permanently stop, all remaining monitoring wells (currently approximately 80, though likely fewer by then because some are expected to close each year depending on sampling results) will be closed at a cost of about $1,000 per well, the treatment system will be removed, and the underground piping will be abandoned in place. Contaminants and compounds of concern: Benzene, toluene, ethyl benzene, and xylenes (BTEX). According to EPA officials, some methyl tertiary-butyl ether (MTBE) was identified but was never considered a contaminant of concern. Size of leak: An estimated 25,000 to 50,000 gallons of gasoline was released into the soil. Impacts of contamination: The leaking gasoline contaminated soil and groundwater, entered the sewer system through cracked pipes, and spread generally northeastward through the adjoining community to encompass an area of about 70 acres, including 11 businesses, two doctors’ offices, two churches, two parks, 26 vacant lots, and 359 residential properties. Remediation cost: According to EPA officials, about $25.2 million of Oil Spill Liability Trust Fund monies has been spent to date to clean up the contamination resulting from the Tranguch leak. In addition, according to state officials, Pennsylvania spent about $2 to $3 million in cleanup funds. The site owner spent an unknown additional—but relatively small—amount on cleanup. U.S. Environmental Protection Agency involvement: EPA assumed responsibility for cleaning up the site at the request of PADER in 1996. Communication between responsible agencies and the public: Pennsylvania state agencies and EPA either held or participated in at least nine public meetings and other forums regarding the Tranguch leak from 1993—when the leak was first confirmed—through 2003. 
In December 1993 and May 1994, PADER held public meetings with area residents, city and township representatives, and state legislators regarding the contamination. According to EPA officials, beginning in July 1996, EPA held meetings with local officials and public meetings with area residents and others to discuss plans for remediating the contamination at and around the Tranguch site and to update the status of the cleanup. Through May 2003, EPA held at least five such meetings and participated in at least one meeting sponsored by GAG. The meetings often included representatives from the state and other federal agencies—such as USACE—involved in cleanup operations, among others. Litigation: Nearby residents affected by the contamination sued numerous parties, including the owners of the gas stations in the vicinity of the leak as well as certain oil companies, asserting that the contamination had caused personal injury and property damage, among other things. These lawsuits are still pending. The objectives of this review were to identify (1) information available on the number and cleanup status of leaking underground storage tanks, (2) existing sources of funding for cleanups at contaminated tank sites, and (3) processes used to identify, assess, and clean up sites in 5 states with large numbers of leaking tanks—California, Maryland, Michigan, North Carolina, and Pennsylvania. In addition, to provide some perspective on how leaking underground storage tank sites are identified and cleaned up, we are providing information on the history and cleanup status of one leaking tank site in each of these 5 states. To identify information available on the number and cleanup status of leaking underground storage tanks, we reviewed and evaluated data from EPA’s underground storage tank program semi-annual activity reports for the period from March 31, 2001 through March 31, 2004. 
Each activity report contains data on the number of active and closed tanks, confirmed releases, cleanups initiated and completed, and the cleanup backlog for the 50 states, 5 territories, and the District of Columbia, arranged by EPA region. To assess the reliability of the EPA data, we interviewed EPA program officials at headquarters and in Regions 3, 4, 5, and 9; conducted electronic (logic and other) reliability testing of the data itself; and obtained and reviewed EPA’s responses to questions designed to determine the reliability of the data. In addition, we compared the semiannual data reported by EPA with data provided by the states we visited. In general, we found only minor discrepancies during our reliability testing of the data. For example, the Maine and Massachusetts semi-annual data for the period ending March 31, 2002, were inadvertently switched. Once we brought this to EPA’s attention, it was immediately corrected. We also found that EPA reported 1,000 fewer closed tanks and 1,000 more active tanks than reported by the state of Pennsylvania. The state had initially provided EPA data on the number of closed tanks as of September 30, 2004, but the following day provided EPA a correction to this amount. However, EPA inadvertently did not update its records to reflect this correction. While acknowledging these problems, we determined that the reliability of the semi-annual data is adequate for the purposes used in this report. 
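The kind of electronic (logic) reliability testing described above can be sketched as a simple cross-check of EPA’s figures against state-reported figures. The sketch below is illustrative only; the field names and counts are hypothetical and do not reflect EPA’s actual data layout.

```python
# Minimal sketch of a logic check on semi-annual tank-count data:
# compare EPA's recorded counts with state-reported counts and flag
# any mismatches. All names and figures here are hypothetical.

epa = {"Pennsylvania": {"active": 24000, "closed": 71000}}
state_reported = {"Pennsylvania": {"active": 23000, "closed": 72000}}

def find_discrepancies(epa, state_reported):
    """Return (state, field, epa_value, state_value) for each mismatch."""
    issues = []
    for state, epa_counts in epa.items():
        for field, epa_value in epa_counts.items():
            state_value = state_reported[state][field]
            if epa_value != state_value:
                issues.append((state, field, epa_value, state_value))
    return issues

issues = find_discrepancies(epa, state_reported)
for state, field, e, s in issues:
    print(f"{state} {field}: EPA {e} vs state {s} (difference {e - s})")
```

A check of this form would surface the Pennsylvania discrepancy noted above, where EPA's records showed 1,000 fewer closed tanks and 1,000 more active tanks than the state reported.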
To select which states to include in our review, we applied data from EPA’s semi-annual activity report to our selection criteria and picked the five states having the highest combined score. Our selection criteria consisted of the following five quantitative indicators: states with the largest average number of active tanks for the last 3 years; states with the largest average number of confirmed releases for the last 3 years; states with the largest average number of backlogs for the last 3 years; states with the largest increase in backlogs during the last year divided by the number of active tanks for that state; and states with the largest increase in new tank releases for the last year divided by the number of active tanks for that state. Three of the five indicators used 3-year average data to minimize the impact of single-year fluctuations and to reduce the effect of state officials’ periodic revisions to the data. Two of the five indicators included adjustments for the number of active tanks in the state so as to reduce the possible selection bias in favor of states that have large numbers of active tanks. For each indicator, we assigned a numerical score corresponding to its ranking compared to the other states. For example, because EPA data indicated that California averaged the most releases over the last three years, we assigned it a score of 56 out of a possible 56 points; alternatively, because American Samoa averaged the fewest releases over the last three years, we assigned it a score of one. To determine a state’s total ranking for all five indicators, we added the scores for each state across all indicators and ranked each from highest to lowest. 
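The rank-and-sum scoring described above can be sketched as follows. This is a simplified illustration: the states, the indicator names, and all figures are hypothetical, and only three of the five indicators are shown for brevity.

```python
# Sketch of the indicator-based ranking used for state selection.
# For each indicator, a state's score equals its rank (lowest value
# scores 1; highest value scores the number of states). Scores are
# summed across indicators, and states are ranked by total.
# All data below are hypothetical.

states = {
    "California": {"active": 40000, "releases": 1200, "backlog": 9000},
    "Ohio":       {"active": 22000, "releases":  800, "backlog": 6000},
    "Maryland":   {"active": 11000, "releases":  300, "backlog": 1500},
}
indicators = ["active", "releases", "backlog"]

def total_scores(states, indicators):
    """Sum each state's per-indicator rank scores."""
    scores = {state: 0 for state in states}
    for indicator in indicators:
        # Order states from lowest to highest on this indicator;
        # position in the ordering (starting at 1) is the score.
        ordered = sorted(states, key=lambda s: states[s][indicator])
        for score, state in enumerate(ordered, start=1):
            scores[state] += score
    return scores

totals = total_scores(states, indicators)
ranking = sorted(totals, key=totals.get, reverse=True)
```

With these hypothetical figures, California leads on every indicator and therefore ranks first overall, mirroring the report's example of California scoring 56 of 56 points on the releases indicator.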
As a final step in our state selection process, we reviewed the 5 states with the highest quantitative scores to ensure that, taken as a whole, these states (1) were geographically diverse, (2) had different EPA regional offices overseeing their LUST programs, (3) included states with and without EPA-approved LUST programs, and (4) included states with and without LUST assurance funds. This process resulted in our including California rather than Ohio. While both states had the same quantitative score, including California, in our opinion, increased geographic diversity and added a different regional office to our selected states. To provide information on the history and cleanup status of one leaking underground storage tank site in each of the 5 states, we identified the 5 sites—Coca-Cola Enterprises, Yuba County, California; Henry Fruhling Food Store, Harford County, Maryland; Bob's Marathon, Grand Ledge, Michigan; R.C. Anderson Trust, Nash County, North Carolina; and Tranguch Tire Service, Incorporated, Luzerne County, Pennsylvania—interviewed state program officials responsible for overseeing or managing the cleanup at these sites, reviewed case files, and visited each site. In identifying these sites, we first selected the Tranguch site because of congressional interest. For comparison with Tranguch, we selected the remaining 4 sites primarily on the basis of the following similarities to that site: cleanup was either completed in 2004 or is relatively close to completion based on total estimated costs; remediation costs significantly exceeded the average cleanup cost of about $125,000; and the risk priority ranking for remediation was above normal. 
Specifically, we selected each of the 4 remaining sites as follows: Coca-Cola Enterprises in Yuba County, California: From California’s GeoTracker database, we obtained a list of about 3,400 closed sites—that is, sites where cleanup has been completed—and identified the 10 most expensive cleanup sites for further investigation. The Coca-Cola site had the highest risk rating among those sites closed since January 2004, and while all of the remaining 9 sites were more costly than the Coca-Cola site, the cost differential was not significant and all had the same or lower risk ratings. Henry Fruhling Food Store in Harford County, Maryland: For site selection, we obtained a list of 39 sites from Maryland’s Oil Control Program, which administers the state’s underground storage tank program. Because Maryland does not track cleanup costs for sites being remediated by responsible parties, these were all sites with state-lead cleanups. In addition, they were sites still undergoing remediation, because the state had not closed a site within the last 2 years. From this list we selected the Fruhling Food Store site primarily based on its high cost and high risk. This site was the most costly to clean up and had a high risk ranking, having impacted residential water supplies. Bob's Marathon in Grand Ledge, Michigan: We obtained a list of 68 sites with remediation costs of at least $250,000 from Michigan’s Department of Environmental Quality, which administers the state’s underground storage tank program. Because Michigan does not track cleanup costs for sites being remediated by responsible parties, these were all sites with state-lead cleanups. We selected the Bob’s Marathon site primarily based on its high cost and high risk. Bob’s Marathon was the most costly of sites that were either completed or nearly completed and was high risk because it impacted municipal water supplies and the community. R.C. 
Anderson Trust in Nash County, North Carolina: We obtained a list of 108 sites with remediation costs of at least $250,000 from North Carolina’s Department of Environment and Natural Resources, which administers the state’s underground storage tank program. We selected the 4 sites with the highest remediation costs for further analysis. From these 4, we selected the R.C. Anderson Trust site primarily based on high risk and high cost. This site was the only site ranked high risk because of its potential impact on nearby water wells and on the community. While this site was the least costly of the four high-cost sites, the cost differential between this site and the highest cost one was only about $24,000. We conducted our review from August 2004 through November 2005 in accordance with generally accepted government auditing standards. In addition, Vincent P. Price, Michael J. Rahl, and Michael S. Sagalow made key contributions to this report. Important contributions were also made by John W. Delicath, Wilfred B. Holloway, and Richard P. Johnson.

Leaking underground storage tanks that contain hazardous products, primarily gasoline, can contaminate soil and groundwater. To address this problem, the Environmental Protection Agency (EPA), under its Underground Storage Tank (UST) Program, required tank owners to install leak detection equipment and take measures to prevent leaks. In 1986, the Congress created a federal trust fund to assist states with cleanups. Cleanup progress has been made, but, as of early 2005, cleanup efforts had not yet begun for over 32,000 tanks, many of which may require state and/or federal resources to address. GAO identified (1) data on the number and cleanup status of leaking tanks, (2) funding sources for tank cleanups, and (3) processes used by five states with large numbers of leaking tanks—California, Maryland, Michigan, North Carolina, and Pennsylvania—to identify, assess, and clean up sites. 
Data submitted to EPA by the states show that, as of March 31, 2005, more than 660,000 tanks were in use and about 1.6 million were no longer in use. In addition, states identified about 449,000 tank releases (leaks) and initiated about 416,000 cleanups, with almost 324,000 of those cleanups completed. States also compile limited data on abandoned tanks—tanks whose owners are unknown, or unwilling or unable to pay for their cleanup—but EPA does not require states to provide separate data on all of their known abandoned tanks. Without these separate data, EPA cannot effectively determine the number and cleanup status of these tanks, or how to most efficiently and effectively allocate federal cleanup funds to the states. Tank owners and operators are primarily responsible for paying to clean up their own sites, but abandoned tanks are cleaned up using state resources, which may be limited, and federal trust funds. EPA estimates that average remediation costs per site have been about $125,000, but costs sometimes have exceeded $1 million. Officials from two of the five states we contacted reported that their state funds may be inadequate to address contamination at abandoned tank sites. In this regard, Michigan and North Carolina officials told GAO that, because of resource constraints, they let contamination at abandoned tank sites attenuate (diminish) naturally once immediate threats are addressed. Furthermore, due to limited resources, states must sometimes find other options for cleaning up sites. For example, Pennsylvania officials asked EPA to take over the cleanup work at the abandoned Tranguch site in 1996 because the owner was bankrupt and the state could not pay the expected cleanup costs. The five states that GAO contacted identify, assess, and clean up leaking tank sites using similar processes. Generally, owners and operators are responsible for conducting these activities under state oversight. 
Leaking tanks are identified when tank owners report leaks; land redevelopment activities uncover unknown tanks; or state agencies investigate contamination complaints or inspect tanks for regulatory compliance. While regular tank inspections can detect new leaks and potentially prevent future ones, as of early 2005, only two of the five states GAO contacted--California and Maryland--consistently inspected all the state's tanks at least once every 3 years, the minimum rate of inspection that EPA considers adequate. The Energy Policy Act, enacted in August 2005, among other things, requires inspections at least once every 3 years and provides federal trust funds for this and other leak prevention purposes. EPA and some state officials told GAO that increasing inspection frequency could require additional resources. Being able to use trust fund allocations for this purpose will help in this regard. The five states GAO contacted, once they become aware of leaking tanks, identify responsible parties and require them to hire consultants to conduct site assessments and plan and implement cleanup work. The states generally prioritize sites for cleanup according to the immediate threat they pose to human health, safety, and/or the environment. |
As we move further into the 21st century, it becomes increasingly important for the Congress, OMB, and executive agencies to face two overriding questions: What is the proper role for the federal government? How should the federal government do business? GPRA serves as a bridge between these two questions by linking results that the federal government seeks to achieve to the program approaches and resources that are necessary to achieve those results. The performance information produced by GPRA’s planning and reporting infrastructure can help build a government that is better equipped to deliver economical, efficient, and effective programs that can help address the challenges facing the federal government. Among the major challenges are instilling a results orientation, ensuring that daily operations contribute to results, understanding the performance consequences of budget decisions, coordinating crosscutting programs, and building the capacity to gather and use performance information. The cornerstone of federal efforts to successfully meet current and emerging public demands is to adopt a results orientation; that is, to develop a clear sense of the results an agency wants to achieve as opposed to the products and services (outputs) an agency produces and the processes used to produce them. Adopting a results-orientation requires transforming organizational cultures to improve decisionmaking, maximize performance, and assure accountability—it entails new ways of thinking and doing business. This transformation is not an easy one and requires investments of time and resources as well as sustained leadership commitment and attention. Based on the results of our governmentwide survey in 2000 of managers at 28 federal agencies, many agencies face significant challenges in instilling a results-orientation throughout the agency, as the following examples illustrate. 
At 11 agencies, less than half of the managers perceived, to at least a great extent, that a strong top leadership commitment to achieving results existed. At 26 agencies, less than half of the managers perceived, to at least a great extent, that employees received positive recognition for helping the agency accomplish its strategic goals. At 22 agencies, at least half of the managers reported that they were held accountable for the results of their programs to at least a great extent, but at only 1 agency did more than half of the managers report that they had the decisionmaking authority they needed to help the agency accomplish its strategic goals to a comparable extent. Additionally, in 2000, significantly more managers overall (84 percent) reported having performance measures for the programs they were involved with than the 76 percent who reported that in 1997, when we first surveyed federal managers regarding governmentwide implementation of GPRA. However, at no more than 7 of the 28 agencies did 50 percent or more of the managers respond that they used performance information to a great or very great extent for any of the key management activities we asked about. As I mentioned earlier, we are now moving to a more difficult but more important phase of GPRA—using results-oriented performance information on a routine basis as a part of agencies’ day-to-day management and for congressional and executive branch decisionmaking. GPRA is helping to ensure that agencies are focused squarely on results and have the capabilities to achieve those results. GPRA is also showing itself to be an important tool in helping the Congress and the executive branch understand how the agencies’ daily activities contribute to results that benefit the American people. 
To build leadership commitment and help ensure that managing for results becomes the standard way of doing business, some agencies are using performance agreements to define accountability for specific goals, monitor progress, and evaluate results. The Congress has recognized the role that performance agreements can play in holding organizations and executives accountable for results. For example, in 1998, the Congress chartered the Office of Student Financial Assistance as a performance-based organization, and required it to implement performance agreements. In our October 2000 report on agencies’ use of performance agreements, we found that although each agency developed and implemented agreements that reflected its specific organizational priorities, structure, and culture, our work identified five common emerging benefits from agencies’ use of results-oriented performance agreements (see fig. 1): performance agreements strengthen alignment of results-oriented goals with daily operations; foster collaboration across organizational boundaries; enhance opportunities to discuss and routinely use performance information to make program improvements; provide a results-oriented basis for individual accountability; and maintain continuity of program goals during leadership transitions. Performance agreements can be effective mechanisms to define accountability for specific goals and to align daily activities with results. For example, at the Veterans Health Administration (VHA), each Veterans Integrated Service Network (VISN) director’s agreement includes performance goals and specific targets that the VISN is responsible for accomplishing during the next year. The goals in the performance agreements are aligned with VHA’s, and subsequently the Department of Veterans Affairs’ (VA), overall mission and goals. A VHA official indicated that including corresponding goals in the performance agreements of VISN directors contributed to improvements in VA’s goals.
For example, from fiscal years 1997 through 1999, VHA reported that its performance on the Prevention Index had improved from 69 to 81 percent. A goal requiring VISNs to produce measurable increases in the Prevention Index has been included in the directors’ performance agreements each year from 1997 through 1999. The Office of Personnel Management recently amended its regulations for members of the Senior Executive Service requiring agencies to appraise senior executive performance using measures that balance organizational results with customer, employee, and other perspectives in their next appraisal cycles. The regulations also place increased emphasis on using performance results as a basis for personnel decisions, such as pay, awards, and removal. We are planning to review agencies’ implementation of the amended regulations. Program evaluations are important for assessing the contributions that programs are making to results, determining factors affecting performance, and identifying opportunities for improvement. The Department of Agriculture’s Animal and Plant Health Inspection Service (APHIS) provides an example of how program evaluations can be used to help improve performance by identifying the relationships between an agency’s efforts and results. Specifically, APHIS used program evaluation to identify causes of a sudden outbreak of Mediterranean Fruit Flies along the Mexico-Guatemala border. The Department of Agriculture’s fiscal year 1999 performance report described the emergency program eradication activities initiated in response to the evaluation’s findings and recommendations, and linked the continuing decrease in the number of infestations during the fiscal year to these activities. However, our work has shown that agencies typically do not make full use of program evaluations as a tool for performance measurement and improvement. 
After a decade of government downsizing and curtailed investment, it is becoming increasingly clear that today’s human capital strategies are not appropriately constituted to adequately meet current and emerging needs of the government and its citizens in the most efficient, effective, and economical manner possible. Attention to strategic human capital management is important because building agency employees’ skills, knowledge, and individual performance must be a cornerstone of any serious effort to maximize the performance and ensure the accountability of the federal government. GPRA, with its explicit focus on program results, can serve as a tool for examining the programmatic implications of an agency’s strategic human capital management challenges. However, we reported in April 2001 that, overall, agencies’ fiscal year 2001 performance plans reflected different levels of attention to strategic human capital issues. When viewed collectively, we found that there is a need to increase the breadth, depth, and specificity of many related human capital goals and strategies and to better link them to the agencies’ strategic and programmatic planning. Very few of the agencies’ plans addressed succession planning to ensure reasonable continuity of leadership; performance agreements to align leaders’ performance expectations with the agency’s mission and goals; competitive compensation systems to help the agency attract, motivate, retain, and reward the people it needs; workforce deployment to support the agency’s goals and strategies; performance management systems, including pay and other meaningful incentives, to link performance to results; alignment of performance expectations with competencies to steer the workforce towards effectively pursuing the agency’s goals and strategies; and employee and labor relations grounded in a mutual effort on the strategies to achieve the agency’s goals and to resolve problems and conflicts fairly and effectively. 
In a recent report, we concluded that a substantial portion of the federal workforce will become eligible to retire or will retire over the next 5 years, and that workforce planning is critical for assuring that agencies have sufficient and appropriate staff considering these expected increases in retirements. OMB recently instructed executive branch agencies and departments to submit workforce analyses by June 29, 2001. These analyses are to address areas such as the skills of the workforce necessary to accomplish the agency’s goals and objectives; the agency’s recruitment, training, and retention strategies; and the expected skill imbalances due to retirements over the next 5 years. OMB also noted that this is the initial phase of implementing the President’s initiative to have agencies restructure their workforces to streamline their organizations. These actions indicate OMB’s growing interest in working with agencies to ensure that they have the human capital capabilities needed to achieve their strategic goals and accomplish their missions. Major management challenges and program risks confronting agencies continue to undermine the economy, efficiency, and effectiveness of federal programs. As you know, Mr. Chairman, this past January, we updated our High-Risk Series and issued our 21-volume Performance and Accountability Series and governmentwide perspective that outlines the major management challenges and program risks that federal agencies continue to face. This series is intended to help the Congress and the administration consider the actions needed to support the transition to a more results-oriented and accountable federal government. GPRA is a vehicle for ensuring that agencies have the internal management capabilities needed to achieve results. OMB has required that agencies’ annual performance plans include performance goals for resolving their major management problems. 
Such goals should be included particularly for problems whose resolution is mission-critical, or which could potentially impede achievement of performance goals. This guidance should help agencies address critical management problems to achieve their strategic goals and accomplish their missions. OMB’s attention to such issues is important because we have found that agencies are not consistently using GPRA to show how they plan to address major management issues. A key objective of GPRA is to help the Congress, OMB, and executive agencies develop a clearer understanding of what is being achieved in relation to what is being spent. Linking planned performance with budget requests and financial reports is an essential step in building a culture of performance management. Such an alignment infuses performance concerns into budgetary deliberations, prompting agencies to reassess their performance goals and strategies and to more clearly understand the cost of performance. For the fiscal year 2002 budget process, OMB called for agencies to prepare an integrated annual performance plan and budget and asked the agencies to report on the progress they had made in better understanding the relationship between budgetary resources and performance results and on their plans for further improvement. In the 4 years since the governmentwide implementation of GPRA, we have seen more agencies make more explicit links between their annual performance plans and budgets. Although these links have varied substantially and reflect agencies’ goals and organizational structures, the connections between performance and budgeting have become more specific and thus more informative. We have also noted progress in agencies’ ability to reflect the cost of performance in the statements of net cost presented in annual financial statements. Again, there is substantial variation in the presentation of these statements, but agencies are developing ways to better capture the cost of performance. 
Virtually all of the results that the federal government strives to achieve require the concerted and coordinated efforts of two or more agencies. There are over 40 program areas across the government, related to a dozen federal mission areas, in which our work has shown that mission fragmentation and program overlap are widespread, and that crosscutting federal program efforts are not well coordinated. To illustrate, in a November 2000 report, and in several recent testimonies, we noted that overall federal efforts to combat terrorism were fragmented. These efforts are inherently difficult to lead and manage because the policy, strategy, programs, and activities to combat terrorism cut across more than 40 agencies. As we have repeatedly stated, there needs to be a comprehensive national strategy on combating terrorism that has clearly defined outcomes. For example, the national strategy should include a goal to improve state and local response capabilities. Desired outcomes should be linked to a level of preparedness that response teams should achieve. We believe that, without this type of specificity in a national strategy, the nation will continue to miss opportunities to focus and shape the various federal programs combating terrorism. Crosscutting program areas that are not effectively coordinated waste scarce funds, confuse and frustrate program customers, and undercut the overall effectiveness of the federal effort. GPRA offers a structured and governmentwide means for rationalizing these crosscutting efforts. The strategic, annual, and governmentwide performance planning processes under GPRA provide opportunities for agencies to work together to ensure that agency goals for crosscutting programs complement those of other agencies; program strategies are mutually reinforcing; and, as appropriate, common performance measures are used. 
If GPRA is effectively implemented, the governmentwide performance plan and the agencies’ annual performance plans and reports should provide the Congress with new information on agencies and programs addressing similar results. Once these programs are identified, the Congress can consider the associated policy, management, and performance implications of crosscutting programs as part of its oversight of the executive branch. Credible performance information is essential for the Congress and the executive branch to accurately assess agencies’ progress towards achieving their goals. However, limited confidence in the credibility of performance information is one of the major continuing weaknesses with GPRA implementation. The federal government provides services in many areas through state and local governments; thus, both program management and accountability responsibilities often rest with those governments. In an intergovernmental environment, agencies are challenged to collect accurate, timely, and consistent national performance data because they rely on data from the states. For example, earlier this spring, the Environmental Protection Agency identified, in its fiscal year 2000 performance report, data limitations in its Safe Drinking Water Information System due to recurring reports of discrepancies between national and state databases, as well as specific misidentifications reported by individual utilities. Also, the Department of Transportation could not show actual fiscal year 2000 performance information for measures associated with its outcome of less highway congestion. Because such data would not be available until after September 2001, Transportation used projected data. According to the department, the data were not available because they are provided by the states, and the states’ reporting cycles for these data do not match its annual performance reporting cycle.
Discussing data credibility and related issues in performance reports can provide important contextual information to the Congress. The Congress can use this discussion, for example, to raise questions about the problems agencies are having in collecting needed results-oriented information and the cost and data quality trade-offs associated with various collection strategies. | This testimony discusses the Government Performance and Results Act (GPRA) of 1993. During the last decade, Congress, the Office of Management and Budget, and executive agencies have worked to implement a statutory framework to improve the performance and accountability of the executive branch and to enhance executive branch and congressional decisionmaking. The core of this framework includes financial management legislation, especially GPRA. As a result of this framework, there has been substantial progress in the last few years in establishing the basic infrastructure needed to create high-performing federal organizations. The issuance of agencies' fiscal year 2000 performance reports, in addition to updated strategic plans, annual performance plans, and the governmentwide performance plans, completes two full cycles of annual performance planning and reporting under GPRA. However, much work remains before this framework is effectively implemented across the government, including transforming agencies' organizational cultures to improve decisionmaking and strengthen performance and accountability. |
The term “broadband” commonly refers to Internet access that is high speed and provides an “always-on” connection, so users do not have to reestablish a connection each time they access the Internet. Broadband service may be “fixed”—that is, providing service to a single location, such as a customer’s home—or “mobile”—that is, providing service wherever a customer has access to a mobile wireless network, including while on the move, through a mobile device, such as a smartphone. Broadband providers such as cable companies (e.g., Comcast) and telecommunications companies (e.g., AT&T) sell broadband services to individual consumers. Broadband provides Internet connectivity at various speeds. In 2016, FCC reported that fixed services typically provide greater speeds than mobile services. In 2015, FCC set a benchmark speed of 25 megabits per second (Mbps) download and 3 Mbps upload—“25 Mbps/3 Mbps”—for fixed service to be considered as providing Americans with access to advanced telecommunications capability, but FCC has not set a similar benchmark for mobile broadband. We use FCC’s benchmark speed of 25 Mbps/3 Mbps for purposes of identifying whether a fixed Internet service provides broadband. We identify a mobile service as broadband if it uses the LTE standard, an industry standard that is part of the fourth generation of wireless telecommunications technology, which is currently in common use. At present, some mobile service providers are testing the fifth generation (5G) of wireless technology. Broadband providers extensively deploy and maintain infrastructure for fixed and mobile broadband. Fixed service generally requires that wires or cables be installed from infrastructure close to the consumer’s location. This process can require attachment to utility poles or installation beneath roadways. Fixed service can also be provided by non-wired means, such as via satellites. This infrastructure connects to service providers’ linkages with the Internet.
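The classification rule used in this report, that a fixed service qualifies as broadband if it meets the 25 Mbps/3 Mbps benchmark and a mobile service qualifies if it uses the LTE standard, can be sketched as a simple check. The dictionary fields and function name below are illustrative assumptions, not FCC's actual data schema.

```python
# Sketch of the report's broadband classification rule. The record
# layout ("type", "download_mbps", etc.) is an illustrative assumption.

FIXED_DOWNLOAD_MBPS = 25  # FCC's 2015 download benchmark for fixed service
FIXED_UPLOAD_MBPS = 3     # FCC's 2015 upload benchmark for fixed service

def is_broadband(service: dict) -> bool:
    """Apply the report's criteria to one hypothetical service record."""
    if service["type"] == "fixed":
        # Fixed service must meet both parts of the 25 Mbps/3 Mbps benchmark.
        return (service["download_mbps"] >= FIXED_DOWNLOAD_MBPS
                and service["upload_mbps"] >= FIXED_UPLOAD_MBPS)
    if service["type"] == "mobile":
        # Mobile service counts as broadband if it uses the LTE standard.
        return service["standard"] == "LTE"
    return False
```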
The process of gaining access to such infrastructure or installing wires or cable can require permits from local or other government entities or utility companies. Figure 1 illustrates several different types of fixed services through which consumers can access broadband. Mobile service requires the installation of antennas that provide service to consumers within a coverage area and may require the construction of a tower on which to place the antenna. To install antennas, providers must obtain permits from government entities with jurisdiction over an antenna’s location or permission from public utilities to deploy antennas on utility poles. Like fixed service providers, mobile service providers must extensively deploy wires or cables to connect their antennas to the Internet—the final connection with the consumer from the antenna is wireless. A key difference between mobile and fixed service is that mobile service provides connectivity to consumers wherever they are covered by service, including while on the move, while fixed service provides connectivity to consumers in a static location, such as a home. Mobile service also requires radio frequency spectrum (spectrum), which mobile service providers use to transmit data. FCC regulates interstate and international communications, including broadband service. It is directed by five commissioners, including one who serves as chairman. FCC is tasked with developing and enforcing regulations; reviewing transactions, such as mergers involving telecommunications companies; licensing spectrum to commercial users, such as broadband providers; and issuing reports on topics related to broadband. FCC’s regulatory authority covers a variety of issues that can affect broadband deployment, such as rates that certain utilities can charge broadband providers for access to utility poles. 
When FCC develops regulations or issues certain reports, it solicits comments and input from the public, which can include stakeholders, such as broadband providers, consumer advocates, and industry experts. FCC reviews mergers and other transactions that involve the transfer of FCC licenses, such as for commercial use of spectrum. Before a company may assign an FCC license to another company or acquire a company that is already holding a license, FCC is required to approve of the merger or other transaction. FCC is responsible for licensing spectrum for commercial use, which it does through auctions in which prospective users can bid for the rights to spectrum licenses. FCC collects data and issues reports on several topics related to broadband service. Twice a year, it collects data on broadband subscription, deployment, and service quality. It collects data from providers on deployment of fixed broadband in census blocks and data on mobile broadband coverage in discrete geographic areas. These data provide information regarding the number of fixed and mobile broadband providers reporting that service is deployed in at least a part of any given census block. These data also provide information regarding speed, such as the highest upload and download speeds of fixed broadband services that a service provider advertises in a census block. FCC also collects some fixed broadband price data as part of its annual Urban Rate Survey. FCC issues reports on broadband-related topics, including its annual Broadband Progress reports, Mobile Wireless Competition reports, and Measuring Broadband America reports. FCC’s data, as of December 2015, indicate that about 39 percent of the population resides in a census block where two or more fixed broadband providers report that service is deployed at 25 Mbps / 3 Mbps or higher speeds in at least part of the census block (see fig. 2). 
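The deployment statistic described above, the share of the population residing in census blocks where at least a given number of providers report service, amounts to a population-weighted count. The sketch below uses invented sample blocks; FCC's actual census-block data are far more detailed.

```python
# Illustrative version of the statistic FCC reports: the share of the
# population living in census blocks where at least n providers report
# deployment. The sample blocks below are made-up data, not FCC's.

def share_with_providers(blocks, n):
    """Population-weighted share of people in blocks with >= n providers."""
    total = sum(b["population"] for b in blocks)
    served = sum(b["population"] for b in blocks if b["providers"] >= n)
    return served / total

blocks = [
    {"population": 600, "providers": 3},  # dense block, several providers
    {"population": 300, "providers": 2},
    {"population": 100, "providers": 1},  # sparse block, one provider
]
```

With these invented blocks, 90 percent of the population lives where two or more providers report service, and everyone lives where at least one does.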
Its data, as of December 2015, indicate that approximately 99 percent of the population has LTE coverage from at least two mobile broadband providers and that approximately 89 percent have LTE coverage from at least four such providers (see fig. 3). Experts and stakeholders told us that access and associated costs related to infrastructure, spectrum, and video content are barriers to entry in the broadband market. As discussed later, some of these barriers exist in areas where FCC has taken actions, such as infrastructure access and spectrum licensing. Experts and stakeholders we spoke to told us that the costs of deploying broadband infrastructure are barriers to entry for any potential new entrant. These costs can vary by area. For instance, an expert from a broadband provider said that the cost of deploying infrastructure is a more significant barrier to entry in rural areas than in urban areas. Rural areas tend to have conditions such as low population-density or difficult terrain that can increase a provider’s cost of deploying and maintaining broadband networks. For example, mountains in some rural areas may physically block mobile providers’ signals from reaching consumers. Furthermore, in rural areas, the cost of deploying broadband infrastructure is higher on a per-subscriber basis because rural areas have fewer potential subscribers from whom providers can recoup expenses than urban areas. According to a representative from a broadband association, in an urban area, a fixed provider can run cable to an apartment complex that may house hundreds of consumers, whereas in some rural areas, the population is too low to support a single fixed provider given the need for these providers to install wires or cables to each consumer’s property. Consequently, there is often a higher level of competition among fixed service providers in urban areas and progressively less competition away from these areas. 
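The per-subscriber cost point above reduces to simple arithmetic: the same deployment cost recovered from fewer potential subscribers yields a higher cost per subscriber. All figures below are hypothetical, chosen only to illustrate the ratio.

```python
# Hypothetical illustration of the rural/urban per-subscriber cost
# argument described above. All dollar and subscriber figures are invented.

def cost_per_subscriber(deployment_cost, subscribers):
    """Deployment cost spread across the potential subscriber base."""
    return deployment_cost / subscribers

# Same $1 million deployment cost, very different subscriber bases.
urban = cost_per_subscriber(1_000_000, 5_000)  # dense area
rural = cost_per_subscriber(1_000_000, 200)    # sparse area
```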
Figure 4 of the Dallas-Fort Worth, Texas urban area illustrates an example of more service providers in an urban center and fewer providers as distance from the urban center increases. Experts and stakeholders also told us that regulations and rules regarding permitting, pole attachments, and rights-of-way are barriers to deploying broadband infrastructure. For example, a broadband provider told us providers must obtain permits from utilities, municipalities, and other government officials before they can install antennas and other necessary equipment. An expert from a broadband provider noted that the processes for acquiring these permits are sometimes tailored for smaller deployments of infrastructure. This can favor existing broadband providers that are making relatively small additions to their network, but can create delays for a new provider since establishing a network requires a larger scale deployment of infrastructure. An expert from a broadband association added that fees associated with these permits can be costly. Aside from fees, regulations associated with permitting processes can result in delays, which also increase costs for potential market entrants. For example, an expert from a broadband provider described how getting access to poles for the installation of broadband infrastructure can take 2 to 3 months due to state regulations specifying the amount of time that existing companies have to make room on the poles for new providers. This expert explained that these delays become more significant when building out fixed connections to consumers’ homes due to the large number of poles—sometimes thousands per week—that service providers must access to effectively deploy their infrastructure. Beyond the costs and delays that providers face, a representative of a utility association noted that installing wires or cables under a road can decrease that road’s lifespan, leading to increased costs for the municipality.
Further, this representative told us that wires or cables can sometimes run through a municipality without providing that community with service, ultimately leading to increased costs for the municipality without any offsetting benefits. Mobile providers rely on spectrum to transmit broadband service through the air, but according to an economist, acquiring spectrum licenses can be very expensive. Furthermore, spectrum is a finite resource. According to a representative of a consumer advocacy organization, much of the most valuable spectrum is already licensed to existing broadband service providers. Some of this spectrum is also used by the federal government. The representative also said spectrum at lower frequencies is valuable because lower frequencies are able to travel greater distances. This allows companies that hold licenses to lower frequency spectrum bands to use fewer antennas (high-frequency spectrum bands require more antennas, as discussed later). According to a representative of an association for mobile broadband providers, because much of this low frequency spectrum is already licensed, potential competitors may be at a disadvantage since there is little such spectrum left for them to license. An expert in mobile technology told us that greater sharing of spectrum already assigned to commercial and government users may reduce the extent to which spectrum is a barrier to entry and facilitate more competition. For example, this expert told us that sharing such spectrum can help potential entrants to the broadband market by allowing them to lease spectrum from these existing users at a lower cost than purchasing it through an FCC auction. As discussed later, FCC has taken some actions to facilitate spectrum sharing, including addressing some regulatory barriers, and to provide additional spectrum for the provision of wireless service. 
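The observation above that lower-frequency spectrum travels farther, so license holders need fewer antennas, can be illustrated with the standard free-space path-loss formula. The formula itself is a well-known radio engineering relation and does not appear in the report; it is used here only as a sketch of why signal loss grows with frequency over a fixed distance.

```python
import math

# Standard free-space path-loss formula, in decibels, with distance in
# kilometers and frequency in MHz. Used only to illustrate the report's
# point that higher-frequency signals lose more strength over the same
# distance; this formula is not taken from the report.

def fspl_db(distance_km, freq_mhz):
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

# Same 5 km path, two hypothetical frequency bands.
low = fspl_db(5, 600)    # low-band signal
high = fspl_db(5, 2400)  # signal at four times the frequency
```

Quadrupling the frequency adds about 12 dB of loss over the same path, which is why higher-frequency deployments need denser antenna placement.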
In the fixed broadband market, experts and stakeholders identified the ability to offer video content from television networks as an often important factor in determining a service provider’s ability to be a viable competitor. Many broadband providers are also television providers, offering packages with multiple television networks for paid subscription. These broadband providers “bundle” their broadband service with their television service. An economics and antitrust expert told us that the practice of bundling such video content with broadband service confers a competitive advantage and that it is generally more expensive for consumers to purchase those services separately. A representative from a broadband provider added that customers often expect video content service in addition to broadband service. Acquiring video content from television networks can be expensive for smaller broadband providers. Television networks charge providers fees for delivering their content to consumers. According to the economics and antitrust expert, a provider with more subscribers has more bargaining ability and can therefore negotiate lower fees for programming. A representative of a broadband provider association noted that this negotiating pattern favors larger incumbent providers that have larger subscriber bases. The relatively higher programming fees for any potential smaller or new competitor are, therefore, a barrier to entry to the broadband market. The economics and antitrust expert added, however, that in the future consumers may drop television service subscriptions due to the increasing availability of video content on the Internet, such as through services like Netflix. The experts and stakeholders we spoke to told us that there has been consolidation in the broadband industry marked by several horizontal and vertical mergers in recent years, and some experts expect more in the years ahead. Figure 5 below illustrates a horizontal and a vertical merger. 
A horizontal merger represents the consolidation of two companies that offer the same services, such as the merger of two providers of cable-based broadband. A horizontal merger may reduce competition in a market because the merger is the union of two prior competitors, resulting in a decrease in the number of competitors in a market. Such mergers are reviewed by FCC for potential effects on consumers, as discussed later. Vertical mergers—mergers that involve companies in a buyer-seller relationship, such as broadband and television providers (buyers) and television networks (sellers)—can also affect competition. An expert in antitrust litigation said that vertical mergers have the potential to limit competition if new services acquired by a broadband provider can affect the business of its competitors. For example, a fixed broadband and television provider that acquires a television network important to its competitors may increase the costs that those competitors pay for that network’s content. FCC cited this as a concern in the proposed merger between Comcast, a television and broadband provider, and NBCUniversal, a television network and video content producer. An expert from a consumer advocacy group told us he expects more future vertical mergers between broadband and television providers and video content producers because video content is an expensive input for broadband and television providers. According to experts and stakeholders we spoke to, fixed and mobile broadband services are not fully substitutable for one another, but may be in the future. An expert from a broadband association noted that these services are becoming increasingly similar to one another and that this similarity is likely to become more pronounced. Competition may increase in such a scenario because increasing similarity could result in the two services becoming substitutes, and therefore lead to fixed providers facing competitive pressure from mobile providers. 
A number of factors demonstrate the similarity between fixed and mobile services, for example:

Increasingly similar infrastructure: Industry experts said that the infrastructure for the 5G wireless network in higher-frequency spectrum bands will require high-density deployments of small antennas because 5G will use spectrum that transmits data over shorter distances than existing mobile technology. Consequently, such 5G networks will rely on an extensive installation of fiber-optic cables to provide high-speed Internet connections to these antennas that serve relatively small coverage areas. According to an economist with expertise in the broadband industry, this reliance on building out fiber-optic cables is similar to fixed broadband infrastructure deployment in that it relies heavily on the installation of wires or cables. This same economist noted that this might result in less competition because some mobile companies may be unable to keep up with the costs required to install and maintain additional wires or cables and other infrastructure necessary for some 5G service. Just as mobile broadband providers are deploying infrastructure that is similar in some respects to fixed providers, an expert from a broadband infrastructure association said that fixed providers are deploying infrastructure similarly to mobile providers. For example, one fixed broadband provider told us it is building out Wi-Fi hotspots—areas that allow its customers to remotely connect their devices to the Internet through Wi-Fi—with the goal of providing subscribers connectivity away from home while they remain in range of those hotspots.

Increasingly similar speed: Experts and stakeholders said that, if providers successfully deploy it, 5G is likely to provide customers with speeds comparable to those typically received via fixed broadband. 
However, a broadband industry expert noted that consumers’ demand for speed may continue to grow, particularly due to technologies that require fast Internet speeds such as higher video resolutions and virtual reality. Further, an industry expert told us that mobile providers’ future speed increases will likely be surpassed by fixed providers’ future speed increases. Under such a scenario, improvements in fixed providers’ speeds could possibly limit the degree to which mobile speeds become comparable to those provided by fixed service.

Increasingly similar video content: According to an industry expert, media transmitted via the Internet, such as Netflix, is becoming more popular with consumers. According to a separate expert from a broadband provider association, mobile broadband providers are increasingly looking to offer unlimited data service plans, which could make mobile providers more competitive with fixed providers. According to an industry stakeholder, while subscribers have been able to stream video on their mobile devices, streaming video typically counts against a subscriber’s monthly data allowance. This stakeholder noted that generally subscribers to fixed broadband service do not have such monthly data allowances. Recently, however, national mobile broadband providers have begun offering subscribers unlimited video content that would not count against their monthly allowance. One industry expert told us that unlimited plans may make mobile service more substitutable with fixed service. The expert said that these unlimited plans, however, do not offer the same level of video resolution available on fixed connections, making the service more appropriate for smaller mobile devices than larger televisions. 
FCC has used its rulemaking process and other actions to develop regulations and rules intended to reduce costs and delays associated with deploying broadband infrastructure, for example:

In 2014, FCC issued new regulations that, among other things, aimed to quicken environmental and historic reviews related to deployment of wireless infrastructure and clarify FCC’s timelines for states and municipalities to complete review of wireless applications. In 2016, to further support those new regulations, the agency entered into an agreement with the National Conference of State Historic Preservation Officers and the Advisory Council on Historic Preservation to exclude some types of small wireless infrastructure from certain historic review processes.

In 2015, FCC revised its regulations to address a disparity in the rates that utilities could charge telecommunications carriers versus cable providers for attaching their equipment to poles. According to FCC, keeping pole attachment rates low and consistent through these revised rules would support broadband deployment and competition.

In March 2017, FCC established a Broadband Deployment Advisory Committee to identify regulatory barriers to infrastructure investment and make recommendations to the commission on how to reduce or remove such barriers in order to accelerate broadband deployment.

In April 2017, FCC sought public input on rules to: (1) accelerate broadband deployment by removing barriers to wireline infrastructure investment at the federal, state, and local level; (2) speed the transition from copper and other older infrastructure to fiber-optic cables and other infrastructure that supports broadband; and (3) reform FCC regulations that increased costs and slowed broadband deployment. 
In April 2017, FCC also sought comment on additional ways to expedite wireless facility deployment, including expediting state and local processing of wireless facilities siting applications and potential modifications to the processes for historic preservation and environmental reviews of such applications. Stakeholders we spoke to told us that FCC’s rulemakings and other actions to reduce infrastructure costs and delays are helpful in supporting broadband deployment and, thus, competition. For example, representatives of an industry association told us that FCC’s 2016 agreement with the National Conference of State Historic Preservation Officers and the Advisory Council on Historic Preservation will reduce costs and time frames associated with deploying wireless infrastructure and subsequently promote greater competition. Representatives of another industry association added that FCC’s rules to keep pole attachment rates low and consistent would help reduce costs and uncertainties that providers experience when deploying wires or cable on utility poles. Other stakeholders told us they believe additional FCC efforts are needed to address barriers to deploying broadband service. For example, representatives of a company that provides mobile broadband service noted that FCC’s efforts to streamline access to utility poles were a step in the right direction but that additional efforts were needed, including steps to require timely access to utility poles for providers to deploy infrastructure for broadband service. In April 2017, the agency proposed additional changes to its pole attachment rules in two proceedings that may address some stakeholder concerns, including steps to require utility companies to provide more timely access to utility poles. Among other things, FCC proposed actions to speed broadband provider access to utility poles and establish a 180-day period for FCC resolution of pole access complaints by providers. 
The agency also sought comment on improving state and local infrastructure reviews, such as zoning requests, and how the FCC’s rules and procedures for complying with the National Historic Preservation Act and National Environmental Policy Act can be modified to minimize costs and delays. FCC has auctioned spectrum and taken other actions to facilitate wider access to spectrum, for example:

In 2015, FCC adopted a new bidding preference for rural telephone companies to help them to acquire spectrum.

In 2015, FCC also adopted new rules to facilitate greater spectrum sharing, including removing barriers to commercial use of some spectrum that was previously reserved for federal use.

In 2016, FCC adopted its “Spectrum Frontiers” proposal, in which it began to identify and make available new spectrum capable of supporting advances in 5G technologies.

From 2016 to 2017, FCC also conducted its first “incentive auction,” designed to repurpose spectrum currently used for broadcast television for use in providing mobile broadband. Among other things, the auction allowed broadcast TV providers to give up spectrum in return for payment and, in doing so, allowed broadband providers to use this spectrum for broadband service.

Industry stakeholders told us that FCC’s actions to facilitate spectrum access were helpful to increasing competition in the broadband market. For example, representatives of a fixed broadband provider association noted that FCC’s Spectrum Frontiers initiative freed up spectrum that could enable a fixed provider to deploy mobile service and thus compete with existing mobile broadband providers. Representatives of an industry association for mobile providers added that FCC had helped smaller companies compete for spectrum by setting smaller geographic license sizes in some spectrum auctions. 
The representatives said that this has allowed smaller companies to more effectively compete against larger companies because it can be difficult for smaller companies to compete for spectrum licenses when such licenses cover larger geographic areas. Representatives of a consumer advocacy group added, however, that FCC’s efforts have not been effective at helping smaller companies compete for access to spectrum because, according to these representatives, it can still be too costly for some companies to acquire spectrum, regardless of the agency’s efforts. As discussed, FCC is required to review mergers and other transactions between telecommunications companies. In doing so, FCC is required to determine whether the proposed transaction, such as a merger between two companies, would serve the public interest, convenience, and necessity, and preserve and promote competition. According to FCC officials, the agency examines not only whether competition would be harmed by a transaction but also whether it would be enhanced. FCC may approve such a transaction outright or with conditions. For example, in 2015, it reviewed and approved a transfer of licenses between AT&T, a telecommunications company, and DIRECTV, a satellite-based television provider. In reviewing and approving the proposed transfer, FCC required the new combined company to build out fiber-optic cables for broadband to 12.5 million locations to help offset what the agency determined would be reduced competition as a consequence of the merger. Representatives of an industry association told us that FCC’s transaction review process has helped competition by restricting transactions among large companies that would make it more difficult for smaller broadband providers to compete. 
Representatives from a consumer advocacy group similarly noted that the agency’s transaction review process has supported competition by keeping large broadband providers from merging and, thus, reducing the number of options for consumers. However, as previously discussed, some experts and stakeholders told us they believe that more industry mergers may be inevitable given the high costs, such as for video content. Although FCC has taken a number of actions to promote broadband competition, it has not assessed how well these actions have been working toward that end. FCC officials told us that the agency’s actions, including regulations, spectrum auctions, and merger reviews, were either ongoing or too recent for FCC to fully evaluate their effect on competition. Further, FCC officials noted that evaluating the effectiveness of its actions on competition can be difficult because it often takes several years before such actions can have a measurable effect, and that during that time, factors beyond the agency’s influence can affect competition, such as changes in consumer demand for broadband. Stakeholders’ views varied regarding the effectiveness of FCC’s actions to promote competition. For example, some stakeholders said that FCC had taken helpful steps to address barriers providers face in deploying broadband infrastructure, while others noted that additional efforts were needed. A broadband provider added that FCC needed to do more to help ensure that its actions were keeping pace with the quickly evolving market. Further, as indicated by FCC’s broadband data, competition does not exist in all areas. As discussed above, about half of Americans have access to only one fixed broadband provider, and although most Americans have access to multiple choices for mobile broadband service, FCC and experts acknowledge that fixed and mobile service are not fully substitutable for one another. 
While challenges may exist to a full evaluation of the effect of FCC’s actions in promoting competition, the agency has other ways through which it could obtain input on its actions and assess how well they are working. Specifically, FCC regularly solicits input from stakeholders and others on a variety of issues, such as how to benchmark speed, to inform its annual broadband progress reports. FCC has sought input for these reports from stakeholders on actions it should consider taking moving forward to promote broadband competition, but it has not sought such input on how well its actions are working to promote broadband competition. Having additional input on the effectiveness of the agency’s actions could help FCC better understand whether its range of approaches is successful in promoting competition, as well as whether those actions remain relevant in the face of emerging factors that could affect competition. Factors that we previously discussed, including industry consolidation and the development of 5G technologies, have the potential to significantly change the broadband market and thus have implications for competition. For example, an expert from a broadband provider told us that 5G technology may be too costly for some providers to remain competitive, leading to a potential reduction in the number of mobile broadband providers. Federal standards for internal control, which provide a framework for identifying and addressing major performance and management challenges facing agencies, stress the importance of obtaining information from external sources that may have a significant effect on an agency achieving its goals. Without input from stakeholders and others affected by these actions, FCC may be missing key information to help it determine if any changes are needed in its approach for promoting competition. 
FCC has reported that competition can help consumers get lower prices and higher service quality from their broadband providers; however, the agency has not identified an approach to regularly examine how competition affects broadband prices and service quality. A stated purpose of the Telecommunications Act, which amends the Communications Act of 1934, is securing lower prices and higher quality services for consumers and encouraging the rapid deployment of new telecommunications technologies through FCC action to promote competition and reduce regulation. Specifically, the Act requires FCC to annually assess whether advanced telecommunications capability—a term that, as discussed, encompasses broadband—is being deployed to all Americans in a reasonable and timely fashion. In 2011, FCC considered collecting broadband price and service quality data from providers as part of its biannual collection of data on broadband deployment but decided against doing so. According to FCC officials, the agency did not pursue collection of these data given the response to its inquiry, including providers’ concerns about the burden of submitting such data. For example, representatives of an association for broadband providers stated that broadband price data are highly variable because of promotional pricing, such as temporary lower introductory rates, and that clearly identifying the price of broadband is challenging when a consumer is paying for a bundled package with video content or other services. FCC collects data and issues reports on broadband deployment, which can help FCC and congressional decision-makers understand where consumers have broadband service and how many service providers they have to choose from, among other metrics related to consumers’ experience with broadband. For example, FCC collects broadband deployment and subscription data from certain broadband providers. 
FCC publishes some of this and other information in its annual Broadband Progress Report, providing some information on the extent of fixed broadband deployment and speeds in given areas of the country. This report shows that the number of broadband providers varies considerably depending on where a consumer is located, with urban areas generally having more provider options than rural areas. FCC also analyzes subscription data on Internet access mainly for fixed service in its Internet Access Services reports. For mobile broadband, the agency annually reports industry and financial data in its Mobile Wireless Competition Reports, including assessments of deployment, subscribership, and price metrics. Further, FCC collects actual speed data and annually compares fixed providers’ speed data with their advertised speeds in its Measuring Broadband America reports. FCC collects some fixed broadband price data as well through a survey of urban broadband service providers. FCC’s data and reports, as discussed, provide information on the extent of broadband deployment and other indicators of consumer experience with broadband service, but these data and reports do not show how broadband prices and service quality vary based on the number of choices that consumers have for broadband service. FCC officials told us that it is difficult to assess the effect of competition on broadband price and service quality without data showing prices and service quality indicators by the number of providers in a given area. Stakeholders we spoke to did not have consistent views about whether having more or fewer providers serving selected markets had effects on price and service quality in all markets. For example, representatives of a broadband provider noted that when it entered a market in which there was previously only one broadband provider, the other provider lowered its prices and offered higher quality service to customers. 
In contrast, some stakeholders noted that competition in a market does not necessarily mean that consumers will pay lower prices or have higher quality service. For example, representatives of one broadband provider told us that some providers use national or regional pricing and service plans and that it may not be practical to change these plans in areas with more or fewer competitors. Further, an industry expert told us that the high cost of deploying fixed broadband infrastructure may prevent a provider from offering lower prices or improving its service when faced with competition because the provider has to recoup its initial investment. As discussed earlier, experts and stakeholders noted the potential for further industry consolidation and the increasing similarity of fixed and mobile services. While some experts and stakeholders noted that the increasing similarity of fixed and mobile services could lead to more competition because fixed and mobile providers would compete with one another, others told us that these developments could also lead to fewer choices for consumers and, possibly, higher prices and less pressure to improve service quality. For example, as discussed, industry consolidation could lead to fewer broadband choices for consumers. Further, representatives of a consumer advocacy group noted that the costs of deploying 5G technology may lead to either consolidation or the exit of some existing mobile providers, which they added could lead to higher prices due to the smaller number of providers that remain in the market. As noted earlier, federal standards for internal control stress the importance of obtaining information from external sources that may have a significant impact on an agency achieving its goals. 
While additional data collection may not be a viable approach, given challenges such as isolating prices for broadband from prices for other services in a bundled package, FCC has alternative methods of information collection that could help it regularly examine the effects of competition on price and service quality for consumers. Specifically, FCC seeks comments from stakeholders and others on a number of topics to inform its annual broadband progress reports. FCC reviews and includes reference to these comments in its annual reports. However, the agency has not sought comments for these reports on how the number of broadband providers affects the prices and service quality that consumers experience with broadband service. Such information could inform FCC’s actions to promote competition in an effort to secure lower prices and higher quality broadband services for consumers. The broadband industry is subject to ongoing and emerging developments that may include industry consolidation and increasing similarity of fixed and mobile service options. FCC provides a wealth of information on broadband, including annual reports that describe the state of the broadband market and present opportunities for FCC to solicit feedback on its actions from stakeholders and others in the public. Despite FCC’s efforts, about half of Americans have access to only one fixed broadband provider. While most Americans have several choices for a mobile broadband provider, fixed and mobile service do not provide the same experience. Moving forward, FCC could take steps to better understand how well its actions to promote broadband competition are working. In particular, by using its established process for soliciting public input as part of its annual reporting on the broadband market, FCC could gain useful insight on whether its actions are working as anticipated or, if not, how they might be corrected. 
Further, while FCC collects a variety of data related to broadband and reports on a variety of issues related to consumers’ broadband experience, it does not examine how broadband competition affects the prices and service quality that consumers experience. FCC’s past experience demonstrates that additional collection of these data may not be viable. As noted, however, FCC has an established process for seeking public input that could help the agency better understand the effect of competition on broadband prices and service quality. Such information could help FCC and other decision makers better prioritize and focus FCC’s various efforts to promote broadband competition to secure lower prices and higher quality service for consumers in a rapidly evolving market. We are making the following two recommendations to the Chairman of the FCC: As part of its annual reporting on the broadband market, FCC should solicit and report on the views of stakeholders and others on how well FCC’s actions promote broadband competition. (Recommendation 1) As part of its annual reporting on the broadband market, FCC should solicit and report on the views of stakeholders and others on how varying levels of broadband deployment affect broadband prices and service quality. (Recommendation 2) We provided a draft of this report to FCC for review and comment. FCC concurred with our recommendations and provided technical comments, which we incorporated, as appropriate. FCC’s comments are reprinted in appendix III. We are sending copies of this report to the appropriate congressional committees and the Chairman of the FCC. In addition, the report is available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. 
GAO staff who made major contributions to this report are listed in appendix IV.

Jonathan Adelstein, Wireless Infrastructure Association
Phillip Berenbroick, Public Knowledge
Richard Clarke, AT&T
Gerald Faulhaber, the University of Pennsylvania
George Ford, the Phoenix Center for Advanced Legal and Economic
Shane Greenstein, Harvard Business School
Russell Hanser, Wilkinson Barker Knauer (moderator)

How should “competition” in the broadband market be defined and measured?
What is known about the extent of competition in the residential broadband market today?
How have consumers been affected by the current level of competition in the fixed broadband market?
How have consumers been affected by the current level of competition in the mobile broadband market?
How has the content market been affected by the current level of competition in broadband?
How are broadband providers affected by competition?
What factors most hinder competition in broadband markets?
What factors attract competitors to a broadband market?
Which factors can government most affect? Which can government least affect?
How can industry support competitive broadband markets?
How is the state of competition in the broadband market likely to change in the next 5 years? What will be the likely effects of these changes on the broadband market?
What is the appropriate role, if any, for the Federal Communications Commission (FCC) with regard to broadband competition?
What should be FCC’s top priorities with regard to broadband competition from your perspective?

This report covers (1) selected experts’ and stakeholders’ views on the factors currently affecting broadband competition and the factors that may affect it in the future and (2) the actions FCC has taken to promote broadband competition and assess the effectiveness of its actions, as well as to examine consumers’ experience with broadband competition. 
To obtain expert and stakeholder views on factors that affect competition in broadband, we convened a meeting of 19 experts and interviewed 23 stakeholders. Our meeting of experts was held at the National Academy of Sciences (NAS) in February 2017 over one-and-a-half days. Staff from NAS assisted us in identifying experts for the meeting. To identify the experts appropriate for this meeting, NAS relied on staff experience and professional judgment drawn from its Board on Science, Technology, and Economic Policy. We selected the experts with the goal of ensuring that a broad spectrum of views was represented from multiple broadband-related areas, such as those of broadband providers, academia, and consumer and industry groups. The range of the experts’ knowledge included both fixed and mobile broadband services. The meeting was moderated by one individual who guided the other 18 experts through a series of 14 questions that served as the basis for discussion. We developed these questions for the meeting of experts in consultation with NAS staff. This meeting of experts was planned and convened with the assistance of NAS to better ensure that a breadth of expertise was brought to bear in the meeting’s preparation; however, all final decisions regarding its substance and expert participation were our responsibility. The meeting was recorded and transcribed to ensure that we accurately captured the experts’ statements, and we analyzed the transcripts to identify the experts’ key statements regarding factors that affect competition in the broadband market or that may do so in the future. Specifically, we developed categories for expert statements and then coded key portions of the transcript into those categories based on the consensus of multiple analysts. We selected the 23 stakeholders that we interviewed based on our prior telecommunications work, other broadband competition literature, and recommendations from stakeholders we interviewed. 
We selected broadband providers to include companies that offer broadband via a variety of methods, such as satellite, fiber-optic cables, and coaxial cable, among others. We interviewed these stakeholders about their knowledge of factors affecting broadband competition. Stakeholders were from: 8 broadband providers, 7 associations representing broadband providers and utilities, 4 financial services firms, and 4 consumer advocacy groups. With respect to experts and stakeholders, because we asked for their opinions and did not conduct a survey in which every expert and stakeholder could provide a response as to whether a certain issue was relevant for them, we do not enumerate responses in the report. Instead, we analyzed the responses and reported on common themes that arose from our expert meeting and stakeholder interviews. Because we selected a non-generalizable sample of stakeholders and experts to discuss factors that affect broadband competition, the information cannot be used to make inferences about a population. To identify and examine the actions FCC has taken to promote broadband competition, we reviewed statutes and regulations pertaining to FCC’s role with regard to broadband and federal standards for internal control, which provide a framework for improving accountability in achieving an entity’s mission, and interviewed FCC officials about actions taken by FCC to promote competition. We reviewed FCC documentation on actions it has taken to promote competition, including orders, notices of proposed rulemaking, and FCC comments on proposed mergers. We interviewed FCC officials about these actions and how FCC assesses their effectiveness. Further, we asked stakeholders, as identified above, about the effectiveness of FCC’s actions to promote broadband competition. 
We also reviewed information that FCC collects and reports on related to consumer experience with broadband, including its twice-yearly collection of broadband deployment data from broadband providers, as described below, the 2016 Broadband Progress Report, 19th Mobile Wireless Competition Report, and other reports. We assessed broadband deployment using FCC’s fixed and mobile broadband deployment data collected through its Form 477, which broadband providers complete and submit to FCC. We used FCC’s fixed speed benchmark for advanced telecommunications capability of 25 megabits per second (Mbps) download and 3 Mbps upload to classify fixed services as broadband, and Long Term Evolution (LTE) coverage to classify the mobile service as broadband because LTE is used by the mobile industry to identify service as broadband. The data we used presented broadband deployment as of December 2015, the most recent period for which both fixed and mobile data are available. These data include, among other types of information, the names of fixed and mobile providers, the census blocks in which fixed providers offer service, geographic areas covered by mobile providers, whether the fixed service is for residential consumers, the maximum advertised download and upload bandwidth offered by fixed providers, and the type of technology offered by mobile providers. We combined FCC’s data with 2010 U.S. Decennial Census of Population data to determine approximate numbers of U.S. residential consumers who received fixed and mobile broadband service in a given census block and the number of different companies that offer service in those blocks. A census block is the smallest geographic unit used by the Census Bureau for the collection of data; census blocks have an average population of about 28 persons. 
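As a rough illustration of the kind of analysis described above (counting, per census block, how many providers clear the 25/3 Mbps benchmark and attaching block populations), the following sketch uses hypothetical, simplified records; the column names and values are invented for illustration and do not match the actual Form 477 file layout:

```python
import pandas as pd

# Hypothetical records in the spirit of FCC Form 477 fixed deployment data:
# one row per provider per census block, with maximum advertised speeds.
form477 = pd.DataFrame({
    "block_fips":    ["A", "A", "B", "C"],
    "provider":      ["P1", "P2", "P1", "P3"],
    "max_down_mbps": [100, 50, 10, 25],
    "max_up_mbps":   [10, 5, 1, 3],
})

# Hypothetical census block populations (2010 Decennial Census analogue).
census = pd.DataFrame({
    "block_fips": ["A", "B", "C"],
    "population": [30, 25, 40],
})

# Apply the 25/3 Mbps benchmark used to classify fixed service as broadband.
bb = form477[(form477.max_down_mbps >= 25) & (form477.max_up_mbps >= 3)]

# Count distinct broadband providers per block, then attach populations.
providers = (bb.groupby("block_fips")["provider"]
               .nunique()
               .rename("n_providers")
               .reset_index())
merged = (census.merge(providers, on="block_fips", how="left")
                .fillna({"n_providers": 0}))

# Population with access to more than one fixed broadband provider.
multi = merged.loc[merged.n_providers > 1, "population"].sum()
print(merged)
print("Population with 2+ providers:", multi)
```

Note that this mirrors the limitation the report acknowledges: a provider serving any part of a block counts as serving the whole block, so block populations can overstate actual availability.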
We interviewed FCC officials and reviewed relevant documentation to determine the appropriateness and reliability of these data for the purpose of summarizing the deployment of broadband service. To assess the reliability of 2010 census data, we reviewed census documentation. Based on this information, we concluded that these data were reliable for the purpose of creating summary statistics and illustrations of broadband availability by number of U.S. residents. We acknowledge that FCC’s broadband data collected as part of FCC’s Form 477 overstate fixed broadband availability by counting an entire census block as served by providers who serve some, but not necessarily all, of that block. This limitation could be particularly problematic in areas with large census blocks. Despite this limitation, we believe these data represent the best snapshot of fixed broadband availability. Regarding FCC’s Form 477 mobile broadband data, we acknowledge that service quality can vary depending on weather and other interference, as well as the amount of demand being placed on a mobile network at any given time. Despite this limitation, we believe these data provide the best snapshot of mobile broadband availability. We conducted this performance audit from June 2016 to September 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the individual named above, Andrew Huddleston (Assistant Director), James Leonard (Analyst-in-Charge), Melissa Bodeau, Kristen Farole, Camilo Flores, Terence Lam, John Mingus, Malika Rice, Sean Standley, and Walter Vance made key contributions to this report. 
| FCC has a role in promoting competition in the market for broadband, which provides consumers with high-speed Internet through fixed service at home and mobile service through devices such as smartphones. FCC data indicate that about 90 percent of Americans had access to fixed service as of December 2015, but that less than half had more than one choice for such service. As of that time, FCC reported that multiple providers offered mobile broadband coverage to most Americans. Mobile service increasingly allows access to Internet content that was previously accessed primarily through fixed service. GAO was asked to examine factors affecting broadband competition. This report covers (1) selected experts' and stakeholders' views on factors affecting broadband competition and (2) how FCC promotes broadband competition and examines consumers' experience with it. GAO analyzed FCC data as of December 2015; reviewed relevant statutes and FCC documentation; interviewed FCC officials and 23 stakeholders selected to include various types of broadband providers and associations representing industry and consumers; and convened a meeting of 19 experts from academia, industry, and consumer groups with assistance from the National Academy of Sciences. Selected experts and stakeholders told GAO that infrastructure costs and other factors can limit broadband deployment and the extent of broadband competition. Factors these individuals identified included providers' costs to deploy antennas, install wires or cables, and obtain permits to access existing infrastructure. Such infrastructure includes utility poles needed for deploying wired components of broadband networks. These costs can limit competition, particularly in non-urban and less populated areas, where providers' return on investment can be lower due to fewer potential customers. 
Experts and stakeholders also identified industry consolidation and increasing similarity of fixed and mobile broadband as factors that are likely to affect broadband competition moving forward. The Federal Communications Commission (FCC) has undertaken rulemakings, spectrum auctions, and merger reviews to help promote competition, but lacks information on how well these actions promote competition. Despite such actions, about half of Americans have access to only one fixed provider (see figure). FCC has a process for seeking stakeholders' and others' input on broadband-related topics and annually reporting on these views, but does not solicit such input on its actions to promote competition. Such input could help FCC determine if any changes are needed to its actions to support competition relative to current and emerging factors in the broadband market. Further, FCC's annual reports contain some information on consumers' experience with broadband competition, such as the number of provider options. However, these reports do not include stakeholder input on how the number of provider options affects prices and service. Some stakeholders said that competition was important to securing lower prices and better service, while others said competition does not necessarily lead to these benefits because some providers offer the same pricing and service quality everywhere regardless of whether they face competition in a particular location. Regularly seeking stakeholder input on how varying levels of broadband deployment affect price and service quality could help FCC better focus its efforts to secure lower prices and higher quality service for consumers. FCC should annually solicit and report on stakeholder input regarding (1) its actions to promote broadband competition and (2) how varying levels of broadband deployment affect prices and service quality. FCC concurred with GAO's recommendations. |
Established in 1934, Ex-Im operates as an independent agency of the U.S. government and is the official export credit agency of the United States. In 1983, Congress required Ex-Im to make available for fiscal year 1986 and thereafter not less than 10 percent of its aggregate loan, guarantee, and insurance authority for financing exports by small businesses. In 2002, Congress established several new requirements for Ex-Im relating to small business, including increasing from 10 to 20 percent the proportion of Ex-Im’s aggregate loan, guarantee, and insurance authority that must be made available for the direct benefit of small businesses. When reauthorizing the bank’s charter in 2006, Congress again established new requirements for Ex-Im, including a small business division with an office of financing for socially and economically disadvantaged small business concerns and small business concerns owned by women, designating small business specialists in all divisions, creating a small business committee to advise the bank president, and defining standards to measure the bank’s success in financing small business. Ex-Im has taken steps to meet these requirements. Ex-Im uses the Small Business Administration methodology to determine whether a company qualifies as a small business. To apply this methodology, Ex-Im obtains company information through its application process. Ex-Im also subscribes to Dun and Bradstreet, a commercial information vendor, which provides information about companies, including Standard Industrial Classification (SIC) codes. Ex-Im uses the SIC codes provided by Dun and Bradstreet to determine a company’s small business standing by obtaining the corresponding North American Industry Classification System (NAICS) code through the Small Business Administration website. Ex-Im offers a variety of financing instruments, including loan guarantees, export credit insurance, and working capital guarantees. 
Ex-Im provides its insurance either directly to exporters (non-bank-held insurance) or to banks which in turn finance U.S. exporters (bank-held insurance). For the bank-held insurance policies, Ex-Im authorizes the policy for the bank, which does not know at the time it applies for the financing which exporters will eventually use the export credit insurance. Between fiscal years 2002 and 2007, Ex-Im increased the percentage of its financing for small businesses and continued to finance most small business transactions through insurance or working capital guarantees. Ex-Im met the Congressional requirement to make available not less than 20 percent of its financing authority for small businesses in 2006 and 2007. In fiscal year 2006, Ex-Im’s small business financing was 26.2 percent of its total financing and in fiscal year 2007 it increased to 26.7 percent. In fiscal years 2002 through 2005, Ex-Im did not reach the goal, with its small business financing share ranging from 16.9 percent to 19.7 percent. (See fig. 1.) The percent of Ex-Im financing directly benefiting small business depends on the value of small business financing compared to the value of non-small business financing. (See fig. 2.) While the small business financing value slowly increased between fiscal years 2001 and 2007, the value for non-small business financing was noticeably lower in 2006 and 2007, compared to 2005. Ex-Im has primarily used three types of tools to finance small business transactions: non-bank-held insurance, working capital guarantees, and bank-held insurance (see fig. 3). In 2007, each tool was used to finance about 30 percent of the $3.4 billion Ex-Im made available for small business transactions. The remaining 8 percent of small business financing was through medium- and long-term loans and guarantees. This pattern contrasts with non-small business financing, where the largest share is through medium- and long-term loans and guarantees. 
Ex-Im’s use of bank-held insurance has posed some challenges for accurately calculating the small business financing share, in part because Ex-Im does not know who the exporter will be prior to authorizing the bank-held insurance transaction and therefore cannot make a small-business designation at that time. For bank-held insurance and credit guarantee facilities, Ex-Im estimates the share of the financing benefiting small business based on data regarding previous shipments under those types of transactions. These estimates of the small business share of authorized transactions can differ significantly from the small business amounts actually shipped under the authorizations. For example, in 2005 Ex-Im authorized a $10 million short-term insurance policy under which no shipments had been reported prior to our March 2006 report. In contrast, in 2005 Ex-Im also authorized a $50 million short-term insurance policy where shipments under the policy exceeded $87 million for a 6-month period (or $174 million on an annualized basis). In our 2006 report, we found weaknesses in Ex-Im’s data and data systems for tracking small business financing and made recommendations for improvement, and Ex-Im has taken steps to address those weaknesses. We reported that, while Ex-Im generally classified companies’ small business status correctly, weaknesses in its data and data systems limited its ability to accurately determine its small business financing amounts and share. In implementing “Ex-Im Online” and certain internal control measures, Ex-Im has improved its ability to accurately measure small business financing. Based on our review of independent data and Ex-Im’s paper transaction files, GAO reported in 2006 that Ex-Im’s classification of companies’ small business status was generally correct. 
From our review of Ex-Im’s electronic databases and Dun and Bradstreet data on companies’ sales and employment, we estimated that, 83 percent of the time, Ex-Im’s small business designation matched the designation based on Dun and Bradstreet data. Based on a review of Ex-Im’s official paper transaction files in instances where Ex-Im and Dun and Bradstreet’s designations differed, we determined that Ex-Im’s designation was justified in most instances. In our 2006 report, we identified weaknesses in Ex-Im’s process for calculating its small business financing and made some corresponding recommendations for improvement. The weaknesses ranged from internal control weaknesses that may affect only a few transactions a year to more significant weaknesses in Ex-Im’s system for estimating about one-third of its small business support. We reported two internal control weaknesses in Ex-Im data systems used to calculate and report on Ex-Im’s small business financing; by implementing its interactive database, Ex-Im Online, the bank has largely addressed those weaknesses. First, we found that Ex-Im’s electronic data systems used to calculate its small business support did not contain complete or up-to-date information on companies’ small business status. As a result, to obtain the most current information for these companies, Ex-Im officials needed to identify and locate paper transaction files. While Ex-Im’s paper files generally supported its small business designation, we found a significant number of discrepancies between Ex-Im’s paper and electronic files. Second, we found that Ex-Im’s data systems sometimes contained conflicting information for the same company. Ex-Im maintained information about insurance transactions and participants in one data system and information about loans and guarantee transactions and participants in another data system. 
According to Ex-Im, updating information in a company’s record (including its small business designation) in one database did not update the company’s record in the other database. As a result, the two databases could, and in some cases did, have conflicting information about the same company. GAO recommended that Ex-Im improve the completeness, accuracy, and consistency of its transaction data. Since the issuance of the GAO report, Ex-Im Bank has implemented a number of controls to enhance and reinforce the bank’s methodology for capturing relevant information for reporting small business statistics. Most notably, Ex-Im replaced its previous data systems with Ex-Im Online, an interactive, web-based process that allows exporters, brokers, and financial institutions to transact with Ex-Im electronically. According to Ex-Im, more than seventy-five percent of all applications are now submitted online, eliminating the need to transfer information from paper copies to the bank’s electronic files. Ex-Im officials stated that Ex-Im Online also includes a direct feed from Dun and Bradstreet, which provides current demographic information about a company so that Ex-Im can make an accurate assessment of the company’s small business status. In addition to initiating Ex-Im Online, Ex-Im changed its internal procedures to require documented dual signoff on the small business determination for each transaction. We reported two weaknesses in Ex-Im’s system for estimating small business financing when the exporter is not known at the time Ex-Im authorizes the transaction, which applied to about one-third of Ex-Im’s total small business financing for fiscal year 2004. First, we found that Ex-Im’s estimates might not accurately reflect the amount of small business financing under bank-held insurance policies because of large differences between the amount of financing authorized and the amount of financing used to actually ship goods. 
For both fiscal years 2004 and 2005, the value of shipments under bank-held insurance policies was a fraction of the total authorized value of the bank-held insurance policies. For example, according to Ex-Im records, it authorized $3.4 billion of bank-held insurance transactions for fiscal year 2004, but there were only $280 million in shipments under bank-held insurance policies in the first 6 months of the fiscal year. Ex-Im applied its estimate of the small business share of transactions, based on these shipments, to the $3.4 billion of bank-held insurance policies it authorized during the year, and determined that about $720 million of the authorized value of bank-held insurance policies during the year directly benefited small business. Thus, the method resulted in estimates of small business shares for the authorized value of these types of transactions based on a very small share (about 8 percent) of the total authorized value. Also, we found that Ex-Im classified the small business status of a significant portion of the companies making shipments as “unknown” and excluded them from its calculation of the estimate of its small business support. Of the $280 million of shipments under bank-held insurance for 2004, for example, an Ex-Im official classified about $128 million (or nearly half) as shipments by companies whose small business status was “unknown” and excluded these shipments from its calculation of total shipments. GAO recommended that Ex-Im improve its system for estimating the value and proportion of direct small business support for those transactions where the exporter is not known at the time Ex-Im authorizes the transaction. According to Ex-Im, its implementation of Ex-Im Online improves these estimates because borrowers can now enter their shipment reports directly into Ex-Im Online. According to Ex-Im, two-thirds of shipment reports are now being entered in this manner. 
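The estimating method described above reduces to a simple proportion: compute the small business share of known shipments, then apply that share to the total authorized value. The sketch below uses the fiscal year 2004 amounts cited in the text ($3.4 billion authorized, $280 million shipped, $128 million excluded as "unknown"); the small business shipment amount itself is a hypothetical value, back-derived so the example reproduces Ex-Im's roughly $720 million estimate, not a figure from Ex-Im's records.

```python
# Fiscal year 2004 figures cited in the text; sb_shipments is hypothetical,
# chosen so the result matches Ex-Im's ~$720 million estimate.
authorized = 3_400_000_000        # bank-held insurance authorized
total_shipments = 280_000_000     # shipments in first 6 months
unknown_shipments = 128_000_000   # excluded: small business status unknown

known_shipments = total_shipments - unknown_shipments  # $152 million base
sb_shipments = 32_200_000         # hypothetical small business shipments

sb_share = sb_shipments / known_shipments  # ~21 percent of known shipments
estimate = sb_share * authorized           # ~$720 million small business value

# The weakness GAO noted: the share is extrapolated from shipments that
# amount to only ~8 percent of the total authorized value.
base_ratio = total_shipments / authorized
print(round(estimate / 1e6), round(base_ratio, 3))  # → 720 0.082
```

The example makes the leverage in the method visible: small changes in the classification of the $152 million shipment base are multiplied across the full $3.4 billion authorized value.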
Ex-Im officials stated that such automated submission of shipment information has significantly reduced the amount of shipments by exporters whose small business status is unknown. They stated that only 3 percent of the fiscal year 2007 shipments under bank-held insurance were by exporters whose small business status was unknown. They also stated that, for credit guarantee facilities, no shipments were recorded by exporters whose small business status was unknown. GAO also recommended that Ex-Im engage an external auditor to audit its annual, legislatively mandated report on its direct support for small business. Ex-Im engaged Mayer Hoffman McCann P.C., its internal auditor, to perform the audit. With respect to credit guarantee facilities, bank-held policies, and non-bank-held insurance (i.e., single buyer/multi-buyer) policies, the auditors found that Ex-Im’s process to obtain and calculate eligible small business counts operates in accordance with its policy and approved methodology. However, the auditors found exceptions to stated policy during their review of the working capital guarantee and non-credit guarantee facilities programs. For example, in the working capital guarantee program, the auditors noted a number of exceptions related to the completion of data fields that would have “flagged” these accounts as small business. The auditors stated that they believed that Ex-Im management was taking action to strengthen supervisory edit controls over these processes. Ex-Im is statutorily required to report on the number of its authorized transactions that directly benefit small business; in our 2006 report we found that Ex-Im’s method of determining this number included some transactions that did not directly benefit small business. Ex-Im has frequently reported that about 85 percent of its authorized transactions directly benefit small business. 
For instance, in fiscal year 2004, it reported that 2,572 (or 83 percent) of its authorized transactions directly supported small businesses. This count was based on crediting all 698 bank-held insurance policies as directly benefiting small business. We reported that while many of these transactions directly benefit small business, they may not all directly benefit small business, as evidenced by the fact that Ex-Im’s own estimate showed that about 20 percent of the value of bank-held insurance policies directly benefited small business during 2004. GAO recommended that Ex-Im more accurately determine and clearly report the number of transactions that directly benefit small business; however, Ex-Im officials disagreed with this recommendation and have not changed their methodology. Ex-Im officials stated that they reviewed their process and believe that it is appropriate. A senior official also noted that since the methodology has been used for a number of years, the bank can confidently report trends. The bank also believes that its methodology provides a conservative estimate. Since GAO’s last report on small business financing in March 2006, Ex-Im has made a number of changes. It also surpassed the target of allocating 20 percent of its financing to small business for both 2006 and 2007. While this is partly due to a drop in the overall level of financing provided to other customers by the bank, Ex-Im has shown increases in the level of business with small firms over several years. In addition, Ex-Im has made changes in its data systems which allow Congress to have a greater level of confidence in its reporting on small business and other matters, and it has instituted new internal controls to further increase accuracy in categorizing firms’ small business status. Managing its resources going forward to respond to ongoing Congressional interest in the composition of Ex-Im’s financing will, undoubtedly, entail new challenges for the bank. 
We look forward to working with Ex-Im further on issues related to evaluation of its small business financing efforts, including those directed at businesses owned by disadvantaged individuals and minorities, as mandated by the Congress with the strong support of this Committee. Madam Chairwoman, this concludes my prepared remarks. I would be pleased to respond to any questions you or other members of the committee may have at this time. Should you have any questions about this testimony, please contact Loren Yager at (202) 512-4347 or [email protected]. Celia Thomas, Miriam A. Carroll and Jason Bair also made major contributions to this testimony. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | The Export-Import Bank (Ex-Im) provides loans, loan guarantees, and insurance to support U.S. exports. Its level of support for small business has been a long-standing issue of congressional interest. In 2002, Congress increased the proportion of financing Ex-Im must make available for small business to 20 percent. In 2006, Congress directed Ex-Im to make organizational changes related to small business and to better evaluate its small business efforts. This statement discusses (1) trends in Ex-Im's small business financing since fiscal year 2000 and (2) the weaknesses GAO found in the tracking and reporting of Ex-Im's small business financing and the steps Ex-Im has taken to address them. This testimony is based primarily on GAO's March 2006 report (GAO-06-351) concerning Ex-Im's small business program. 
In that report, we recommended that Ex-Im (1) improve the data it maintains on its customers with regard to their small business status; (2) improve its system for estimating the value and proportion of direct small business support for those transactions where the exporter is not known at the time of authorization; (3) more accurately determine and clearly report the number of transactions that directly benefit small business; and (4) have its auditor audit Ex-Im's reporting of its direct support for small business. Ex-Im agreed with three of the four recommendations. We discuss the actions Ex-Im has taken to implement our suggestions in this statement. The share of Ex-Im financing directly benefiting small business has increased over recent years, surpassing the required 20 percent in 2006 and 2007. The percentage increase reflects a slow increase in Ex-Im financing for small businesses, while financing for non-small businesses was noticeably lower in 2006 and 2007 compared to 2005. Ex-Im continues to finance most small business transactions through insurance or working capital guarantees. In our 2006 report, we found weaknesses in Ex-Im's data and data systems for tracking small business financing and made recommendations for improvement, and Ex-Im has taken steps to address those weaknesses. We reported that while Ex-Im generally classified companies' small business status correctly, weaknesses limited its ability to accurately determine small business financing values. For transactions where Ex-Im can identify the exporter at the time it authorizes the transaction, we found that internal control weaknesses in Ex-Im's data systems limited its ability to accurately determine small business financing amounts and share. For transactions where Ex-Im cannot identify the exporter up-front, we found that weaknesses in its system for estimating small business financing also limited its ability to accurately measure and report on such financing. 
We also reported some limitations in Ex-Im's calculation of the number--as opposed to the value--of transactions benefiting small business. GAO made four recommendations. Ex-Im has taken several steps in response to those recommendations. Most notably, Ex-Im replaced its previous data systems with "Ex-Im Online," an interactive, web-based process that allows exporters, brokers, and financial institutions to transact with Ex-Im electronically. According to Ex-Im, this has resulted in more timely and accurate information on Ex-Im's financing. |
Transportation-disadvantaged populations, including those that cannot provide their own transportation due to age, disability, or income constraints, may face challenges in accessing transportation, such as lack of access to public transportation or a private vehicle. For example, according to a 2011 report by the National Council on Disability, people with disabilities are more likely than people without disabilities to report that they have inadequate transportation (34 percent versus 16 percent, respectively). We have previously reported that people in need of transportation often benefit from greater and higher quality services when transportation providers coordinate their operations. In addition, we have reported that coordination has the potential to reduce federal transportation program costs by clustering passengers; using fewer one-way trips; and sharing the use of personnel, equipment, and facilities. Federal agencies, including USDA, Education, HHS, HUD, Interior, DOL, DOT, and VA, play an important role in helping transportation-disadvantaged populations access federal programs by providing funds to state and local grantees. Federal programs that provide funding for transportation cover a variety of services, including education, job training, employment, nutrition, health, medical care, or other human services. As we have previously reported, many federally funded programs purchase transportation services from existing private or public providers. This includes contracting for services with private transportation providers or providing transit passes, taxi vouchers, or mileage reimbursement to program participants, or some combination of these methods. Some programs may use federal funds to purchase and operate their own vehicles. 
DOT and HHS formed the Coordinating Council on Human Services Transportation (Coordinating Council) in 1986 to improve the efficiency and effectiveness of human service transportation by coordinating related programs at the federal level and promoting the maximum feasible coordination at the state and local levels. In 2003, we reported that coordination efforts at the federal, state, and local levels varied greatly, and while some coordination efforts showed promising results, obstacles continued to impede coordination. As a result, we recommended that, among other things, the Coordinating Council be expanded to include additional federal agencies. The Coordinating Council was expanded to 11 federal agencies in 2004 by Executive Order 13330 and renamed the Interagency Transportation Coordinating Council on Access and Mobility. The expanded Coordinating Council was charged with, among other things, promoting interagency cooperation and establishing appropriate mechanisms to minimize duplication and overlap of federal programs and services so that transportation-disadvantaged persons have access to improved transportation services. More recently, in 2011, we reported that reducing or eliminating duplication, overlap, and fragmentation among government programs and activities could save tax dollars and help agencies to provide more efficient and effective services. With regard to transportation services for the transportation disadvantaged, we found that, while some federal agencies were developing guidance and technical assistance for transportation coordination, federal departments still had more work to do in identifying and assessing their transportation programs, working with other departments to identify opportunities for additional coordination, and developing and disseminating policies and grantee guidance for coordinating transportation services. 
As we have previously reported, many federal efforts transcend more than one agency, yet agencies face a range of challenges and barriers when they attempt to work collaboratively. Both Congress and the executive branch have recognized this, and in January 2011, the GPRA Modernization Act of 2010 was enacted, updating the almost two-decades-old Government Performance and Results Act (GPRA). This act establishes a new framework aimed at taking a more crosscutting and integrated approach to focusing on results and improving government performance. As we reported in February 2012, effective implementation of this act could play an important role in clarifying desired outcomes; addressing program performance spanning multiple organizations; and facilitating future actions to reduce unnecessary duplication, overlap, and fragmentation. In recent years, Congress has supported increased transportation coordination, as reflected in the Safe, Accountable, Flexible, Efficient Transportation Equity Act: A Legacy for Users (SAFETEA-LU). Enacted in 2005, SAFETEA-LU amended several human services transportation coordination provisions, sharpening the focus on transportation services for persons with disabilities, older adults, and individuals with lower incomes. Currently, the law requires the establishment of a locally developed, coordinated, public transit-human services transportation plan for all of DOT’s human service transportation programs administered by the Federal Transit Administration (FTA). Further, it requires the plan to be developed by a process that includes representatives of public, private, and nonprofit transportation and human services communities, including the public. Federal law also has promoted coordinated funding for non-DOT programs to be used as matching funds for specific transportation programs. 
More recently, FTA’s fiscal year 2013 budget request proposed consolidating some existing programs to give communities more flexibility in designing and coordinating FTA-sponsored human service programs. We identified 80 federal programs that fund a variety of transportation services for transportation-disadvantaged populations (see fig. 1). Thirty-one of these programs are administered by HHS. The Departments of Education and HUD each administer 12 programs; DOT administers 7 programs; and DOL, VA, Interior, and USDA administer 18 programs combined. Out of the 80 federal programs identified, 4 programs focus expressly on supporting transportation services for transportation-disadvantaged populations, including DOT’s Capital Assistance Program for Elderly Persons and Persons with Disabilities, Job Access and Reverse Commute Program, Capital and Training Assistance Program for Over-the-Road Bus Accessibility, and the New Freedom Program. A full list of programs is in appendix II. Transportation is not the primary mission for the vast majority of the programs we identified. Except for the 7 DOT programs, where all funds are used to support public transportation, the remaining 73 programs we identified primarily provide a variety of human services, such as job training, employment, education, medical care, or other services, which incorporate transportation as an eligible program expense to ensure participants can access a service. In addition, the types of transportation services provided to the transportation-disadvantaged population through these federal programs vary, and may include capital investments (e.g., purchasing vehicles), reimbursement of transportation costs (e.g., transit fares, gas, bus passes), or direct provision of transportation service to program clients (e.g., operating vehicles). 
Examples of transportation services authorized for funding include the following: HHS’s Medicaid program reimburses states that provide Medicaid beneficiaries with bus passes to access eligible medical services, among other transportation options. DOL’s Workforce Investment Act-funded programs can provide funding for transportation services so that recipients can access employment and participate in required work activities. Types of transportation services include bus passes and cab fare. DOT’s Job Access and Reverse Commute Program allows for grantee agencies to purchase vehicles such as vans to improve access to transportation for employment-related services. VA’s Beneficiary Travel Program, as part of Veterans Medical Care Benefits, can provide mileage reimbursement to low-income or disabled veterans for travel to receive medical services at their VA hospital. Total spending on transportation services for the transportation disadvantaged remains unknown because, in many cases, federal departments do not separately track spending for these services. Of the 80 programs we identified, roughly two-thirds of the programs were unable to provide spending information for eligible transportation services. However, total expenditures and obligations for the 28 programs that do track or estimate transportation spending were at least $11.8 billion in fiscal year 2010 (see table 1). DOT’s 7 programs accounted for about $9.5 billion of this total amount. Of the non-DOT programs, HHS’s Medicaid program and VA’s Veterans Medical Care Benefits program each reported spending over $700 million in fiscal year 2010. Most of the programs we identified do not separately track transportation spending. According to federal officials, transportation spending may not be tracked for several reasons, including the following: Some programs allow for transportation spending as an optional service, but it is not required so they do not ask grantees to provide spending information. 
For example, HHS’s Head Start program, which provides comprehensive child development services to low-income children and their families, reported that many of its grantees may provide transportation, but the agency does not collect specific data on transportation spending. Some federal programs give states and localities broad flexibility to administer program funds, and the program structure may not lend itself to tracking transportation expenses. For example, Education provides grants to states under the Individuals with Disabilities Education Act (IDEA) for special education and related services to children with disabilities. State education agencies allocate most of these grant funds to local education agencies, usually school systems, to provide these services. Education does not collect data on the amount of funds expended by local education agencies for specific services, including transportation services. Some agencies may consider transportation services to be an administrative expense, and may include transportation spending with other eligible administrative expenses. As a result, transportation-specific spending is not fully known. For example, HHS’s Medicaid program has two allowable methods for states to report the costs of transportation services to the program—as expenditures for nonemergency medical transportation benefits or as an administrative expense, which is combined with other nontransportation expenses. As a result, HHS does not fully capture the total transportation costs provided under its Medicaid program. Further, the resources necessary to track this information in some federal departments may outweigh the potential benefits. HUD officials, for example, told us that for some HUD programs, requiring grantees to report transportation expenses would require a new reporting effort and that the resulting information may not be analyzed due to resource constraints.
The interagency Coordinating Council, chaired by DOT, has been charged with leading governmentwide transportation coordination efforts since 2003. The Coordinating Council launched the “United We Ride” initiative in fall 2003, designed to establish an interagency forum for communication and help states and communities overcome obstacles to coordination. The Coordinating Council undertook a number of activities through its United We Ride initiative, largely between 2003 and 2007. Coordinating Council actions included issuing publications such as policy statements and progress reports on efforts taken, providing funding through FTA to help states and localities promote coordinated services and planning, and supporting technical assistance efforts (see table 2). For example, the Coordinating Council’s 2005 Report to the President outlined the council’s action plan for implementing the 2004 executive order, reported on the council’s accomplishments, and made specific recommendations to improve human services transportation coordination. The Coordinating Council is structured in several levels, including the Secretary-level members, an Executive Council consisting of senior-level appointees from each member agency, and interagency working groups (seven in fiscal year 2011) that cut across issue areas at the programmatic level. The Coordinating Council is staffed by officials from FTA. The Secretary-level members of the Coordinating Council last met in 2008. However, according to DOT officials, more recent Coordinating Council efforts have taken place at the working group level. Further, the Coordinating Council has been operating without a strategic plan that identifies agency roles and responsibilities, measurable outcomes, or required follow-up. According to agency officials, the Coordinating Council is drafting a strategic plan, but officials were unable to provide an estimate for when the plan might be finalized.
As previously discussed, the executive order contained reporting and recommendation requirements, resulting in the 2005 Report to the President and the 2007 Progress Report. However, since those reports, no other guidance document has been created, or is required, to report on actions taken or to plan additional actions. We have previously reported that defining and articulating a common outcome, agreeing on agency roles and responsibilities, and reinforcing agency accountability through agency plans and reports are important elements for agencies to enhance and sustain collaborative efforts. Further, we have reported that federal agencies engaged in collaborative efforts need to create the means to monitor, evaluate, and report on their efforts to enable them to identify areas for improvement. There are several practices involved in strategic planning that could be useful to help the Council determine and communicate its long-term goals and objectives. However, without a plan to help reinforce agency goals and responsibilities, the Coordinating Council may be hampered in articulating a strategy to help strengthen interagency collaboration and lack the elements needed to remain a viable interagency effort. Cost-sharing policy: A joint cost-sharing policy has not been endorsed by all Coordinating Council members, even though development of a cost allocation policy was one of the recommendations of the Coordinating Council in its 2005 Report to the President. According to the 2005 report, a major obstacle to sharing transportation resources has been the difficulty of reaching agreements at the local level about the appropriate allocation of costs to each agency. Federal, state, and local agency officials that we spoke with noted that this continues to be a significant impediment. 
Further, participants in a discussion hosted by the National Academy of Public Administration in 2009, which brought together key stakeholders to discuss ways to improve access to reliable transportation for the transportation disadvantaged, said that explicit and clear cost-sharing guidance is needed to address significant federal policy barriers to coordination. Coordinated transportation planning: Coordinating Council members pledged to take actions to accomplish federal program grantee participation in locally developed, coordinated planning processes as part of their 2006 Coordinated Human Service Transportation Planning Policy Statement, but it is unclear whether the Coordinating Council’s members have consistently followed through on their 2006 pledge. According to the Coordinating Council’s 2006 policy statement, federal grantees’ participation in their local human services transportation planning process is necessary to reduce duplication of services, increase service efficiency, and expand access for transportation-disadvantaged populations. However, the discussion hosted by the National Academy of Public Administration in 2009 indicated that the process for creating coordinated transportation plans continues to need improvement and recommended that Coordinating Council members with grant programs create incentives for their grantees to participate in coordinated planning at the state and local levels. According to participants, while the Coordinating Council has issued a joint policy on coordinated planning, challenges remain to fully engage agencies that are not funded by DOT in the planning process at the local levels. DOT’s FTA is the only agency that has adopted a coordinated human services transportation planning requirement, which has resulted in broadened participation in the transportation planning processes.
Coordination of services is also challenging due to differences in federal program requirements and perceived regulatory or statutory barriers, according to officials. For example, coordinated planning is generally only a requirement for FTA-funded human service transportation programs, and while a handful of programs may encourage coordination, other federal program rules are unclear about coordination of transportation services between programs. Also, programs may have perceived or actual statutory or regulatory barriers related to sharing costs, or have differences in service requirements and eligibility. For example, HHS’s Medicaid program is the largest source of federal funds for nonemergency medical transportation for qualified low-income beneficiaries; however, the Centers for Medicare & Medicaid Services (CMS) officials expressed concern about coordinating transportation services due to concerns about commingling federal program funds and the potential for fraud. CMS has issued rules that allow states to contract with one or more transportation brokers to manage their Medicaid transportation to, among other things, reduce costs. However, these rules could result in fragmented transportation services at the state and local levels because some brokers transport only Medicaid-eligible beneficiaries, and may not coordinate their transportation services with other programs. In another example, VA officials explained that VA only has the authority to provide transportation at the agency’s expense to certain qualifying veterans and nonveterans in relation to VA health care, but has no legal authority to transport nonbeneficiaries. State and local officials in the five states we selected used a variety of coordinated planning and service efforts to serve the transportation disadvantaged. 
One way that states facilitate coordination efforts is through statewide coordinating bodies—some created by legislative actions and others by executive order or initiative—to oversee the implementation of coordinated transportation for transportation-disadvantaged populations in their states. State coordinating bodies can help to facilitate collaboration between federal, state, and local agencies by providing a venue for agencies to discuss and resolve transportation issues to better coordinate transportation activities related to the provision of human services and enhance services for transportation-disadvantaged populations. Three of the five states we selected had state coordinating bodies in 2010. In addition to state coordinating councils, efforts include regional and local planning, one-call centers, mobility managers, and vehicle sharing (see table 3). Several state and local agency officials said that federal requirements for the establishment of locally developed, coordinated public transit-human services transportation plans for FTA’s human service transportation programs have had a positive impact on transportation coordination in their state. According to officials, these planning efforts help to bring relevant stakeholders to the table to discuss needs for the transportation disadvantaged and to resolve problems. For example, in Virginia, the Department of Rail and Public Transportation has taken the lead in implementing this requirement, assisting 21 planning district commissions to formulate human services transportation coordination plans for their districts, and formulating a statewide plan which draws from these local plans. According to officials of a regional planning commission in Virginia, transportation coordination in the state would not be at the same point it is currently without these requirements.
These officials said that the federal requirements created one place for people to come together to learn what programs are available, raise awareness, and avoid duplication. Also, a Virginia Regional Transit official told us that the increased communication among agencies due to coordinated planning efforts made it possible for providers to transport more people, including those who were not currently being served, thus opening access to larger and broader groups of people. State and local entities’ efforts to coordinate services for the transportation disadvantaged are not without challenges. According to officials, challenges include insufficient federal leadership, changes to state legislation and policies, and limited financial resources in the face of growing unmet needs. Several state and local officials told us that there is not sufficient federal leadership and guidance on how to coordinate transportation services for the transportation disadvantaged and that varying federal program requirements may hinder coordination of transportation services. State and local officials in four out of the five states we selected said that with the exception of DOT, other federal agencies were not actively encouraging transportation coordination. For example, Texas Department of Transportation officials told us there is a disconnect between human services and transportation agencies and that the general perception is that other human services programs, such as some of those funded by HHS, are exempt from coordination. These officials also said that federal leadership is needed to promote buy-in for transportation coordination among human services agencies and transportation agencies at the state level. 
Officials in each of the five states that we selected said that the federal government could provide state and local entities with improved guidance on transportation coordination—especially as it relates to instructions on how to share costs across programs (i.e., determining what portion of a trip should be paid by whom). State and local officials in Virginia, Texas, and Washington identified a fear of losing federal funding if they improperly shared funding with other federal programs. These officials said that federal cost-sharing guidance would help facilitate transportation coordination between programs. Further, state Medicaid officials said that their main priority is to make sure they are following Medicaid requirements, and some officials expressed concerns about their ability to ensure Medicaid funds are being appropriately spent and properly accounted for if they coordinated with other programs. For example, Medicaid officials in one state said that they would need to obtain approval from CMS before adopting any cost-sharing strategies with other programs to ensure the appropriateness of their state program’s expenditures. When we spoke with CMS officials, they told us that CMS is not opposed to coordinating transportation services; however, the agency does have concerns that coordination would result in Medicaid funds being improperly commingled with other federal program funds. Several state and local officials said that varying federal government program requirements may hinder the provision of transportation services and act as barriers to coordination. A regional planning official in Washington told us that varying program requirements may discourage transportation coordination as one program’s requirements may not be suitable for another program’s clients. For example, if two different program clients were to share school vehicles for special needs populations, each program might have a separate set of rules and requirements. 
Determining whether drivers meet drug and alcohol testing requirements for both programs could be a challenge, according to this official. Similarly, an official from the Florida Department of Transportation told us that the federal government could do more to identify standards and requirements that act as barriers to coordination. In Wisconsin, a Department of Transportation study found that key challenges to coordinating transportation services in the state include program regulations or requirements that impede coordination, including different guidance and restrictions on how federal funding could be spent. Officials we interviewed in four states identified recent changes in state legislation or state policies as potential challenges to coordinating services for the transportation disadvantaged in their states. According to these officials, such changes have caused some uncertainty in their efforts to coordinate human services transportation in the future. For example, some state coordinating bodies’ authority has not been renewed or is about to expire: Executive order not renewed: In Wisconsin, the governor charged a group of individuals from a number of state agencies to form a state coordinating council in 2005—the Interagency Council on Transportation Coordination (ICTC). In addition to sponsoring a statewide coordination conference in 2007, ICTC contracted with a national consultant to develop a Wisconsin Model of Coordination with implementation strategies. Intended outcomes of this model included increasing the quantity and quality of existing transportation resources, supporting and encouraging local coordination efforts, and improving transportation service for users. However, due to a downturn in the economy and a change in the state’s administration after the model’s completion, its findings were not implemented. Because the new administration did not renew the executive order establishing ICTC’s authority, ICTC has been inactive since January 2011. 
Enabling legislation to expire: In Washington, the state legislature created the Agency Council on Coordinated Transportation (ACCT) in 1998 to coordinate with state and local agencies and organizations to provide affordable and accessible transportation choices for the transportation disadvantaged. Over the years, ACCT has facilitated coordination by helping to form transportation coalitions that include human services representatives, transit services, and community transportation providers. These coalitions plan regional public transportation, evaluate and prioritize project proposals, and implement local coordination strategies. However, enabling legislation for ACCT expires in June 2012 and officials do not expect the legislation to be renewed. In some states, officials were uncertain about how recent developments may affect their state Medicaid program’s participation in state and local efforts to coordinate transportation services for the transportation disadvantaged. For example, in an effort to control program costs, state legislation was signed into law in Florida in June 2011 that moves the responsibility for Medicaid nonemergency medical transportation from the coordinated transportation system run by the Florida Commission for the Transportation Disadvantaged to a private managed care system. An official with the Florida Commission for the Transportation Disadvantaged said that it is not known whether the managed care system will choose to operate within the state’s coordinated transportation system or contract with private transportation brokers outside of the coordinated system, which could result in duplication of transportation services. Similarly, officials in Texas and Wisconsin told us that, in an effort to control costs, their state Medicaid program is moving to a transportation brokerage system. 
According to some state and local officials, these brokers typically only transport Medicaid-eligible clients and do not often coordinate their transportation services with other federally funded programs. CMS maintains that their brokerage rule does not preclude state Medicaid agencies from coordinating transportation services, as long as they comply with all applicable Medicaid policies and rules and ensure that Medicaid funds are only used for Medicaid services provided to eligible beneficiaries. A number of state and local officials in our five selected states told us that limited financial resources and growing unmet needs were challenges for them. In Texas, state and local officials told us that although it is believed that coordination will save costs in the mid- to long-term, state budgets are being reduced in transit and social services agencies, as well as in municipal programs and nonprofit organizations. According to these officials, some agencies and their potential partners find it difficult to come up with funding, even when it is a modest local match for grants. Similarly, state and local officials in Virginia told us that state and local match requirements may preclude some entities from applying for federal funds. State and local officials also mentioned that limited financial resources often promote turf battles—or a mistrust and unwillingness to share resources for fear of losing control of them. Conversely, some officials told us that limited resources were an incentive to coordinate because coordination made the best use of limited resources. In the face of limited financial resources, state and local officials are also concerned about growing disadvantaged populations and unmet needs— both now and in the future. As part of the discussion hosted by the National Academy of Public Administration in 2009, participants identified continuing transportation gaps in programs across the federal government. 
Several state and local officials that we spoke with also expressed concern about their ability to adequately address expected growth in elderly, disabled, low-income, and rural populations. A local transit agency official in Virginia, for example, told us that there is a great need for transportation services for the elderly and disabled and that the need is increasing. This agency official questioned whether transportation providers will have adequate funding and resources to meet this growing demand. In a presentation before the state Senate in 2011, Florida’s Commission for the Transportation Disadvantaged reported that, statewide, 3.75 million trips had been denied to passengers in the coordinated transportation system during the past 5 years due to a lack of funding or for other reasons. Nevertheless, the commission expects the state’s transportation-disadvantaged population to undergo steady growth over the next decade. In addition, a number of state and local entities were concerned about populations in rural areas—primarily because public transportation availability was limited in these areas. The Coordinating Council was created to, among other things, promote interagency cooperation and minimize duplication and overlap of federal programs providing transportation services to transportation-disadvantaged populations. While some member agencies, including DOT, have remained active in pursuing these goals, sustained interagency activity through the Coordinating Council has lost momentum in recent years. The 11 Coordinating Council members have not met since 2008 and the Executive Council designees have not met since 2007. According to some federal officials, this lack of leadership at the Coordinating Council poses challenges to federal-, state-, and local-level coordination efforts. Further, the Coordinating Council has been operating without a strategic plan to help determine and communicate its long-term goals and objectives.
While Executive Order 13330 spurred Coordinating Council activity beginning in 2004, sustained agency commitment has proved challenging. We have previously reported that articulating a common outcome, agreeing on agency roles and responsibilities, and reinforcing agency accountability through agency plans and reports are important elements for agencies to sustain and improve collaborative efforts. A collaborative interagency strategic planning effort could help to provide the direction and momentum the Coordinating Council needs at this time. Finally, we have previously reported that federal agencies engaged in collaborative efforts need to create the means to monitor, evaluate, and report on their efforts to enable them to identify areas for improvement. It is difficult to fully assess activities of the Coordinating Council, in part, because the council has not reported on its activities or reported on progress implementing its own recommendations since 2007. At that time, the Coordinating Council reported that it was working to establish cost-sharing principles for transportation coordination that federal human service and transportation agencies could endorse; however, we found that the council had not accomplished this goal as of June 2012. Also in 2007, the council reported it had issued a policy statement encouraging federally assisted grantees involved in human services transportation to participate in local coordination planning processes. As part of that policy statement, members of the Coordinating Council agreed to take action to implement the policy within 6 months of council adoption; however, it is unclear what implementation actions agencies have taken to date. 
Despite recent actions that some agencies have taken to encourage coordination and provide technical assistance, without any means to monitor, evaluate, or report on interagency efforts, the Coordinating Council may face barriers to identifying areas for improvement and pursuing its goal of improving transportation services for transportation-disadvantaged populations. To promote and enhance federal, state, and local coordination activities, we recommend that the Secretary of Transportation, as the chair of the Coordinating Council on Access and Mobility, and the Secretaries of the Departments of Agriculture, Education, Health and Human Services, Housing and Urban Development, Interior, Labor, and Veterans Affairs, as member agencies of the Coordinating Council, should meet and take the following actions:

Complete and publish a strategic plan for the Coordinating Council, which should, among other things, clearly outline agency roles and responsibilities and articulate a strategy to help strengthen interagency collaboration and communication.

Report on the progress of Coordinating Council recommendations made as part of its 2005 Report to the President on Implementation of Executive Order 13330 and develop a plan to address any outstanding recommendations, including the development of a cost-sharing policy endorsed by the Coordinating Council and the actions taken by member agencies to increase federal program grantee participation in locally developed, coordinated planning processes.

We provided USDA, Education, HHS, HUD, Interior, DOL, DOT, and VA with a draft of this report for their review and comment. In commenting on a draft of this report, Education and VA generally agreed with our conclusions and recommendations. Education also provided technical and written comments, which appear in appendix III. HHS, HUD, and DOT neither agreed nor disagreed with the report and provided technical comments.
In their technical comments, DOT officials stated that, as chair of the Coordinating Council, they have been working with the council to refocus its efforts away from policy discussions to the coordination of on-the-ground services, such as through the Veterans Transportation and Community Living Initiative Grant Program, which is discussed in this report. USDA, Interior, and DOL did not comment on our report. We incorporated the technical and clarifying comments that we received from the agencies, as appropriate. We are sending copies of this report to interested congressional committees and the Secretaries of Agriculture, Education, Health and Human Services, Housing and Urban Development, Interior, Labor, Transportation, and Veterans Affairs. We also will make copies available to others upon request. In addition, this report will be available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions about this report, please contact David Wise at 202-512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. To identify federal programs that provide funding for transportation services for the transportation disadvantaged, we examined prior GAO work on the topic, conducted an online search of the Catalog of Federal Domestic Assistance, and requested program information from federal agency officials for the programs identified. We included only federal programs that provide nonemergency, nonmilitary, surface transportation services of any kind, targeted to transportation-disadvantaged populations.
We then asked program administrators to review and verify the programs identified and the program information collected, including the general target population, types of transportation services and trips typically provided, and program spending on transportation services in fiscal year 2010. We supplemented and modified the inventory based on this information. In addition, we reviewed the relevant federal laws governing these programs, including their popular title or original source of program legislation and the U.S. Code or other provision cited as authorizing transportation. To determine what federal coordination efforts have taken place since we last fully reported on this issue in 2003 and what challenges remain, we conducted interviews with program officials from eight federal agencies—the Departments of Agriculture, Education, Health and Human Services, Housing and Urban Development, Interior, Labor, Transportation, and Veterans Affairs—and reviewed relevant documentation provided by agency officials. We chose these agencies because they administered programs that were authorized to provide funding for transportation services for the transportation disadvantaged in fiscal year 2010 and were identified by executive order to participate in coordination. We also interviewed officials from the National Resource Center for Human Service Transportation Coordination and interviewed or corresponded with transportation researchers and representatives from relevant industry and advocacy groups, including the American Public Transportation Association, the Association of Metropolitan Planning Organizations, Easter Seals Project ACTION, and the National Conference of State Legislatures. To identify the types of coordination that have occurred at the state and local levels, we conducted interviews with state and local officials from five states—Florida, Texas, Virginia, Washington, and Wisconsin.
We based our selection of these states on a variety of characteristics, including size of target populations per state, geographic diversity, existence of a state coordinating body, and states deemed notable for their transportation coordination efforts. As part of our state and local interviews, we spoke with officials from state and local human services and transportation agencies, metropolitan planning organizations, transportation providers, interest and advocacy groups, and others, and reviewed relevant documentation. Because we used a nongeneralizable sample of states, our findings cannot be used to make inferences about other states. However, we determined that the selection of these states was appropriate for our design and objectives and that the selection would generate valid and reliable evidence to support our work. Table 4 provides more detailed information about the state and local entities we interviewed. We also interviewed the appropriate United We Ride Regional Ambassadors for each state. In addition, we reviewed relevant literature and prior GAO and Congressional Research Service reports, as appropriate. We conducted this performance audit from June 2011 to June 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

[Appendix table: federal programs authorized to fund transportation services for the transportation disadvantaged, listing for each program the authorizing legislation (e.g., the Older Americans Act of 1965, the Individuals with Disabilities Education Act, the Workforce Investment Act of 1998, the Housing and Community Development Act of 1974), the U.S. Code or other provision cited as authorizing transportation, the target population, the types of transportation services provided, the purpose of trips, and fiscal year 2010 federal spending on transportation (obligated, expended, or estimated, where available).]

Spending was reported by program officials, and we did not verify the information. Amounts obligated or expended on transportation are given, depending upon the information available. When actual information was not available, agency officials provided estimates. Figure was amount obligated in fiscal year 2010 for 32 van grants, and grantees have 5 years to spend these funds, according to program officials. In addition to the individual named above, other key contributors to this report were Heather MacLeod, Assistant Director; Rebekah Boone; Brian Chung; Jennifer Clayborne; Jean Cook; Bert Japikse; Delwen Jones; and Sara Ann Moessbauer. | Millions of Americans are unable to provide their own transportation or have difficulty accessing public transportation. Such transportation-disadvantaged individuals may include those who are elderly, have disabilities, or have low incomes.
The Departments of Education, Health and Human Services (HHS), Labor (DOL), Transportation (DOT), Veterans Affairs (VA), and other federal agencies may provide funds to state and local entities to help these individuals access human service programs. As requested, GAO examined (1) federal programs that may fund transportation services for the transportation disadvantaged; (2) federal coordination efforts undertaken since 2003; and (3) coordination at the state and local levels. GAO analyzed information from the Catalog of Federal Domestic Assistance; interviewed federal officials; and interviewed state and local officials in five states, chosen based on a variety of characteristics, including geographic diversity. Eighty federal programs are authorized to fund transportation services for the transportation disadvantaged, but transportation is not the primary mission of most of the programs GAO identified. Of these, the Department of Transportation administers 7 programs that support public transportation. The remaining 73 programs are administered by 7 other federal agencies and provide a variety of human services, such as job training, education, or medical care, which incorporate transportation as an eligible expense in support of program goals. Total federal spending on transportation services for the transportation disadvantaged remains unknown because, in many cases, federal departments do not separately track spending for these services. However, total funding for the 28 programs that do track or estimate transportation spending, including obligations and expenditures, was at least $11.8 billion in fiscal year 2010. The interagency Coordinating Council on Access and Mobility, which the Secretary of Transportation chairs, has led governmentwide transportation coordination efforts since 2003. 
The Coordinating Council has undertaken a number of activities through its United We Ride initiative aimed at improving coordination at the federal level and providing assistance for state and local coordination. For example, its 2005 Report to the President on Human Service Transportation Coordination outlined collective and individual department actions and recommendations to decrease duplication, enhance efficiencies, and simplify access for consumers. Key challenges to federal interagency coordination efforts include a lack of activity at the leadership level of the Coordinating Council in recent years (the Coordinating Council leadership has not met since 2008) and the absence of key guidance documents for furthering agency coordination efforts. For example, the Coordinating Council lacks a strategic plan that contains agency roles and responsibilities, measurable outcomes, or required follow-up. GAO has previously reported that defining and articulating a common outcome and reinforcing agency accountability through agency plans and reports are important elements for agencies to enhance and sustain collaborative efforts. State and local officials GAO interviewed use a variety of planning and service coordination efforts to serve the transportation disadvantaged. Efforts include state coordinating councils, regional and local planning, one-call centers, mobility managers, and vehicle sharing. For example, state coordinating councils provide a forum for federal, state, and local agencies to discuss and resolve problems related to the provision of transportation services to the transportation disadvantaged. In other examples, one-call centers can provide clients with transportation program information and referrals for appropriate service providers, and mobility managers may serve many functions, such as policy coordinators, operations service brokers, and customer travel navigators.
However, state and local governments face several challenges in coordinating these services, including insufficient federal leadership, changes to state legislation and policies that may hamper coordination efforts, and limited financial resources in the face of growing disadvantaged populations. To promote and enhance federal, state, and local coordination activities, the Secretary of Transportation and the Coordinating Council should meet to (1) complete and publish a strategic plan; and (2) report on progress of recommendations made by the Council in its 2005 Report to the President and develop a plan to address outstanding recommendations. Education and VA agreed with GAO's recommendations. HHS, DOL, DOT, and other federal agencies neither agreed nor disagreed with the report. Technical comments were incorporated as appropriate. |
DOD policy defined an operational range as an area used to conduct research, develop and test military munitions, or train military personnel. Operational ranges were considered active when regularly used for range activities, and inactive when not currently used but still under military control and available for use as a range. Once a range is closed, DOD is required to identify, assess, and clean up or take other appropriate action in response to contamination by military munitions. As such, DOD’s current inventory of operational ranges represents a potential liability for future cleanup. Figures 1, 2, and 3 show examples of the types of ordnance and explosives that can be found on operational ranges. Section 313(a)(1) of the National Defense Authorization Act for Fiscal Year 2002 required DOD to provide Congress with a comprehensive assessment of unexploded ordnance, discarded military munitions, and munitions constituents at current and former DOD facilities. The law required the assessment to include an estimate of the aggregate projected cost of remediation (or cleanup) at operational ranges, to be presented as a range of costs including a low and high estimate, and delivered to Congress in 2003 in DOD’s report on the Defense Environmental Restoration Program. In April 2003, DOD reported its estimate for the total cost to address the potential liability associated with unexploded ordnance, discarded military munitions, and munitions constituents at operational ranges to be between $16 billion and $165 billion. To provide Congress with estimated costs to clean up operational ranges, DOD used inventory data available at the time of its April 2003 report, which counted 10,444 operational ranges located in the United States and its territories. At the direction of Congress, only operational ranges in the United States and its territories were to be considered for the purpose of estimating cleanup costs. 
According to DOD, these cost estimates were supported by individual service estimates, which in turn were supported by summary information on the number of operational ranges and acreage assumed to contain a high density of unexploded ordnance and munitions constituents—such as target areas, detonation sites, and demolition areas—and the percentage of acreage assumed to contain a low density of contamination from unexploded ordnance and munitions constituents, such as buffers, training areas, and maneuver areas. The services continued to inventory operational ranges under section 366 of the National Defense Authorization Act for Fiscal Year 2003, which required DOD to inventory operational ranges to address training range sustainment and encroachment concerns and submit the inventory to Congress as part of the President’s fiscal year 2005 budget request early in calendar year 2004. The scope of this inventory effort addressed operational range training and testing capacities and capabilities, and specific constraints on the use of operational ranges, but did not specifically include data on the cleanup of unexploded ordnance, discarded military munitions, or munitions constituents. We previously reported that the two key elements needed to develop operational range cleanup costs were (1) an accurate and complete operational range inventory and (2) a consistent methodology for estimating costs. Reliable cost estimates can be critical information for DOD and Congress when considering the potential benefits of closing operational ranges or entire installations versus the potentially very high cost of cleaning up such sites. However, such estimates must be based on accurate data that, in the case of operational ranges, begins with a complete and accurate operational range inventory. The costs for cleaning up ranges can be extensive.
For example, DOD estimates it will cost $22.6 million to clean up Fort McClellan in Alabama, recommended for closure under DOD’s base realignment and closure program in 1995, and $247 million to clean up Fort Ord in California, closed in 1994. DOD officials explained that wide variations in cost can be attributed to a number of factors, such as future land use, technical complexities, and the difficulty of locating, recovering, and destroying ordnance beneath the ground surface. DOD’s operations at military installations and operational ranges in the United States are subject to laws and regulations governing a variety of environmental concerns, from water quality to the treatment and disposal of hazardous wastes. These laws include the Safe Drinking Water Act, the Clean Water Act, RCRA, the Federal Facility Compliance Act, and CERCLA. DOD is also generally required to comply with state and local environmental statutory and regulatory requirements on its installations and operational ranges. DOD has proposed that Congress specifically exempt it from requirements to clean up unexploded ordnance, munitions, and munitions constituents on operational ranges under RCRA and CERCLA. The Safe Drinking Water Act authorizes EPA to issue national primary drinking water regulations setting maximum contaminant level standards for drinking water that must be met by public water systems. EPA may authorize states to carry out primary enforcement authority for implementing the Safe Drinking Water Act if, among other things, the state adopts drinking water regulations that are no less stringent than the national primary drinking water regulations. EPA has set standards for approximately 90 contaminants in drinking water, including microorganisms, organic chemicals, inorganic chemicals, disinfectants, disinfection byproducts, and radioactive substances.
None of the more than 200 chemical contaminants associated with munitions use are currently regulated under the Safe Drinking Water Act. The 1996 amendments to the Safe Drinking Water Act required EPA to establish criteria for a monitoring program for unregulated contaminants (where a maximum contamination level has not been established) and to publish a list of contaminants—chosen from those not currently monitored by public water systems—to be monitored. EPA’s regulation, referred to as the Unregulated Contaminant Monitoring Regulation, was issued in 1999 and supplemented in 2000 and 2001. The purposes of the regulation are to determine whether a contaminant occurs at a frequency and in concentrations that warrant further analysis and research on its potential effects and to possibly establish future drinking water regulations. The first step in the current program required public water systems serving more than 10,000 customers (and a sample of 800 small public water systems serving fewer than 10,000) to monitor drinking water for perchlorate and 11 other unregulated contaminants over a consecutive 12-month period at any point between 2001 and 2003, and report the results to the EPA. Under this regulation, some DOD installations were required to monitor drinking water for perchlorate and other munitions-related contaminants and to report the results. The Clean Water Act authorizes EPA to regulate the discharge of pollutants into waters in the United States. EPA may authorize states to carry out a state program in lieu of the federal program if the state program is at least equivalent to the federal program and provides for adequate enforcement. Under the Clean Water Act’s National Pollution Discharge Elimination System (NPDES) program, facilities discharging pollutants into waters of the United States are required to obtain an NPDES permit from EPA or authorized states.
NPDES permits include specific limits on the quantity of pollutants that may be discharged and require monitoring of those discharges to ensure compliance. EPA’s list of the toxic pollutants subject to regulation under the Clean Water Act includes nitrobenzene, a chemical that is on DOD’s list of 20 constituents of greatest concern. RCRA requires owners and operators of facilities that treat, store, and dispose of hazardous waste, including federal agencies, to obtain a permit specifying how their facilities will safely manage the waste. Under RCRA’s corrective action provisions, facilities seeking or holding RCRA permits can be required to clean up their hazardous waste contamination. The corrective actions can be specified in the facility’s operating permit, in a separate corrective action permit, or through an enforcement order. EPA also has authority under RCRA to order a cleanup of hazardous waste when there is an imminent and substantial endangerment to public health or the environment. EPA may authorize states to administer their own programs in lieu of the federal program, as long as these programs are equivalent to and consistent with the federal program and provide for adequate enforcement. EPA’s regulations define hazardous wastes to include those that are specifically listed in the regulations as well as those that are “characteristic wastes.” Characteristic hazardous wastes are defined as wastes that are ignitable, corrosive, reactive, or toxic. A federal district court in California recently ruled, in part, that perchlorate is a hazardous waste under RCRA because it is ignitable. Under section 107 of the Federal Facility Compliance Act of 1992, EPA was required, in consultation with DOD and the states, to issue a rule identifying when military munitions become hazardous waste under RCRA, and to provide for protective storage and transportation of that waste. 
Under the rule issued by EPA, military munitions are subject to RCRA when, among other things, (1) unexploded munitions or their constituents are buried or otherwise disposed of, or (2) when used or fired munitions are taken off-range. CERCLA governs the cleanup of releases or threatened releases of hazardous substances, pollutants, or contaminants. CERCLA’s definition of a hazardous substance includes substances regulated under various other environmental laws, including RCRA, the Clean Air Act, the Clean Water Act, and the Toxic Substances Control Act. Under section 120 of CERCLA, the federal government is subject to and must comply with CERCLA's requirements to the same extent as any nongovernmental entity. DOD’s cleanup under CERCLA section 120 is interrelated with its environmental restoration program under section 211 of the Superfund Amendments and Reauthorization Act of 1986. According to DOD, there are more than 200 chemicals associated with military munitions, and of these, 20 are of great concern due to their widespread use and potential environmental impact. TNT, Propanetriol trinitrate (nitroglycerin), Royal Demolition Explosive, and perchlorate are among the 20. Perchlorate is the primary oxidizer in propellants, present in varying amounts in explosives, and is highly soluble. According to EPA, an estimated 90 percent of the perchlorate produced in the United States is manufactured for use by the military and the National Aeronautics and Space Administration. Typical production quantities average several million pounds per year. Nonmilitary uses for perchlorate include fireworks, flares, fertilizer, and automobile airbags. As of 2004, EPA reported that 34 states confirmed perchlorate contamination in ground and surface water, and in states where EPA determined the source of the contamination, it attributed a significant portion to defense manufacturing and test sites. EPA has not established a federal drinking water standard for perchlorate. 
However, in 1999, EPA established a provisional reference dose for perchlorate in drinking water of between 4 and 18 parts per billion. A reference dose is an estimate of the daily exposure to a human that would not pose a significant risk of harmful effects. In October 2003, the National Academy of Sciences (Academy) began a study of the best scientific model to use for determining a drinking water standard or reference dose for perchlorate, if any. According to EPA, the Academy’s study will take about one year to complete. Based on recommendations from the Academy, EPA will decide whether to regulate the contaminant and will have up to 2 years after making an affirmative determination to propose a national primary drinking water regulation for perchlorate. An EPA official told us that updating drinking water standards can take 2 to 3 years and predicted that a perchlorate standard will likely not be available until 2006 or 2008. In the meantime, some states that detected perchlorate in various media, such as groundwater, have established state guidance or advisory levels for the contaminant. As of February 2004, seven states have established interim perchlorate advisory levels. Of those states, Maryland and Massachusetts have the lowest perchlorate advisory level of 1 part per billion. On March 12, 2004, California revised its advisory action level for perchlorate from 4 parts per billion to 6 parts per billion. DOD’s estimate that it would cost between $16 billion and $165 billion to clean up unexploded ordnance, discarded military munitions, and munitions constituents on operational ranges is questionable. To determine the costs of operational range clean up, DOD had to first inventory its operational ranges and obtain data such as the type of range and munitions used. However, the military services used inventory data that were collected for different purposes over different periods of time and verified with varying degrees of analytical rigor. 
Next, the costs of operational range cleanup were calculated using a mix of unvalidated assumptions provided by DOD and assumptions provided by the individual services, as well as actual service data, where available. Consequently, DOD’s overall cost estimates were based on assumptions, estimates, and actual data that differed across the services and that raise questions about the reliability of DOD’s estimated costs to clean up operational ranges. Each service inventoried its operational ranges and collected data on range acreage and munitions used, using various methodologies over different periods of time. (See table 1 for the starting and ending dates of the services’ inventories.) Services also conducted inventories for different reasons, such as to respond to pending legislation on ranges, public concern about military use of ranges, or simply to gather data to calculate cleanup cost estimates. The rigor of the analysis and the degree of the validity of the inventory results varied by service. The inconsistencies in how DOD collected and analyzed data on operational ranges raise questions about the reliability of DOD’s inventory. The Air Force inventory of operational ranges in the United States and its territories was based on a survey sent to field command levels to estimate costs to clean up operational ranges. Service officials said survey data was validated during on-site field inspections or, in some cases, brief desk reviews to assure surveys were complete and free of obvious errors. As of December 2002, the Air Force counted 222 active ranges, 23 inactive ranges, and 23 ranges that were not categorized as either active or inactive. Together, Air Force operational ranges covered 6,423,161 acres. The Army’s inventory of operational ranges was conducted concurrently with an inventory of nonoperational ranges and was based on field surveys.
According to Army officials, the Army initiated an inventory of its ranges primarily in response to anticipated legislation on the use of ranges, which required a comprehensive inventory of DOD ranges as well as a collection of descriptive data about each range, such as the acreage and types of munitions used on the range. The Army’s inventory was also conducted in response to DOD directives issued in August 1999 that required the services to establish and maintain an inventory of operational ranges and data on munitions and ordnance. To inventory ranges, the Army used contract support staff who requested data from field commands and installations, and then sought to validate the data through on-site visits. Army officials said the Army’s inventory of operational ranges was completed in December 2002, and encompassed 9,427 active ranges, 377 inactive ranges, and 4 ranges not designated active or inactive. In total, Army operational ranges covered 14,991,072 acres in the United States and its territories. Similar to the Army, the Marine Corps conducted an inventory of its operational ranges primarily in response to anticipated legislation on the use of ranges and DOD directives that required the services to establish and maintain an inventory of operational ranges and data on munitions and ordnance. The Marine Corps developed its inventory from an archive data search and surveys sent to installations. Headquarters’ officials reviewed the surveys to assure that submitted data agreed with data in the archive search. As of December 2002, the Marine Corps counted 216 operational ranges totaling 1,980,119 acres in the United States and its territories. According to headquarters’ officials, the Marine Corps did not distinguish between active and inactive ranges but designated all ranges as operational. The Navy’s inventory of operational ranges in the United States and its territories was conducted at the request of the Navy’s Environmental Readiness Division. 
The Navy’s inventory was prepared in response to the anticipated inventory requirements of DOD’s proposed range rule and because of increased public and regulatory scrutiny of military ranges, Navy officials said. The inventory was conducted through surveys sent to the installations. As of December 2002, Navy operational ranges totaled 121 active ranges and 31 inactive ranges on 1,284,374 acres. As of April 2003, when DOD reported its estimated cost to clean up operational ranges, DOD’s inventory included 10,444 operational ranges totaling 24.6 million acres in the United States and its territories. (See table 2 for a breakout of operational ranges by service and status and total acres.) DOD continued to inventory its operational ranges. The National Defense Authorization Act for Fiscal Year 2003 required DOD to develop a plan to address training range issues, such as range sustainment and encroachment and, as part of this plan, to develop a range inventory system that included all available operational training ranges. In January 2003, DOD provided the services with an inventory framework and data definitions to ensure reporting consistency and required the services to complete detailed inventories of all of their operational ranges. DOD revised its existing inventory of operational ranges to meet this new requirement. Because the revised inventory was conducted for different purposes, using a scope and set of assumptions that were different from the inventory data used to estimate cleanup costs, it identified a different number of operational ranges. For example, the inventory for developing the cost estimates used actual operational range acreage, whereas the revised inventory used actual and potential operational range acreage. Further, the revised inventory is divided into range complexes and individual ranges, and includes operational ranges outside the United States and its territories not included in the inventory DOD used to estimate cleanup costs. 
In February 2004, DOD released the results of its training range plan and revised inventory. The revised inventory listed 353 range complexes and 172 individual ranges on 26 million acres worldwide. These numbers differ from the inventory data DOD used to estimate cleanup costs, which counted 10,444 operational ranges on 24.6 million acres in the United States and its territories primarily because of the aggregation of individual ranges into complexes and the inclusion of ranges outside the United States and its territories. For example, under the prior inventory, Fallon Naval Air Station, in Nevada, reported it had 150,365 acres of rangeland, but under the new inventory, Fallon reported it had just 103,300 acres of actual and potential rangeland. Also under the prior inventory, the Marine Corps reported that Camp Lejeune, in North Carolina, had 95,872 acres of rangeland, while under the new inventory, Camp Lejeune reported it had 152,000 acres of actual and potential rangeland (even though the entire installation encompasses just 153,000 acres). The Marine Corps also reported that Camp Pendleton, in California, had 39,084 acres of rangeland under the old inventory, but under the new inventory, Camp Pendleton reported it had 114,000 acres of actual and potential rangeland, almost a threefold increase. While the 2003 and 2004 inventories are not readily comparable because of the varying scope and definitions used to develop the revised inventory, the difference between the two highlights the difficulty in understanding the basis for, and the results of, DOD’s cost estimates. Finally, we believe the differences in the two inventories may further complicate efforts of Congress to identify the potential liabilities that may exist if operational ranges or installations are closed and require cleanup. In 2002, DOD provided guidance to the services on how to estimate costs for cleaning up operational ranges. 
This guidance specified the scope for estimating costs but allowed for variation across the services. According to DOD officials, because the requirement to estimate cleanup costs was a one-time congressional requirement, DOD directed the services to limit their data gathering efforts by using certain costing assumptions and a computer-costing model in combination with already existing data. Examples of the scope and some of the assumptions DOD used to estimate costs include the following:

- The scope of the inventory was limited to operational ranges within the United States and its territories because DOD believed that was what Congress intended.
- The scope excluded certain operational ranges, such as water ranges, because DOD did not have a model for estimating costs associated with such ranges and did not have any significant historical experience on which to base an estimate. DOD also did not develop cost estimates for several types of airspace, such as warning areas and restricted areas.
- DOD directed the services to use both a computer-costing model that automatically assigned certain values for the cleanup costs of unexploded ordnance and discarded military munitions and an electronic worksheet to estimate costs to clean up munitions constituents.
- DOD provided cost assumptions to the services based on operational range acreage and other variables. For example, the services were directed to divide range acreage into areas assumed to have a high density of contamination and a low density of contamination and, on that basis, calculate individual cleanup costs. DOD also provided specific assumptions to calculate costs for various cleanup activities. For example, to estimate the cost to remove unexploded ordnance from a highly contaminated range area, the services were told to assume they would need to remove ordnance from 50 percent of that area to calculate the high cost estimate and 5 percent of that area to calculate the low cost estimate.
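To illustrate the arithmetic behind the last of these assumptions, the sketch below applies the 50 percent and 5 percent removal assumptions to a hypothetical highly contaminated area. The function name, acreage, and per-acre rate are illustrative assumptions for this report's discussion, not values or logic from DOD's actual costing model.

```python
def high_density_estimates(high_density_acres, cost_per_acre):
    """Apply the stated assumptions for a highly contaminated range area:
    ordnance removal from 50% of the area yields the high cost estimate,
    and removal from 5% of the area yields the low cost estimate."""
    high_estimate = high_density_acres * 0.50 * cost_per_acre
    low_estimate = high_density_acres * 0.05 * cost_per_acre
    return low_estimate, high_estimate

# Illustrative only: 1,000 high-density acres at a notional $1,000 per acre.
low, high = high_density_estimates(1_000, 1_000)
print(low, high)  # 50000.0 500000.0
```

As the sketch makes plain, the high and low estimates differ by a fixed factor of ten before any site-specific data enter the calculation, which is one reason the assumptions drive the resulting range of estimates.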
DOD said its assumptions were based on discussions with the services and developed through consensus. DOD could not provide any documentation that the assumptions it asked the services to use were validated—a confirmation of the reasonableness and justification for assumptions used—and a senior DOD official told us that, in fact, the assumptions were not validated. Furthermore, DOD instructions to the services allowed them to use additional assumptions or site-specific data, so that cost estimates were calculated based on a mix of actual data and assumptions. Based on our review of DOD's 2003 report to Congress, and discussions with service officials on their methodologies to estimate costs, we found DOD did not fully explain the mix of assumptions and data used and how this mix affected the cost estimates, which calls into question the usefulness of DOD's overall cleanup cost estimates to Congress. The inconsistencies in how the services developed their cost estimates are evident in areas such as how the services calculated high-density acreage (that is, the area of a range containing a high density of ordnance) and the costs for cleaning up these acres. For example, although DOD guidance directed the services to estimate what proportion or percentage of operational range acreage contained a high density of unexploded ordnance and munitions constituents, and specified how various types of ranges were to be treated for cost estimating purposes, each service performed this calculation differently. If site-specific data were unavailable, the Marine Corps used varying percentages based on the characteristics of similar ranges to determine those that were highly contaminated. Our analysis showed that for about two-thirds of its operational ranges, the Marine Corps assumed 10 percent of its nonsmall arms or multipurpose range acreage was highly contaminated.
However, based on a review of the Marine Corps' total cleanup cost estimates for operational ranges, we determined that the Marine Corps calculated its costs assuming that an average of 53 percent of range acreage was highly contaminated. In contrast, the Air Force and the Army used estimated data to determine that 44 percent and 60 percent of their acreage was highly contaminated, respectively. Further, the Air Force did not designate a percentage of each operational range with a high density of contamination and a low density of contamination, but rather defined each operational range as either 100 percent high density or 100 percent low density. The Navy used actual data to determine that 11 percent of its operational range acreage was highly contaminated. (Figure 4 shows the high and low density acreage by service used to estimate cleanup costs.) Based on the data provided by the services, the model calculated four totals for each operational range: a low and high estimated cost to clean up the portion of the range assumed to have a low level of contamination, and a low and high estimated cost to clean up the portion of the range assumed to be highly contaminated. Low estimates for low and high contamination areas were combined to calculate a total low estimate, and high estimates for low and high contamination areas were combined to calculate a total high estimate. (See table 3 for low and high estimates by service.) In general, using the model and standardized assumptions should have produced estimates with some variation across the services because of differing missions, operational practices, and types of munitions used. However, as reflected in table 4, a tenfold difference in the average cost to clean up an acre of highly contaminated rangeland calls into question the mix of different assumptions and data used by the services to estimate costs.
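The way the model combines its four per-range totals, as described above, can be sketched as follows. The function and the dollar figures are hypothetical illustrations of the combination step, not DOD's actual model or data.

```python
def combine_range_estimates(low_density, high_density):
    """Each argument is a (low_estimate, high_estimate) cost pair for one
    portion of an operational range. The model's total low estimate sums
    the two low figures; the total high estimate sums the two high figures."""
    total_low = low_density[0] + high_density[0]
    total_high = low_density[1] + high_density[1]
    return total_low, total_high

# Illustrative cost pairs for one range's low- and high-density portions.
totals = combine_range_estimates((10_000, 40_000), (50_000, 500_000))
print(totals)  # (60000, 540000)
```

Summing service-wide totals this way, then dividing by total acreage, yields the average cost-per-acre figures compared in table 4.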
For example, the Air Force’s average cost to clean up an acre with a high density of contamination was $755, whereas the Army’s estimate was $7,577. As a result, the services cost estimates are not comparable. (Table 4 shows the total and average cost per acre estimates by service.) DOD does not have a comprehensive policy requiring sampling or cleanup of the more than 200 chemical contaminants associated with military munitions on operational ranges. However, DOD installations have sampled for and cleaned up munitions-based constituents when directed by state regulatory authorities. With regard to perchlorate, DOD has issued sampling policies but does not provide specific funding for such sampling. Nevertheless, we found some installations have sampled and monitored for perchlorate to meet the requirements of environmental laws and regulations, such as RCRA and the Unregulated Contaminant Monitoring Regulation. During visits to six installations that reported high levels of perchlorate, we found that none were cleaning up perchlorate contamination. At six of the seven installations we visited, perchlorate contamination was largely the result of researching, manufacturing, testing, and disposing of munitions, and not the use of munitions during training. According to EPA, of the more than 200 chemicals associated with military munitions, which include 20 that DOD considers to be of greatest concern due to their widespread use and potential environmental impact, none are specifically regulated under the Safe Drinking Water Act. Further, except in some specific instances, EPA does not generally use its authority under other environmental laws, such as RCRA and CERCLA, to require DOD to conduct cleanups on operational ranges. 
An EPA official told us that although EPA is concerned with constituents associated with military munitions such as perchlorate and Royal Demolition Explosive, and the migration of plumes (pollutants that drain or flow through soil and water) from military ranges to groundwater, the agency generally does not interfere with DOD’s operation of its operational ranges. Recently, DOD proposed that Congress specifically exempt it from requirements to clean up unexploded ordnance, munitions, and munitions constituents that remain on operational ranges under RCRA and CERCLA. DOD policy does not generally require the services to clean up or sample for munitions contaminants because, according to DOD officials, these contaminants are deposited on operational ranges in the course of the normal and intended use of these munitions. Yet, DOD may be required by EPA or states to sample and clean up its munitions contaminants under various environmental laws and regulations on operational ranges. For example, under the Clean Water Act, facilities that discharge pollutants into surface water are required to obtain a NPDES permit from EPA or an authorized state agency. Several states have required some DOD installations to monitor for various contaminants associated with military munitions as part of the NPDES permit process. For example, the Regional Water Quality Control Board in San Diego and the Hampton Roads Sanitation District in Hampton Roads, Virginia, required Navy facilities to monitor their water discharges for various constituents that are on EPA’s list of toxic pollutants under the Clean Water Act. Under the Unregulated Contaminant Monitoring Regulation, EPA required some installations to sample for and report on 12 unregulated contaminants in drinking water during any 12-month period between 2001 and 2003. The list of contaminants included four munitions-related contaminants—perchlorate, 2,4 and 2,6 dinitrotoluene, and nitrobenzene. 
In April 2004, DOD reported that 36 installations had sampled for the presence of unregulated contaminants in drinking water, including perchlorate, under this regulation. Of these, 33 installations reported no perchlorate was detected or detection results were below the reporting limit of 4 parts per billion. Only three Air Force installations detected perchlorate above the reporting limit, ranging from just over 4 parts per billion to 46 parts per billion. In November 2002, DOD issued its first policy on perchlorate assessment that stated the services may sample and assess for perchlorate if there was a reasonable basis to suspect both a potential presence of perchlorate and a likely pathway that could lead to human exposure. The policy stated that the services could fund assessments using the operations and maintenance environmental compliance account, but specified that sampling should be considered a lower priority (Class II) environmental project and, as we found in a prior effort, was unlikely to be funded. Finally, the policy directed those installations that sampled for and found perchlorate to report to DOD on the location and amount of perchlorate found. On September 29, 2003, DOD issued a revised policy on perchlorate sampling that directed the services to (1) consolidate data on perchlorate detections, including data developed in response to environmental laws such as the Clean Water Act and Safe Drinking Water Act, and (2) sample any previously unexamined sites, including ranges, where a perchlorate release is suspected because of prior DOD activities and where human exposure is likely. The policy stated that the services should fund sampling using the same environmental compliance account specified in the previous policy, but elevated sampling to a higher (Class I) funding priority and thus made it more likely to be funded. However, when DOD issued its policy, funding had already been allocated to Class I requirements for fiscal year 2004. 
In future years, unless specific or additional funding is added, perchlorate sampling will have to compete with other high priority environmental requirements and may not be funded. In implementing the revised policy, the services added a third criterion requiring that installations coordinate with, or obtain written approval from, headquarters and the chain of command before sampling for perchlorate. However, if sampling is specifically required by an environmental law or state agency, the service policies do not require installations to request approval or notify headquarters before sampling. During visits we made to selected installations with reported perchlorate contamination between October 2003 and January 2004, we found installations were not sampling under the revised policy to determine the presence of perchlorate on operational ranges. More broadly, as of February 2004, Marine Corps and Navy officials said that no installations had requested permission to sample under this policy. According to the Air Force, three installations asked for permission to sample for perchlorate because EPA had asked that they sample. Air Force headquarters approved two of the requests but denied the third because, according to Air Force headquarters, there was no reason to suspect the presence of perchlorate. Four Army installations have asked for approval to sample for perchlorate, and Army headquarters approved all four as of March 2004, an Army headquarters official said. Overall, this suggests that little sampling is being done under DOD's revised perchlorate policy. Although none of the installations had begun sampling under DOD's revised policy, during our visits we found a few installations had sampled and monitored for perchlorate to meet the requirements of certain environmental laws and regulations. Table 5 summarizes the perchlorate sampling that has been conducted at installations we visited as reported by DOD as of April 2004.
During our visits, we found the following installations had sampled for and monitored perchlorate to meet the requirements of RCRA or the Safe Drinking Water Act: In 1999, as part of an application under RCRA to close an open burning and detonation facility used to destroy excess and obsolete ammunition, the state of New Mexico required White Sands Missile Range, in New Mexico, to sample for contaminants, including perchlorate. The former open burning and detonation facility is located on an operational range. Groundwater sampling detected high levels of perchlorate—up to 25,000 parts per billion. The Army installed 56 monitoring wells on the range to map the plume. Each well is sampled quarterly. After four years of quarterly sampling and monitoring, Army officials said the plume is stable and contained, which means it is isolated underground and not expected to move. Further, officials said there is no indication that perchlorate has migrated outside the identified plume. Under its RCRA closure permit with the state of New Mexico, the Army must continue monitoring the groundwater for up to 20 years. Three of the seven installations we visited tested for perchlorate under the Safe Drinking Water Act’s Unregulated Contaminant Monitoring Regulation program. Edwards Air Force Base, in California, sampled twice in 2002 and reported that none of the 12 chemicals listed on the EPA list of unregulated contaminants, including perchlorate, were detected in any of the groundwater samples collected from drinking water wells. (Under the regulation, EPA required surface water systems to be sampled quarterly and groundwater systems to be sampled semiannually for one consecutive 12-month period.) Redstone Arsenal, in Alabama, sampled quarterly for a 12-month period beginning June 2001. Two water intake sites were sampled (a drinking water source and a drinking water and industrial water source), both along the Tennessee River. 
Redstone Arsenal reported that perchlorate was not detected above the EPA sampling level of 4 parts per billion. Nearby Huntsville, Alabama, also sampled for perchlorate and detected no contamination, a Redstone Arsenal official said. Finally, although the requirements of the Unregulated Contaminant Monitoring Regulation did not apply to the Naval Air Weapons Station, China Lake, in California, because its water supply system was too small, installation officials volunteered to sample for perchlorate and other unregulated contaminants. Accordingly, in October 2003, officials at China Lake sampled 10 drinking water wells for perchlorate and other contaminants, but perchlorate was not detected. According to information provided by DOD and officials at the installations we visited, the services were generally not cleaning up known perchlorate contamination. DOD officials explained that perchlorate is not a regulated contaminant and, therefore, there is no requirement to clean up perchlorate contamination. (Current DOD policy is that DOD will clean up perchlorate if there is imminent and substantial endangerment to the public.) The exceptions we found were two installations that had cleaned up perchlorate under demonstration projects designed to demonstrate perchlorate cleanup technologies. At the installations we visited, perchlorate contamination was generally the result of research, manufacturing, testing, and disposal of munitions (such as rocket motors) that contained high levels of perchlorate. In one case, the perchlorate resulted from training with smoke munitions containing perchlorate. Although six of the seven sites we visited reported high levels of perchlorate contamination, none of these installations were conducting cleanup actions specifically directed at perchlorate. 
However, at two installations we visited—Edwards Air Force Base and the Naval Surface Warfare Center, Indian Head—officials said they conducted demonstration projects to develop perchlorate treatment and cleanup technologies in anticipation of future cleanup requirements. (See app. III for details on these demonstration projects.) At six of the seven installations we visited that had operational ranges and detectable levels of perchlorate, we found the perchlorate contamination was generally not due to training on operational ranges. Rather, we found that prior and ongoing research, manufacturing, testing, and disposal of rocket motors were primarily responsible for perchlorate contamination. (See app. IV for details of perchlorate contamination caused by such factors.) Only Aberdeen Proving Ground, in Maryland, reported that some perchlorate contamination was due to the use of perchlorate during training exercises on operational ranges. Further, Aberdeen was the only installation we visited where perchlorate had contaminated a neighboring municipal water supply. At Aberdeen, perchlorate concentrations of up to 5 parts per billion have been detected in drinking water supply wells and 24 parts per billion have been detected in groundwater. Between June and August 2002, Aberdeen Proving Ground sampled drinking water wells owned by the city of Aberdeen located in and along the northern border between the city and the installation, and detected perchlorate contamination in four wells ranging from 1.2 to 5 parts per billion. According to Aberdeen officials, the installation sampled for perchlorate because it was required to do so by the state of Maryland. Groundwater samples taken near the well field showed a large perchlorate plume with contamination levels up to 24 parts per billion. Aberdeen Proving Ground officials attributed the perchlorate contamination to intensive testing and training with smoke grenades and other obscurants. 
Until about mid-2002, during training exercises in the vicinity of the city of Aberdeen drinking water wells, Aberdeen Proving Ground trained troops using smoke grenades that contained perchlorate. After perchlorate was found in city drinking water, Aberdeen Proving Ground stopped all training with smoke grenades containing perchlorate. However, officials at Aberdeen Proving Ground are not cleaning up the perchlorate detected in city wells. Instead, both the city of Aberdeen and the installation sample finished water and production wells on an alternating monthly schedule: Finished water is sampled weekly, four production wells are sampled twice a month, and the remaining eight production wells are sampled monthly. Recent sampling has detected contamination below the EPA interim assessment guidance of 4 parts per billion, but in some cases, well samples have been above the Maryland Department of the Environment public health advisory for perchlorate, which is 1 part per billion for drinking water. In the event a sample is found to be above the Maryland state advisory limit, the city of Aberdeen blends well water without perchlorate with well water containing perchlorate to lower the concentration level to below 1 part per billion. The Army stated that it would not clean up the perchlorate contamination at Aberdeen until an EPA maximum contaminant level for perchlorate in drinking water is established.

Because of how DOD inventoried its operational ranges for munitions and how it estimated the costs to clean up those ranges, both the inventory and the cost estimates are questionable. Further, DOD did not fully disclose to Congress the basis and limitations of its estimates, including identifying estimates based on direct observations and those based on assumptions, and the effect of assumptions on DOD's cost estimates.
Instead, DOD provided only general information to Congress on the assumptions and cost model used without specific details on how costs were developed or the effect of assumptions used on the resulting cost estimates. Consequently, we believe it is difficult for Congress to evaluate the cost estimates DOD provided and that it may be unwise to rely on them for assessing the potential liability associated with contamination on operational ranges. Reliable cost estimates can be a critical piece of information for DOD and Congress when considering the potential costs versus benefits of closing operational ranges or entire installations. However, such estimates must be based on accurate data that, in terms of ranges, begins with a complete and accurate operational range inventory. DOD installations have conducted little or no sampling for perchlorate under DOD’s perchlorate policy, and DOD has not provided specific funding to the services to conduct the sampling that is required by its policy. Available information indicates that testing for perchlorate on installations has been limited and is specifically needed at facilities that are or were involved in research, manufacturing, testing, and disposal of munitions. DOD’s decision not to provide specific funding to the services for sampling hampers the ability of DOD, as well as EPA and the states, to collect better data on the extent and nature of possible perchlorate contamination on military installations. Such information could be important to regulators when determining if there is a potential public health risk from perchlorate and deciding what, if any, actions might be warranted. Further, the lack of such information impedes DOD and congressional efforts for planning and budgeting future cleanup that may be required if federal or state standards regulating perchlorate are adopted. 
To assist Congress, EPA, and state regulators in assessing and planning for the cleanup of contamination associated with military munitions at operational ranges, we are making the following two recommendations:

To improve congressional oversight of DOD and its operational ranges, including providing Congress with more realistic estimates of the potential liability associated with cleaning up contamination related to the use of military munitions, we recommend that DOD, using a more consistent estimating methodology, use its most complete operational range inventory to revise its cost estimates for the cleanup of operational ranges. The revised estimates should include an explanation of the basis and scope on which the inventory was conducted, and how the cost estimates were calculated. The estimates should be accompanied by a detailed description of how costs were developed, such as where estimates and assumptions were used, the basis of and rationale for any assumptions used, and an explanation as to how such assumptions affected cost figures.

To develop information needed by Congress, EPA, and the states, such as the location and amount of perchlorate contamination, when deciding what, if any, actions are warranted to address such contamination, we recommend that DOD, acting under its revised perchlorate sampling policy, provide specific funding for comprehensive sampling at sites where no prior sampling has been conducted, yet perchlorate contamination is likely and human exposure is possible based on the sites' prior or current use. To help identify possible sites of perchlorate contamination, we recommend DOD consolidate and review sampling data previously collected by installations under environmental laws governing the release or disposal of various hazardous substances.

In its May 6, 2004, letter, DOD disagreed with our findings and recommendations.
DOD also provided technical comments and clarifications that we incorporated in the report, as appropriate. DOD disagreed with our conclusion that it did not have a comprehensive policy requiring sampling or cleanup of munitions constituents on operational ranges, and that it generally has not taken actions to clean up contaminants. In its letter, DOD cited specific policies in place requiring the services to address the release of munitions constituents. However, the guidance DOD cited pertains only to the migration of munitions constituents off-range and the reporting of environmental liabilities, but does not address the sampling or cleanup of munitions constituents found on operational ranges that are the subject of this report. Further, DOD's letter states that it is responding to munitions constituents at 23 installations and ranges. We acknowledge that DOD is sampling for, and in some cases, cleaning up munitions constituents when directed to do so by EPA or a state environmental agency under various environmental laws. In reviewing the data provided by DOD, however, we found that only 2 of the installations it cited had operational ranges, and both of those were being cleaned up because of EPA direction or a court order. At the 12 other installations with active ranges, half had sampled or were sampling for munitions constituents as a result of EPA or state environmental agency requests, RCRA requirements, or cleanup associated with Superfund hazardous waste sites. None of these installations, however, were cleaning up the munitions constituents found as a result of sampling.
DOD disagreed with our assessment that its inventory data and cost estimates were questionable and said it was not necessary to revise its cost estimates because inventory data used to develop the estimates were “accurate within reason.” In its comments, DOD stated that it was not required to use validated costing assumptions, or a consistent estimating methodology, because the fiscal year 2002 National Defense Authorization Act provided that the standard for the report of liabilities did not apply to DOD’s cost estimates. Although the act allowed DOD to develop cost estimates that did not meet the same standards as required for the report of liabilities in DOD’s annual financial statement, we believe that DOD had a responsibility to provide Congress with useful information by making a reasonable attempt to prepare accurate and complete estimates, including assuring that its assumptions were valid. However, as our report sets out, the inconsistencies in how DOD collected and analyzed data on operational ranges raise questions about the reliability of DOD’s inventory. Specifically, DOD did not provide Congress with a detailed description of how the costs were prepared and an explanation of where site-specific data were used in place of assumptions, why specific and different assumptions were used by the services, and how assumptions affected the overall cost estimates. Without DOD’s supporting data and analysis, it is difficult for Congress to evaluate the accuracy or validity of the cost estimates DOD provided and it may be unwise to rely on them for assessing the potential liability associated with contamination on operational ranges. 
DOD disagreed with our recommendation that it needed to develop new cost estimates for cleaning up operational ranges, stating that it is developing a system for providing auditable data that meets the standards for the report of liabilities and is actively assessing its ranges for potential munitions constituent migration to off-range areas. However, the requirement in the National Defense Authorization Act for fiscal year 2002 was to report estimated costs to clean up operational ranges, not the costs to respond to or clean up constituent migration off-range. In its letter to GAO, DOD also questioned whether decision makers would find useful an estimated cost to clean up operational ranges and asserted that it is not required to develop cleanup estimates for operational ranges until such costs become probable and estimable by accounting standards. We disagree with DOD’s view of the information and its usefulness to Congress. Congress on two occasions asked DOD to provide just such information. Specifically, the 2002 Defense Authorization Act required that DOD report this information. In addition, the Senate Committee on Armed Services, in its report accompanying the National Defense Authorization Act for fiscal year 2000 (S. Rep. No. 106-50), directed DOD to provide to the congressional defense committees a report with a complete estimate of current and projected costs to clean up munitions constituents. In our opinion, the authorization act’s requirement and the committee’s direction provide ample evidence that congressional decision makers would find such information useful. DOD also disagreed with our recommendation that DOD provide specific funding for sampling for perchlorate. As our report points out, DOD’s current policy on perchlorate sampling designates funding for sampling as a Class I high priority environmental project. This means that perchlorate sampling is a priority for funding, along with all other high priority environmental projects. 
As a result, perchlorate sampling must compete with other high priority environmental projects for funding and, due to limited funds, may not be funded. The result is that while DOD policy has designated a funding mechanism for perchlorate, DOD's policy cannot assure that perchlorate sampling will be funded. Simply stated, if DOD wants to assure that installations conduct perchlorate sampling where appropriate, then it will need to provide specific funding for this sampling. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees, the Secretary of Defense, and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions, please call me at (202) 512-3841, or Edward Zadjura at (202) 512-3841. Key contributors to this report are listed in appendix V.

Military munitions can pose risks to public safety, human health, and the environment. Unexploded ordnance poses a risk of physical injury to those who encounter it. Military munitions may also pose a health and environmental risk, especially on ranges located in ecologically sensitive wetlands and floodplains, because their use and disposal may release constituents that may contaminate soil, groundwater, and surface water. More than 200 chemical munitions constituents are associated with ordnance and its use. When exposed to some of these constituents, humans potentially face long-term health problems, such as cancer and damage to the heart, liver, and kidneys. Of the more than 200 chemical munitions constituents associated with ordnance and its use, DOD considers 20 to be of greatest concern because of their widespread use and potential environmental impact.
The 20 munitions constituents, taken from DOD’s Fiscal Year 2002 Defense Environmental Restoration Program Annual Report to Congress, include Trinitrotoluene (TNT), Octahydro-1,3,5,7-tetranitro-1,3,5,7-tetrazocine (HMX), 4-Nitrotoluene, Hexahydro-1,3,5-trinitro-1,3,5-triazine (RDX), 1,2,3-Propanetriol trinitrate (Nitroglycerine), Pentaerythritoltetranitrate (PETN), N,2,4,6-Tetranitro-N-methylaniline (Tetryl), and White Phosphorus. While many of these compounds have been an environmental concern to DOD for more than 20 years, the current understanding of the causes, distribution, and potential effect of constituent releases into the environment remains limited. The nature of the potential effect, and whether it poses an unacceptable risk to human health and the environment, depends upon the dose, duration, and pathway of exposure, as well as the sensitivity of the exposed populations. The link between constituents and their potential health effects is not always clear and continues to be studied. Table 6 describes some of the potential health effects of five of the munitions constituents of greatest concern. You asked us to determine (1) how DOD identified the location and last active use of all operational ranges and the basis for DOD’s cost estimates for cleaning up those ranges; and (2) DOD’s policy on sampling for contaminants linked to the use of ordnance on operational ranges and, where munitions-related contaminants have been detected, what corrective actions the services have taken. Specifically, you asked us to focus on DOD’s actions with regard to perchlorate. To determine how DOD identified the location and last active use of all operational ranges, we reviewed the services’ inventory data and interviewed service headquarters officials to determine how the inventories were conducted and the reliability of the data collected. 
We assessed the reliability of the services’ data (1) by reviewing existing information about the data and the processes that produced them and (2) by interviewing DOD officials knowledgeable about the data. We determined that data on the number of operational ranges and acreage were sufficiently reliable to include in our report. Although we found the data on range characteristics to be unreliable, we present those data for informational purposes. To determine the basis for DOD’s cost estimates for cleaning up operational ranges, we reviewed the services’ estimated costs, supporting analyses, and calculations, and interviewed service and DOD officials on the scope and methodology used to develop cost estimates. To identify DOD’s policy on sampling for constituents linked to the use of ordnance on operational ranges, we reviewed DOD’s and the services’ policies related to the sampling and cleanup of potential contaminants and specifically their policies on perchlorate. We also interviewed officials at both headquarters and several installations on the implementation of DOD and service policies. To report on what actions the services have taken with regard to munitions constituents and perchlorate, we visited seven DOD installations where perchlorate had been detected and discussed what efforts have taken place, or were planned, to respond generally to munitions-related contaminants and specifically for perchlorate. We selected installations based on available data but were unable to determine the total number of installations reporting perchlorate contamination. We selected installations where generally high levels of perchlorate had been detected or, in one case, where perchlorate had contaminated a local municipal water supply. 
We also based our selection on the desire to include at least two installations from each military department and installations from different states or geographic locations in order to provide a mix of services and state agencies. (See table 7 for a listing of the installations we visited.) During our visits, where possible, we observed the areas of contamination as well as any cleanup demonstration projects under way. To identify what levels of contamination had been detected at DOD installations, we first obtained various summary schedules and lists of active and closed DOD and non-DOD sites with suspected or detected perchlorate contamination from both EPA and DOD. Because DOD has only recently begun to collect data on perchlorate, none of the listings we obtained included all installations. Further, most lists generally did not contain current data and were incomplete. Additionally, much of the data was redundant, with the same installations appearing on more than one list. Prior to selecting an installation to visit, therefore, we contacted service officials to verify that perchlorate contamination had, in fact, been detected. Our observations about perchlorate contamination and response actions at these installations are not generalizable to all military installations. Although we found that no installations were cleaning up perchlorate, two installations we visited were conducting or had conducted demonstration projects of new technologies to clean up perchlorate, in anticipation of future cleanup requirements. In May 2003, Edwards Air Force Base, in California, began a demonstration project to remove perchlorate from groundwater. Edwards officials said the installation funded the project because the Air Force, as DOD’s lead agency for perchlorate-related efforts, is expected to help develop perchlorate treatment technologies. 
Edwards first detected perchlorate on the installation in 1997, while testing for other contaminants, and has detected perchlorate at 10 sites on the installation. Perchlorate contamination of 160,000 parts per billion was detected at one site where the source of the contamination is attributed to the use of perchlorate by various research facilities beginning about 1945. On this site the Air Force constructed a well field and project treatment facility. The demonstration project uses resin beads, which act like a magnet to pull perchlorate out of the water. Four wells extract groundwater that is discharged into a storage tank and then pumped through treatment equipment containing the resin. Treated groundwater is returned to the aquifer through five injection wells. Plans are to operate the project through July 2005. Currently, Edwards officials report that perchlorate continues to be removed to nondetectable levels, or less than 1 part per billion. In 2002, the Naval Surface Warfare Center, Indian Head, in Maryland, funded a field demonstration project using naturally occurring microorganisms, or bacteria, that break down or consume perchlorate. Navy officials first became concerned about perchlorate in 1998 when they learned of widespread perchlorate contamination at DOD sites in California. At that time, the installation regularly drained perchlorate-contaminated water into ditches and two bordering rivers. In 2001, Navy officials sampled and detected a shallow and well-defined plume of perchlorate contamination located in an area where the Navy once cleaned small rocket motors using a high-pressure wash. Perchlorate levels detected in the area ranged from 8,000 to 430,000 parts per billion. On this site in early 2002, the Navy installed two extraction wells, two injection wells, and nine groundwater monitoring wells. Groundwater was removed from the site, mixed with a lactate and a carbonate/bicarbonate liquid mixture, and then reinjected into the aquifer. 
After 20 weeks, perchlorate levels were reduced by more than 95 percent in eight of the nine monitoring wells. According to Navy officials, the mixture acted as an oxidizer to stimulate microorganisms that consumed the perchlorate. Officials said they plan to reuse the equipment to field test the technology at another site in an attempt to clean soil contaminated with perchlorate. We visited installations that had operational ranges and detectable levels of perchlorate but found the perchlorate contamination was generally not due to training, but rather due to prior and ongoing research, testing, manufacturing, and disposal of rocket motors and propellant waste on operational ranges and other parts of the installations. Perchlorate contamination was detected at 10 sites on Edwards Air Force Base beginning in 1997. Officials attributed all detected perchlorate contamination to rocket propellant manufacturing, research, development, and testing, and not to the use of munitions during training. The maximum perchlorate contamination level detected was 160,000 parts per billion in groundwater at one site. Officials said no live bombs had been exploded on Edwards Air Force Base ranges since 1952, and some ranges where bombs were exploded have been closed. Perchlorate is not used as part of current range activities, and Edwards Air Force Base does not test for contaminants on operational ranges, installation officials said. In March 1999, after a rainstorm, the U.S. Geological Survey sampled for perchlorate in a normally dry riverbed at Holloman Air Force Base, in New Mexico, and detected contamination of 16,000 parts per billion. During periods of rain, the river flows from the installation to the neighboring White Sands National Monument. The contamination was found on the installation near a former munitions operations site and rocket sled. Air Force officials said it was unlikely that the contamination was due to training. 
They attributed the perchlorate contamination to munitions research and testing in the 1960s and 1970s. Officials at White Sands National Monument attributed the perchlorate contamination to spent rocket motors stacked near the river and a high-speed rocket sled used to test the effects of acceleration. DOD and New Mexico’s state environmental agency sampled the riverbed and surrounding area again in 1999 and 2000 but did not find the high concentration of perchlorate previously detected. Air Force officials said they believed the 16,000 parts per billion detected in 1999 was an anomaly. At the time of our visit, officials at the Naval Surface Warfare Center, Indian Head, in Maryland, reported they detected perchlorate contamination at five sites on the installation, of which three were landfills, one was a metal parts disposal site, and one was a metal parts degreasing tank site. Indian Head detected maximum perchlorate concentrations between 88 and 450,000 parts per billion in the soil at two of the three landfills. At the third landfill, a perchlorate concentration of 2,000 parts per billion was detected in the groundwater. However, none of the contamination detected was attributed to the use of perchlorate during training exercises on operational ranges. As of March 2004, Redstone Arsenal, in Alabama, detected perchlorate contamination in the groundwater at 2 sites and in surface water and soil at 11 other sites. Redstone Arsenal officials attributed contamination to various past production, maintenance, and disposal activities at a number of sites, including at open burning areas used to incinerate waste rocket motor propellant, burning trenches used to incinerate solid material contaminated with rocket propellant, a rocket engine plant, motor degreasing and trimming areas, and a propellant waste storage area. Perchlorate contamination of about 20 parts per billion was also detected in ground and surface water outside the installation. 
The highest concentration levels detected in groundwater at Redstone Arsenal have ranged from 106,000 to 160,000 parts per billion, and the highest concentration levels detected in surface water have ranged from 377 to 1,700 parts per billion. Although Redstone Arsenal has conducted training on the installation, none of the contamination detected has been attributed to the use of perchlorate during training exercises. White Sands Missile Range, in New Mexico, detected perchlorate contamination at two sites beginning in 1999. At one site, sampling detected perchlorate concentrations up to 25,000 parts per billion. Officials said they were unsure of the precise cause of the contamination but said they initially believed it was due to an open burning and detonation facility used in the 1950s to incinerate rocket motors. However, officials also said that the contamination plume was uphill from where the open burning and detonation site is believed to have been, so that the contamination may be generally due to previous testing of hazardous materials in the area. Perchlorate was also found at the former site of a high-energy laser system test facility. Perchlorate concentrations as high as 118 and 295 parts per billion have been detected in some wells. Officials said the contamination might be due to open ground burning of expended test items, or residue from actual tests, conducted prior to 1995. Officials said they planned to conduct more sampling to precisely identify the source of the perchlorate but did not attribute any of the perchlorate detected to training exercises on operational ranges. At the time of our visit in October 2003, the Naval Air Weapons Station, China Lake, in California, had detected perchlorate at five sites on the installation. Perchlorate contamination was predominantly found in drainage and waste disposal areas, most likely the result of research on propellants and explosives, and residue from the manufacture of propellants. 
Installation officials said that since the 1960s, thousands of pounds of perchlorate-based propellant have been stored and tested on China Lake. In July 2003, installation officials sampled and detected perchlorate concentrations of 778 and 921 parts per billion at two wells and at a drainage site. However, none of the contamination detected is attributed to the use of munitions during training exercises on operational ranges. In addition to those named above, Christine Frye, Roderick Moore, David Noguera, and Doreen Feldman made key contributions to this report. John Delicath and Amy Webbink also contributed to this report. The General Accounting Office, the audit, evaluation and investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO’s commitment to good government is reflected in its core values of accountability, integrity, and reliability. The fastest and easiest way to obtain copies of GAO documents at no cost is through the Internet. GAO’s Web site (www.gao.gov) contains abstracts and full- text files of current reports and testimony and an expanding archive of older products. The Web site features a search engine to help you locate documents using key words and phrases. You can print these documents in their entirety, including charts and other graphics. Each day, GAO issues a list of newly released reports, testimony, and correspondence. GAO posts this list, known as “Today’s Reports,” on its Web site daily. The list contains links to the full-text document files. 
To have GAO e-mail this list to you every afternoon, go to www.gao.gov and select “Subscribe to e-mail alerts” under the “Order GAO Products” heading.

For decades, the Department of Defense (DOD) has tested and fired munitions on more than 24 million acres of operational ranges. Munition constituents such as lead, trinitrotoluene (TNT), and perchlorate may cause various health effects, including cancer. Concerned about the potential cost to clean up munitions, Congress required DOD to estimate the cost to clean up its operational ranges. Congress asked GAO to determine (1) how DOD identified the location and last use of operational ranges and the basis for DOD's cost estimates for cleaning up those ranges; and (2) DOD's policy to address contaminants linked to the use of munitions on operational ranges and, where contaminants such as perchlorate have been detected, what corrective actions the military services have taken. DOD identified the location and status of its operational ranges based on inventory data developed by the individual military services. However, the reliability of DOD's inventory is questionable because the services did not use a common framework to collect and analyze data on the number of existing operational ranges. Because DOD's cost estimates to clean up its operational ranges were based on individual service calculations that combined inventory data with unvalidated DOD cost assumptions, various service assumptions, and computer-generated cost rates, these cost estimates are also questionable. Specifically, GAO found that each service compiled inventory data using various methodologies over different time periods and developed cost estimates using a mix of differing assumptions and estimates, along with actual data. As a result, the services' estimates to clean up an acre of highly contaminated land vary from about $800 for the Air Force to about $7,600 for the Army. 
DOD does not have a comprehensive policy requiring sampling or cleanup on operational ranges for the more than 200 chemicals associated with military munitions. However, when required by the Safe Drinking Water Act or other environmental laws, DOD has sampled and cleaned up munitions and munitions constituents. With regard to perchlorate, DOD has issued sampling policies but cannot assure funding is provided for such sampling. In some cases, DOD has sampled for perchlorate when required under the Safe Drinking Water Act's Unregulated Contaminant Monitoring Regulation and for other contaminants when directed by state environmental agencies. However, DOD generally has not independently taken actions specifically directed at cleaning up munitions contaminants, such as perchlorate, on operational ranges when they have been detected. |
According to our review of congressional testimonies and national studies on professional boxing dating from 1994 through 2002, 15 fundamental elements are considered important in helping address the sport’s major problems. Six elements could help to protect the health and safety of professional boxers, four could help to protect their economic interests, and five could help to correct problems affecting the integrity of the sport. The six elements that could help to protect the health and safety of boxers would provide medical examinations, including neurological testing; monitoring of training injuries; assessments of medical risks; health and life insurance; the presence of appropriate medical personnel and equipment; and enforcement of suspensions for injuries. According to the testimonies and studies, these elements are important because, although the overall rate of injury is lower in professional boxing than in many other sports, the risk of severe or permanent brain injury is greater. Neurological testing may be needed to detect such injury. Furthermore, because injuries may occur during training and sparring as well as during boxing events, monitoring during training was recommended, and health and life insurance may be needed before and after as well as during events. In some instances, the treatment a fighter receives in the initial minutes after an injury determines whether the fighter recovers or sustains permanent damage or death. Having an ambulance and qualified medical personnel on-site, rather than on call, can be critical. Enforcement of suspensions imposed by boxing commissions in other states is important to prevent injured boxers from trying to fight outside the states in which they are registered before their injuries have healed. 
The four elements that could help to protect boxers’ economic interests would require pension plans for boxers, require full disclosure of purses and payments, require minimum uniform contractual terms between boxers and promoters, and prohibit conflicts of interest. Without a union to represent their economic interests, boxers have often been exploited, and although the sport has generated enormous wealth for others, many professional boxers have been left penniless. Comprehensive pension plans for boxers are almost nonexistent, and boxers have sometimes been left to pay trainers out of their share of the fight purse when the financial responsibilities of promoters and managers were not disclosed in advance. Conflicts of interest between promoters and managers and long-term contracts with promoters have also disadvantaged boxers. The five elements that could help to correct problems affecting the integrity of the sport would require registration and training for judges, referees, and others; prevent sanctioning organizations from exercising undue influence in the selection of judges; establish uniform boxing and scoring rules; require reviews of sanctioning organizations’ rankings of boxers; and require knowledge of the sport for commission officials. Reports of unqualified officials, last-minute changes in the procedures for selecting judges, nonstandard boxing and scoring rules, fraudulent rankings that have resulted in injury and even death for weaker boxers, and political appointments to boxing commissions have undermined the integrity of the sport. Table 1 sets forth the 15 elements we identified. For more detailed information on the problems discussed in the testimonies and studies and the recommendations made to address these problems, see appendix II. The act’s provisions fully or partially cover 10 of the elements that we identified as important to address the health and safety, economic, and integrity problems facing professional boxing. 
Our analysis shows that one of the act’s provisions fully covers the element that requires evaluations of medical information on boxers and assessments of the risks involved in allowing them to fight before each match. The act’s provisions partially cover nine elements. For example, one provision partially covers the element requiring medical examinations, including neurological testing, before and after a fight. (The provision requires prefight, but not postfight, examinations and no neurological testing.) Another provision partially covers the element requiring the presence of medical personnel and equipment at fights and the filing of postfight medical reports. (It requires the presence of medical personnel and equipment, but not the filing of postfight medical reports.) Table 2 sets forth our analysis of the extent to which the act’s provisions cover the fundamental elements we identified. On March 13, 2003, the Senate Committee on Commerce, Science, and Transportation approved S. 275, a bill that would further amend the act. If enacted, the proposed legislation would expand the act’s coverage of four fundamental elements—those dealing with the evaluation of medical information, minimum contractual terms, the selection of judges, and reviews of rankings. In addition, the proposed legislation would establish the United States Boxing Administration (USBA) within the Department of Labor and empower it to consider other fundamental elements in addressing professional boxers’ health, safety, and other concerns. USBA would be responsible for providing oversight, administering the federal boxing laws, and issuing minimum standards to protect the health, safety, and general interests of professional boxers. Its responsibilities would also include licensing boxers, promoters, managers, and sanctioning organizations and maintaining a registry of medical records and medical suspension information on all boxers. 
USBA would also be authorized to conduct investigations and to suspend or revoke licenses for misconduct after providing notice and hearing. The eight state and two tribal boxing commissions that we reviewed varied in the extent to which they had documentation indicating compliance with the 10 provisions of the act related to the fundamental elements we identified. The act does not require the commissions to document their compliance. However, because documentation constituted the only verifiable evidence of compliance, we reviewed all available documentation in the commissions’ event files, including pre- and post-fight medical examination check sheets, insurance coverage forms, copies of contracts between boxers and promoters, event sheets identifying boxers’ registration numbers, promoters’ revenue reports to commissions, and statements of independence signed by ring officials. All 10 commissions had documentation indicating compliance at least 75 percent of the time for three provisions—those that require prefight medical examinations, disclosure of amounts paid to promoters, and registration of boxers—but only 2 commissions had documentation at least 75 percent of the time for the provision prohibiting conflicts of interest. (See fig. 1.) Five of the commissions said they usually complied with this provision but did not document their compliance. The 10 commissions’ documentation for the remaining six provisions varied within this range. (See table 4 in app. III for the results of our analysis of the commissions’ documentation.) When asked why they did not always document their compliance with the provisions, the commissions often did not provide a reason, but when they did, they generally pointed to privacy or liability concerns, said they were unaware of the federal provisions, or said they thought documentation was not needed. For details on the reasons the commissions provided for not documenting compliance, see appendix III. 
The eight states and two tribes that we reviewed vary in the extent to which their provisions cover health and safety and economic elements in addition to those covered in the act. Each of these states and tribes has some provisions that cover additional fundamental elements or portions of fundamental elements. The number of such provisions enacted by an individual commission ranges from 10 (California) to 4 (Missouri). All 10 states and tribes have provisions fully covering the additional element that requires uniform boxing and scoring rules, and eight states or tribes have provisions fully covering the additional element that requires the filing of postfight medical reports. California was the only state with provisions fully covering three other additional elements—for monitoring injuries sustained during training, enforcing suspensions for debilitating training injuries, and providing pension plans for boxers. Four states or tribes have provisions that go beyond the act in requiring postfight medical examinations, but none of these states or tribes requires neurological testing. Similarly, three states or tribes have provisions that go beyond the act in requiring that boxers be provided with health insurance before and after, as well as during, each match, but none of these states or tribes requires life insurance. Figure 2 summarizes the results of our analysis. The primary reason provided by the states and tribes for not having provisions covering additional elements was that the provisions would be too costly to implement. For more details, see appendix IV. Actions taken by the Department of Justice under the act have been limited. Justice officials said the department does not prosecute cases unless they are referred to it by federal law enforcement agencies. There were no records of cases brought by U.S. Attorneys under the federal boxing legislation during fiscal years 1996 through 2002, and there were no referrals from law enforcement agencies. 
Because the act provides for state and civil remedies in addition to federal criminal prosecution, Justice officials said that cases could be referred to state authorities rather than to U.S. Attorneys. Furthermore, the officials said, violations of the act are misdemeanors, and U.S. Attorneys generally pursue only felony cases, although they would prosecute a misdemeanor if circumstances warranted. In commenting on a draft of this report, the president of ABC said that ABC had made two referrals to U.S. Attorneys’ offices. The first, made in October 2002, concerned the World Boxing Association’s ratings of a boxer. According to the ABC president, the referral was dismissed because the World Boxing Association provided the U.S. Attorney with a copy of its rating criteria and the boxers were well known. The ABC president said that the other referral, made to the Arkansas U.S. Attorney in 2001, reported that professional boxing was occurring in bars without the supervision of the Arkansas boxing commission. The ABC president said that ABC had not received a response to the referral and the case had not been prosecuted. The Federal Trade Commission’s (FTC) responsibility under the act is limited to making available to the public the information it receives from sanctioning organizations. FTC has no responsibility for enforcing compliance or verifying the accuracy of the information. FTC officials said they periodically check the sanctioning organizations’ Web sites to assess whether the required information has been made available to the public and have found the Web sites to be adequate. Our review of the Web sites of 14 sanctioning organizations found that this information was posted on the Internet. FTC officials also said they had not received any consumer complaints related to the boxing industry. 
In February 2003, legislation was introduced in the Senate that would amend the act by, among other things, creating a new organization within the Department of Labor to provide oversight and enforcement of the federal boxing laws. The purpose of this new federal organization is to facilitate more uniform enforcement of federal requirements designed to enhance boxers’ health, safety, and economic interests as well as the integrity of the sport. This organization would have the authority to issue regulations, including requirements for documentation; to monitor and oversee the commissions’ compliance with the existing federal protections for professional boxers; and to establish additional protections, if necessary. Although our review was limited to eight state and two tribal boxing commissions, the uneven documentation of compliance we found with the act’s provisions to protect the health, safety, and economic well-being of professional boxers does not provide adequate assurance that professional boxers are receiving the minimum protections established in federal law. Without complete and accurate information on the extent to which the act is being enforced and without a federal agency to proactively ensure nationwide compliance, there is little assurance of compliance. While the Justice Department has the authority to prosecute violations of the act, it focuses its limited resources on prosecuting felonies, is not responsible for monitoring compliance, and would prosecute a case only if it received a referral from a federal law enforcement agency. Since 1996, it has received no referrals from federal law enforcement agencies and pursued no cases of violation of the act. If enacted, the legislation would create a new organization within the Department of Labor that could address this gap in the oversight and enforcement of the federal boxing laws. 
We requested comments on a draft of this report from the Department of Justice, the Federal Trade Commission, and the Association of Boxing Commissions (ABC). The Department of Justice’s GAO liaison and the Federal Trade Commission’s GAO liaison and Office of General Counsel provided only oral technical comments, which we incorporated as appropriate. The president of ABC provided written comments, which are reproduced in appendix VI. We also provided the boxing commissions of the eight states and two tribes that we reviewed with the opportunity to review and comment on the facts in the report that related to their operations. We received written comments from the Missouri, Miccosukee, Mohegan Sun, Pennsylvania, and Texas boxing commissions; these comments appear in appendixes VII through XI. As of July 16, 2003, we had received no comments from the California, Florida, Indiana, Michigan, and Nevada boxing commissions. In his written comments, provided on June 30, 2003, the president of ABC said that while ABC has had some successes, much work needs to be done to achieve uniformity in the regulation of boxing. He said that feedback from ABC’s membership on federal involvement in regulating professional boxing is mixed: many members regard such involvement as intervention, while others welcome it. He also said that some members believe that making certain types of testing (e.g., neurological testing) mandatory would have a negative impact on their jurisdictions because of the cost. According to the president, ABC is frustrated with the lack of enforcement of the Professional Boxing Safety Act of 1996. He said that violations of the act occur frequently, yet no government agency has been willing to enforce the current laws. The president said that he hopes the members can use the act’s 10 provisions as a starting point for standardizing the regulation of boxing. 
The Administrator of the Missouri Office of Athletics, who is also the president of ABC, provided written comments on the portion of a draft of this report applicable to Missouri on June 30, 2003 (see app. VII). While noting that the Missouri Office of Athletics encourages the standardized regulation of boxing, he said he also recognizes that any actions taken will have an economic impact on the sport that will have to be considered. In addition, he questioned who would enforce any new federal boxing provisions and stated that the current law is not being enforced. He said that both state and tribal boxing commissions, through ABC, should work to standardize the regulation of boxing in the areas discussed in our report. He also made some technical comments, which we incorporated in the body of the report. The Miccosukee, Mohegan Sun, Pennsylvania, and Texas boxing commissions also provided written comments, which appear in appendixes VIII, IX, X, and XI, respectively. In their comments, they expressed appreciation of our work, indicating, for example, that our report helps to clarify issues related to the protection of boxers’ health, safety, and economic interests. In addition, the Miccosukee and Pennsylvania boxing commissions cited tribal or state regulations that cover portions of some of the 15 elements we identified in the report as fundamental to protecting boxers’ health, safety, and economic interests and to enhancing the integrity of the sport. In some instances, the Miccosukee and Pennsylvania boxing commissions noted that it would be difficult for them to implement certain elements because of personnel and budgetary constraints or because of their limited jurisdiction.
For example, the Miccosukee commission said that it could not monitor training injuries because it would not be feasible for the Miccosukee commission or any other boxing commission to send representatives to gyms throughout the United States and other countries to monitor training injuries in real time. The Miccosukee commission also indicated that in the future it could complete and file checklists in event files to document its compliance with certain provisions, such as the one requiring the presence of appropriate medical personnel and equipment during and after events. The commission said that the lack of documentation in its files does not adequately reflect its compliance with this provision. The Pennsylvania commission noted the diversity among various boxing commissions in implementing the federal law. Finally, the Texas commission said it lacked authority to implement several of the 15 fundamental elements identified in the report. We recognize that boxing commissions vary in their approach to regulating boxing because of differences in their laws or regulations, local situations, and available budgetary and personnel resources. Furthermore, we recognize in our report that a lack of documentation does not necessarily mean that a requirement was not met. However, we had no other practical means to assess the extent to which the federal requirements were being addressed. Additionally, we agree with the Miccosukee commission that appropriately completed checklists would help to document compliance. Finally, we believe that our findings, along with the comments we received on our draft report, should provide Congress with useful information as it considers S. 275. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 5 days after the date of this letter.
At that time, we will provide copies of the report to the Ranking Minority Member, Senate Committee on Commerce, Science, and Transportation, and the Chairman and Ranking Minority Member, House Committee on Energy and Commerce. Copies of the report will also be sent to the Attorney General, the Chairman of the Federal Trade Commission, the Secretary of Labor, the Association of Boxing Commissions, the California State Athletic Commission, the Florida State Athletic Commission, the Indiana Boxing Commission, the Michigan Bureau of Commercial Services, the Missouri Office of Athletics, the Nevada Athletic Commission, the Pennsylvania Athletic Commission, the Texas Boxing and Wrestling Program, the Miccosukee Athletic Commission, and the Mohegan Tribal Gaming Commission Athletic Unit, and to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Key contributors to this report are listed in appendix XII. If you or your staff have any questions, please contact me on (202) 512-2834 or [email protected].

[Table of state and Indian tribal boxing commissions omitted; entries include Miccosukee (FL), Mohegan Sun (CT), Mashantucket Pequot (CT), Washington, D.C., Saginaw Chippewa (MI), Pueblo de San Juan (NM), Oneida (NY), and Yakama Nation (WA).]

For each of the 15 fundamental elements that we identified, this appendix provides a summary of a major problem in professional boxing that the element is designed to address. The summaries are based on the congressional testimony and national studies—by the National Association of Attorneys General (NAAG) Task Force, the Department of Health and Human Services, and the Department of Labor—that we reviewed. The summaries also include recommendations made at the hearings and in the studies to address the problems. The problems are divided into three categories: health and safety, economic protection, and integrity of the sport. In June 1998, the U.S.
Department of Health and Human Services reported the results of a study mandated by Congress on health, safety, and equipment standards for boxing. The study found that although the overall rate of injury is lower in professional boxing than in many other sports, the risk of sustaining a severe or permanent brain injury is greater in boxing because fighters are exposed to repeated blows to the head. Head injuries account for a significant portion of all boxing injuries. Factors such as poor boxing ability, reduced supervision, and small stature are thought to increase the likelihood of traumatic head injury. Similarly, the length of a boxer’s career and the total number of bouts in training, sparring, and competition combined have been linked to the severity of neurological damage. Because neurological damage is not always detected during routine medical examinations, neurological testing may be necessary to identify it. According to a professional boxing trainer with over 25 years of experience whom we interviewed, boxers are required to train and spar in the gym daily for months in preparation for a fight. He said that during the sparring sessions, many boxers sustain injuries that are not reported to the boxing commissions. As a result, some of the boxers participate in events with pre-existing injuries, exposing themselves to further injury or harm. In an effort to protect the health and safety of professional boxers, the NAAG task force recommended in 2000 that state inspectors inspect boxing gyms if adequate funding and staff are available. To help protect the health and safety of boxers, the NAAG task force recommended that all commissions implement a medical classification system that would establish risk levels for boxing injuries.
For a fighter whose record included any element of a high-risk classification (e.g., repeated knockouts), the task force further recommended that commissions be required to impose a temporary suspension until the fighter received a medical clearance or required examination, such as a neurological examination conducted by a neurologist using magnetic resonance imaging and an electrocardiogram. In 2001, a representative of the Nevada Attorney General testified that although common sense dictates that an on-site ambulance is needed for all boxing matches and should be available to transport an injured boxer to a hospital, many promoters would prefer to call 911 if an ambulance is needed. The representative said that while this arrangement may be more cost-effective for the promoter, the treatment of a fighter in the initial minutes after an injury—whether waiting for an ambulance to arrive or receiving immediate and appropriate medical care—is critical in determining whether the fighter will recover or suffer permanent damage or death. Similarly, in 1983, the World Medical Association said that professional boxing events should be held in locations where adequate neurosurgical facilities are immediately available for emergency treatment of an injured boxer, a portable resuscitator with oxygen equipment and appropriate endotracheal tubes are available at ringside, and an ambulance is continuously on-site to transport any seriously injured boxer to a hospital immediately. The Professional Boxing Safety Act of 1996, as amended (the act), requires the continuous presence of a ringside physician and an ambulance or medical personnel with appropriate resuscitation equipment at each boxing event, unless equivalent protection is required by the boxing commission’s provisions.
A Pennsylvania Athletic Commission official testified in May 2001 that boxing commissions should be required to develop criteria for licensing professional boxers, which should include reviews of boxers’ fight records (i.e., wins, losses, knockouts) and suspensions, and a centralized database of medical examination information on all licensed boxers. He said that the database should be accessible only to boxing commission officials and would provide boxing commissions with an additional screening mechanism to use in their license determination process. The president of the Association of Boxing Commissions (ABC) told us that current insurance provisions require promoters to provide health insurance coverage only during a boxing event. However, he said such coverage does not protect boxers in other instances when they may need medical treatment but do not have health insurance or the financial resources to pay for treatment. For example, boxers may sustain injuries during an event but not recognize until later that they have been injured and need treatment. Many boxers also sustain injuries during training or sparring. In 1996, a New Jersey Boxing Commission representative testified that boxers spend far more time sparring in gyms than competing in events; as a result, they are more likely to sustain injuries during this period. The representative said that to prevent injuries to the head and other parts of the body, the amount and intensity of sparring should be monitored. A trainer we interviewed said that boxers should have health and life insurance coverage throughout the training period, as well as before, after, and during an event, in order to address any medical conditions or injuries. However, he said many insurance companies do not offer boxers health and life insurance at affordable prices. 
In 2002, the president of ABC testified before the Senate Committee on Commerce, Science, and Transportation on the need for the uniform enforcement of all suspensions imposed by boxing commissions. Currently, such enforcement is applicable only to suspensions imposed on boxers for recent knockouts or for a series of consecutive losses and medical reasons. The president said that in some instances, commissions have suspended boxers for falsifying documents or other types of inappropriate behavior and that to avoid serving the imposed suspensions, some boxers have traveled to other states and obtained a license to continue boxing. Professional boxing offers no long-term financial protection for its participants, although the big events generate purses in the millions of dollars and are televised worldwide, often on a pay-per-view basis. The New York State Attorney General testified in 1999 that the boxing industry has generated enormous wealth for virtually everyone except professional boxers. He added that over the decades, the interests of professional boxers have been ignored, leaving many penniless and medically at risk. In 1996, Congress mandated that the Secretary of Labor undertake a study on the feasibility of establishing a pension plan for professional boxers. According to the study, apart from programs run by the California commission and by the International Boxing Federation for its championship fights, comprehensive pensions for boxing are virtually nonexistent. The study concluded that a comprehensive program, if implemented for professional boxers, would consist of a charitable trust, a defined contribution plan, a defined benefit plan, and a disability income and survivor’s benefit program. In 1997, a reporter testified that in 1986 a boxer was guaranteed $300,000, with up to another $100,000 in training fees, for a fight. Out of a potential $400,000, the boxer was paid about $99,000.
The manager did not pay the trainer and the boxer paid the trainer out of his share of the purse, leaving the boxer with $69,000. To address problems such as this, the NAAG task force study recommended that a model contract be developed to outline contractual disclosure requirements between the promoters, managers, and boxers. The model contract should specify the rights and responsibilities of all parties, such as the contest requirements, compensation (including a full accounting and disclosure of all deductions from a boxer’s purse), licenses, and remedies for lack of good faith, collusion, or breach of contract, including arbitration provisions. A Texas Boxing and Wrestling Program official cited reports of fights in which a manager managed both boxers and the manager and promoter were related. Such business arrangements limited the boxers’ chances of receiving fair payment. The official said that in theory, a manager is supposed to negotiate the most favorable economic terms for the fighter, while the promoter is supposed to make the largest possible profit on the event. There are frequent reports of boxers’ economic exploitation. For example, in January 2003, officials of the Mohegan Tribe Department of Athletic Regulations reported that a boxer had been fighting for more than a year and had never received payment for participating in events throughout the United States, although the manager was receiving the boxer’s fight purses. For this violation, the commission revoked the manager’s license for an indefinite period. In 2001, a Pennsylvania Athletic Commission official said that for years fighters have been contractually tied to promoters for a series of boxing events, limiting their ability and opportunities to pursue other promoters and to box in other events. The official said that the Muhammad Ali Boxing Reform Act, which limits the contracts between the boxer and promoter to 1 year, is a step in the right direction to correct this problem. 
A Nevada State Athletic Commission official testified in 1994 that boxing referees have to decide in a split second which fighter has won a bout. Accordingly, he said, judges should have the ability to closely observe the fighters and base their decisions on consistent scoring criteria. The NAAG task force made recommendations to help enhance the integrity of the sport, including the following:

- ABC should develop a standardized testing program to be administered to judges and referees. Judges and referees should be required to pass this examination before they receive their licenses.
- To be licensed as a referee, an individual should have prior experience officiating in amateur competition or in other states or jurisdictions. All referees should be required to receive training and attend a minimum of two medical training seminars each year.
- To be licensed as a judge, an applicant should be proficient in the rules and regulations of boxing and have prior experience officiating in amateur competition or in other ABC states or jurisdictions.
- To be licensed as a ringside physician, a physician should have a state medical license, be in good standing in the respective state, and have experience as a licensed physician for a minimum of 2 years. Ringside physicians should be required to receive training in ringside medicine.
- Promoters and managers should be licensed and regulated.

The president of ABC testified in 2002 on the need for standards to prevent sanctioning organizations from interfering with boxing commissions’ selection of judges and referees. According to the president, that need was demonstrated during a nationally televised championship fight in 2001. He said that several weeks before the scheduled event, the sanctioning organization and the state boxing commission agreed that the sanctioning organization would designate the referee and one judge and the commission would designate the remaining two judges.
However, less than 5 minutes before the event was to begin, a representative from the sanctioning organization threatened to withdraw the organization’s sanction—an action that would reduce the status of the fight to a nontitled event—if the commission did not agree to replace one of the judges selected by the commission with a judge designated by the sanctioning organization. The commission agreed to the sanctioning organization’s demands in order to retain the title status of the fight. Because the sanctioning organization was allowed to select two of the three judges, the president of ABC said the outcome of the event might have been compromised. In June 1996, a former Nevada Athletic Commission official testified that every boxing match in the United States should be conducted under the same boxing and scoring rules. While noting that ABC has established Unified Championship Rules for title bouts, he said that some commissions do not implement the same rules. The official said that standardizing boxing and scoring rules is important because fighters can have difficulty concentrating on protecting themselves in the ring when they are trying to remember which rules a particular state uses. Similarly, it is difficult for referees to focus on a bout if they are worrying about changes in the rules for different bouts. The NAAG task force study reported that sanctioning organizations’ rankings often are not based on objective assessments of talent or records of fighters’ wins and losses. Instead, according to the study, boxers associated with certain promoters may be highly ranked regardless of their skill and ability. The study reported that this creates fraud that can have deadly consequences. For example, a fight advertised as a major championship battle may turn out to be a mismatch, as was a bout held on November 13, 1982, between Ray “Boom-Boom” Mancini and Duk Koo Kim of South Korea. Mancini knocked out Kim, who never regained consciousness and died.
The World Boxing Association had rated Kim as a top contender, even though he was not among Korea’s top 40 fighters. In 2002, an entertainment manager testified that state boxing commissions are generally underfunded and dominated by political appointees with limited knowledge of the sport. He said that many of these officials do not understand the boxing industry well enough to regulate it. This appendix presents the results of our analysis of the eight state and two tribal boxing commissions’ documentation of compliance with the act’s provisions and provides information on the reasons given by the commissions for not having documentation. Figure 3 summarizes the results of our analysis of the commissions’ documentation. The remainder of the appendix provides information on the extent to which the 10 boxing commissions had documentation indicating compliance with each of the act’s provisions related to a fundamental element. For the commissions that did not have or did not provide documentation for our review, the appendix also includes the reasons given by the commissions for not having or providing the documentation. When a reason is not specified, the commission did not provide a reason. Four of the 10 state and tribal boxing commissions (California, Indiana, Michigan, and Missouri) provided us with documentation of compliance less than 50 percent of the time for the act’s provision requiring the evaluation of medical information before each match and the assessment of the risks involved in allowing a boxer to fight. The Missouri boxing commission said that it does not collect and maintain medical information because state law concerning confidentiality, disclosure, and civil liability issues prohibited it from doing so. The Michigan boxing commission said that it was advised by its legal counsel to limit the amount of medical information collected due to the commission’s limited authority to collect and protect such information.
The Indiana Boxing Commission said it maintained medical information on professional boxers, but would not provide that information for review because of confidentiality and civil liability concerns. A Texas official said that the Texas Boxing and Wrestling Program used prelicense and prefight examinations, along with information obtained from Fight Fax, Incorporated, detailing a boxer’s record of wins and losses and medical suspensions, to assess the risks involved in allowing a boxer to fight before each match. This official added that when reported information indicated that a boxer’s physical condition was questionable, the commission might require the boxer to undergo additional medical tests to ensure that he or she was not participating in an event with a pre-existing injury. According to the official, the Texas Boxing and Wrestling Program does not disclose medical information it maintains on boxers to other commissions because of confidentiality and civil liability concerns. The California State Athletic Commission said it maintained medical information, such as the results of annual physicals and any neurological tests, on professional boxers registered in California, but it did not make this information available for review during our visit to the commission. Seven of the 10 state or tribal boxing commissions (Florida, Michigan, Nevada, Pennsylvania, Texas, Miccosukee, and Mohegan Sun) had documentation at least 75 percent of the time for the provision requiring minimum uniform contractual terms between boxers and promoters; the Indiana Commission had documentation 50 to 74 percent of the time; and the California and Missouri commissions had documentation less than 50 percent of the time. The Director of the Indiana Boxing Commission said that the commission’s representatives were responsible for obtaining copies of all contracts between promoters and boxers before an event and for ensuring that boxers were paid in accordance with the contractual terms. 
However, contracts between the promoters and boxers were missing from most of the commission’s event files. The director said that in some cases boxers forgot to forward their bout agreements to the commission after the matches. The Executive Officer of the California State Athletic Commission said that the contractual agreements between boxers and promoters were submitted to the commission before events and no events were held unless copies of the agreements were on file. However, many of the 2001 event files that we reviewed had no documentation of contractual agreements between boxers and promoters. No reason was given for the missing contracts. According to a Missouri Office of Athletics official, its legal counsel advised the commission against requiring boxing contracts because such agreements involved civil matters that were outside the jurisdiction of the Missouri Office of Athletics. Nine of the state and tribal boxing commissions (Florida, Indiana, Michigan, Missouri, Nevada, Pennsylvania, Texas, Miccosukee, and Mohegan Sun) that we reviewed had documentation at least 75 percent of the time for the provision requiring standards for rating boxers, considering their records of wins and losses, weight differentials, caliber of opponents, and numbers of past fights, to protect against mismatches. The California State Athletic Commission was the only commission we reviewed that lacked documentation for this provision. According to the Executive Officer, the commission reviewed the reports of Fight Fax, Incorporated, and the commission’s chief inspector determined whether boxers were matched in accordance with their boxing skill levels, but the commission did not maintain any records on this process. All 10 of the state and tribal boxing commissions had documentation at least 75 percent of the time for the provision requiring medical examinations before fights.
Six of the 10 state and tribal boxing commissions (California, Indiana, Missouri, Nevada, Pennsylvania, and Miccosukee) had documentation less than 50 percent of the time for the provision requiring the presence of appropriate medical personnel and equipment during and after each match. Officials from these 6 commissions said that no fight would proceed without emergency medical service and an ambulance on-site during events, but they did not document their compliance with this requirement. Furthermore, in commenting on a draft of this report, the Administrator of the Missouri Office of Athletics noted that the act does not require such documentation. The Florida, Michigan, Texas, and Mohegan Sun boxing commissions documented the presence of emergency medical personnel and equipment during the events at least 75 percent of the time. Three of the 10 state and tribal boxing commissions (Florida, Indiana, and Michigan) lacked documentation at least 75 percent of the time for the provision requiring health insurance for boxers during matches. The Florida State Athletic Commission said that it had not documented boxers’ health insurance because of a clerical mistake. The Director of the Indiana Boxing Commission said that many of the commission’s 2001 event files were missing documentation of health insurance coverage because in Indiana, a majority of the professional boxing events were organized by the same promoters, who usually secured an annual policy covering all of the events for the year. We asked the official for documentation of health insurance coverage for the events whose files were missing such documentation. However, this documentation was not made available during our review. Three of the 10 state and tribal boxing commissions (California, Michigan, and Nevada) had documentation less than 50 percent of the time indicating that they had enforced suspensions of boxers imposed by other commissions. 
Officials from these 3 commissions said that before approving fights, they reviewed the suspension information received from Fight Fax and the national suspension list to ensure that boxers were not participating in events while serving suspensions imposed by other commissions. The officials added that although this information was reviewed, they did not maintain a record of the information in the event files. All 10 of the state and tribal boxing commissions we reviewed had documentation at least 75 percent of the time for the provision requiring the disclosure of all purses and amounts paid to promoters. Two of the 10 commissions (Indiana and Missouri) had documentation less than 75 percent of the time for the provision requiring the disclosure of amounts paid to judges. The Director of the Indiana Boxing Commission said that the commission verified all forms of payment before events and ensured that all payments were made immediately after the events, but the commission did not make this information available during our review. The Missouri Office of Athletics said that the commission did not always document amounts paid to judges because Missouri law did not require the disclosure of such information. The official added that the promoters usually pay the judges by check through the Missouri Office of Athletics for tax purposes. The Miccosukee Athletic Commission was the only boxing commission with documentation at least 75 percent of the time for the provision that calls for ensuring that there are no conflicts of interest for boxers and promoters. Officials from the Michigan, Missouri, and Mohegan Sun boxing commissions said they were unaware that the provision had been enacted in federal law. The Director of the Indiana Boxing Commission said Indiana had not experienced any problems with boxers and promoters relating to conflicts of interest; therefore, the commission felt documentation for this provision was unnecessary. 
The Pennsylvania Athletic Commission was the only boxing commission with documentation at least 75 percent of the time for the provision that calls for ensuring that there are no conflicts of interest for boxers and commission representatives. Officials from the Michigan, Missouri, and Mohegan Sun commissions said they were unaware that the provision had been enacted in federal law, and officials from the California, Florida, Indiana, Nevada, and Texas commissions said they did not maintain documentation for this provision because they believed these issues were addressed through discussions. All 10 of the state and tribal boxing commissions we reviewed had documentation at least 75 percent of the time for the provision requiring boxers to be registered. Two of the 10 commissions (California and Indiana) had documentation less than 50 percent of the time for the provision requiring ring officials to be certified and approved. The Executive Officer of the California State Athletic Commission said the commission documented only current registrations and had purged the 2001 data from its files. During our review, the Director of the Indiana Boxing Commission said the commission was experiencing computer problems and could not provide us with the list of ring officials certified and approved in 2001. This appendix provides information on the extent to which the 10 states and tribes that we reviewed had provisions covering health, safety, economic, and integrity elements in addition to those covered by the act. The appendix also provides the states’ and tribes’ reasons for not having provisions covering certain elements. When reasons are not specified, the commissions did not provide them. None of the 10 state and tribal commissions we reviewed had provisions requiring postfight medical examinations, including neurological testing, for all boxers who participate in events outside their own jurisdictions. 
Three of the commissions said they did not have provisions requiring postfight medical examinations or neurological testing because they did not have the financial resources to administer such requirements and it would not be feasible to require small promoters or boxers to pay for them. However, the California, Indiana, Nevada, Texas, and Pennsylvania boxing commissions said they required postfight medical examinations when a commission requested that a previously injured boxer obtain a medical release before being allowed to fight. California was the only commission that required the monitoring of injuries sustained during training before events. Five of the state and tribal boxing commissions (Indiana, Missouri, Mohegan Sun, Pennsylvania, and Texas) agreed that from a safety perspective, monitoring boxers’ gym activities was a good concept, but they said they did not have the personnel or financial resources to monitor local gym activities. The Executive Director of the Pennsylvania Athletic Commission said that Pennsylvania did not require the monitoring of gym injuries before events, but he personally visited each local gym once or twice a year to monitor gym activities. Eight of the state and tribal commissions (California, Indiana, Michigan, Missouri, Nevada, Pennsylvania, Texas, and Mohegan Sun) required the filing of postfight medical reports. The Executive Director of the Florida State Athletic Commission said Florida did not require the filing of postfight medical reports because the commission and the small promoters and boxers did not have the financial resources to pay for physicians to conduct such examinations. The official added that in many cases the small promoters struggled to pay for the physicians needed to conduct the required prefight examinations. 
The Executive Director of the Miccosukee Athletic Commission said that the commission did not require the filing of postfight medical reports; however, he said a medical referral might be given to a boxer if the ringside physician suspected that the boxer had been injured and a follow-up examination or observation was needed. None of the state and tribal commissions required that boxers be provided with health and life insurance before and after each match. Generally, the commissions required the promoters to secure health insurance during a match, as the act requires. Some of the policies provided extended coverage for medical and accidental death and dismemberment for up to 1 year following the match. Four of the commissions (Michigan, Missouri, Pennsylvania, and Texas) said that providing coverage to boxers before or between matches—that is, during training—would be too costly. They stated that it is not the commissions’ responsibility to provide coverage, since boxers are independent contractors. California was the only state that required its commission to suspend boxers for training injuries. All of the commissions agreed that suspending boxers for gym injuries was not feasible because many of the commissions were experiencing personnel and budgetary constraints and did not have the resources to monitor gym activities. California was the only state that required pension plans for professional boxers. Officials at the other nine commissions said that this was a positive initiative; however, six of the commissions (Indiana, Miccosukee, Michigan, Missouri, Pennsylvania, and Texas) questioned the contribution sources and basis for qualification. The Boxing Administrator of the Texas Boxing and Wrestling Program said that the problems associated with pension and retirement plans were similar to those attending the health insurance issue and that they were social rather than professional boxing issues. 
According to the Director of the Indiana Boxing Commission, pension plans would benefit boxers a great deal, particularly if boxers were older and nearing retirement, younger and intending to make a career of professional boxing, or injured and without an alternative source of income. The official said that problems would arise with funding, because promoters have little incentive to fund pension plans for boxers and might be unable to afford the additional expense. He said that deducting money from each boxer’s purse would also be difficult, because most boxers do not earn more than a few hundred dollars per bout. According to the Executive Director of the Pennsylvania Athletic Commission, the commission is pursuing funding for pension plans. The official added that in 1992, the commission attempted to use its budget surplus to start a trust for professional boxers; however, because of shortfalls elsewhere in the state’s budget, the funds were expended on other projects. The commission is initiating a charitable trust under ABC that has received some voluntary contributions thus far. The goal is to reach $500,000 in principal and operate the program using the account’s interest. The official said that because professional boxers are not unionized, a traditional pension fund would not be feasible. The Enforcement Division Director of the Michigan Bureau of Commercial Services said that the commission views operating a pension plan as outside the state’s role to protect the consumer. In addition, the official said, promoters operating in Michigan would not be willing to fund a pension plan. Officials from the Missouri Office of Athletics and the Miccosukee Athletic Commission supported the establishment of a pension plan; however, they questioned the feasibility of doing so, since boxers are independent contractors. 
Two of the state and tribal boxing commissions (Florida and Mohegan Sun) required full and open disclosure of all purses and costs of bouts, with the amounts paid to trainers and boxers broken out. According to the Executive Officer of the California State Athletic Commission, California has no provision requiring the disclosure of all purses and costs to trainers, but does require that the amounts paid and costs assessed to boxers be disclosed. The Enforcement Division Director of the Michigan Bureau of Commercial Services said that the commission did not have a provision requiring the disclosure of all purses and costs to trainers and boxers because the commission did not enforce any such agreements between these parties, as directed by their legal counsel. The Boxing Administrator of the Texas Boxing and Wrestling Program said that Texas only had provisions requiring the disclosure of all purses and costs to promoters and boxers. However, the Texas official said that the organization documented information on the fees that the trainers were paid from the boxer’s purse, although there was no requirement for such documentation. Eight state and tribal commissions (Florida, Indiana, Michigan, Nevada, Pennsylvania, Texas, Miccosukee, and Mohegan Sun) prohibited conflicts of interest for judges and referees. According to the Missouri Office of Athletics official, the commission has provisions for ensuring that there are no conflicts of interest for state boxing commission representatives. The official did not explain why the provisions do not address conflicts of interest for managers, judges, and referees. The Enforcement Division Director of the Michigan Bureau of Commercial Services said that a number of state officials resigned after the act established conflict of interest standards. Missouri was the only commission we reviewed with a provision requiring trainers, managers, promoters, and physicians to be registered and receive training. 
Officials from six of the commissions (California, Florida, Indiana, Miccosukee, Pennsylvania, and Texas) said that they had provisions requiring these occupations to be registered, but because of limited financial resources, the provisions governing training were applicable only to physicians and ring officials. Seven of the 10 state and tribal commissions (California, Florida, Missouri, Nevada, Pennsylvania, Texas, and Mohegan Sun) we reviewed had provisions for selecting judges and ensuring that sanctioning organizations do not influence the selection process. All 10 of the commissions we reviewed had provisions for selecting the boxing and scoring rules for events, such as ABC’s rules for championship events. Seven of the 10 commissions (California, Florida, Michigan, Nevada, Pennsylvania, Texas, and Miccosukee) had provisions that require officials who serve on boxing commissions to have knowledge of professional boxing. According to the Administrator of the Missouri Office of Athletics, the governor appoints the officials serving on the state’s boxing commission, and, as a result, some of these officials may not have a professional boxing background or knowledge. Similarly, representatives of the Mohegan Tribe appoint the officials serving on the Mohegan Tribal Gaming Commission Athletic Unit and therefore, according to the unit’s legal counsel, some of the officials may not have an extensive background in boxing. However, this has not been the commission’s experience, the counsel said. 
In analyzing the adequacy of efforts to protect the health, safety, and economic well-being of professional boxers and to enhance the integrity of the sport, our objectives were to (1) identify fundamental elements considered important to address the major health and safety, economic, and integrity problems facing professional boxing; (2) assess the extent to which the act’s provisions cover these elements and whether selected state and tribal boxing commissions have documentation indicating compliance with the act’s provisions; (3) assess the extent to which selected states and tribes have adopted provisions that cover fundamental elements in addition to those covered in the act; and (4) determine what actions the Department of Justice and the Federal Trade Commission (FTC) have taken under the act. To identify fundamental elements that are considered important to address the major health and safety, economic, and integrity problems facing professional boxing, we reviewed recent congressional testimony and studies conducted by a task force of the National Association of Attorneys General, the Department of Health and Human Services, and the Department of Labor. From these sources, which documented problems in the boxing industry and made recommendations to address them, we identified major problems facing the sport and consolidated the recommendations into 15 fundamental elements that, if implemented, could provide an adequate level of health, safety, and economic protection to boxers and help enhance the integrity of the sport. We discussed these elements with the Association of Boxing Commissions, which agreed that the elements could provide the desired protection and enhancement. To assess the extent to which the act’s provisions cover the fundamental elements we identified, we analyzed the act’s provisions and determined how many cover fundamental elements, either fully or partially. 
To assess the extent to which selected state and tribal boxing commissions have documented their compliance with the act’s provisions, we identified 8 of the 46 state boxing commissions and 2 of the 8 tribal boxing commissions for review. We looked at provisions in the act that were related to the 15 fundamental elements. The eight states are California, Florida, Indiana, Michigan, Missouri, Nevada, Pennsylvania, and Texas; the two tribes are the Mohegan Sun (Connecticut) and the Miccosukee (Florida). We selected California, Florida, Missouri, Nevada, Pennsylvania, and Texas because they held the largest number of professional boxing events in calendar year 2001, the most recent year for which complete data were available. We selected Michigan and Indiana to represent states that held a smaller number of events in calendar year 2001 than the other states selected. We selected the Miccosukee and Mohegan Sun tribes because they were the Indian tribes that held the largest number of professional boxing events in calendar year 2001. The state and tribal commissions we selected accounted for 49 percent of all professional boxing events held in the United States during calendar year 2001. At the Indiana, Michigan, Miccosukee, and Mohegan Sun commissions, we reviewed the case files for all professional boxing events held in 2001, and at the remaining commissions, we reviewed the case files for a random selection of professional boxing events held in 2001. Because we randomly selected boxing events in these states for review, our sample for each of these states is just one of many samples we could have drawn. Since each sample could have produced a different estimate, we express our confidence in the precision of the estimates for our particular samples using 95 percent confidence intervals. These are ranges within which we are confident that 95 out of 100 samples drawn from these particular events would include the true value for all the events in the state. 
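The interval computation described above can be sketched in code. This is a minimal illustration using a normal approximation with a finite-population correction for sampling without replacement; the event and file counts below are hypothetical, and GAO's actual estimation procedure may have differed.

```python
import math

def proportion_ci_95(with_docs, sample_size, total_events):
    """95 percent confidence interval for the share of case files with
    documentation, using a normal approximation and a finite-population
    correction (the sample is drawn without replacement from a fixed
    set of boxing events in one state)."""
    p = with_docs / sample_size
    fpc = math.sqrt((total_events - sample_size) / (total_events - 1))
    standard_error = math.sqrt(p * (1 - p) / sample_size) * fpc
    margin = 1.96 * standard_error  # z-value for 95 percent confidence
    return p - margin, p + margin

# Hypothetical state: 120 events held, 50 case files sampled,
# 40 of them containing the required documentation.
low, high = proportion_ci_95(40, 50, 120)
print(f"{low:.3f} to {high:.3f}")  # → 0.715 to 0.885
```

In this hypothetical state, the point estimate is 80 percent with a margin of roughly plus or minus 8.5 percentage points, comfortably inside the plus-or-minus 10 points that the report states its sample-based estimates achieved.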
All the estimates based on sample data in table 3 have 95 percent confidence intervals not exceeding plus or minus 10 percentage points, unless otherwise indicated. To present the results of our case file reviews, we divided the actual or estimated percentages of cases with documentation into three compliance categories: 75 to 100 percent, 50 to 74 percent, and below 50 percent. We did not independently verify the documented compliance, and the results of our reviews at these 10 commissions cannot be generalized to all boxing events held nationwide during 2001. The documentation that we reviewed at the selected commissions varied. Because the act does not require documentation and the commissions have no uniform record-keeping standards, we considered all types of documentation maintained and provided to us by the commissions for our review. Such documentation included pre- and post-fight medical examination check sheets, insurance coverage forms, copies of contracts between boxers and promoters, event sheets identifying boxers’ registration numbers, promoters’ revenue reports to commissions, and statements of independence signed by ring officials. To assess the extent to which selected states and tribes have adopted provisions that cover fundamental elements in addition to those covered in the act, we reviewed the boxing provisions enacted by the eight states and two tribes and identified provisions that cover fundamental elements or portions of fundamental elements that the act does not cover. We confirmed with the boxing commissions of these states and tribes that they agreed with our analysis of their provisions, and we asked these officials why their state or tribe had not enacted provisions covering additional fundamental elements. Our findings for these selected states and tribes cannot be generalized to all 46 states and eight tribes. 
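The three compliance categories described above amount to a simple threshold classification; a sketch (the function name is ours, not GAO's):

```python
def compliance_category(pct_with_documentation):
    """Map an actual or estimated documentation percentage to the
    three compliance categories used in the case file review."""
    if pct_with_documentation >= 75:
        return "75 to 100 percent"
    if pct_with_documentation >= 50:
        return "50 to 74 percent"
    return "below 50 percent"
```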
We did not assess the extent to which the states and tribes had implemented or enforced the provisions that cover additional fundamental elements. To determine what actions the Department of Justice and FTC have taken under the act, we determined the role that each is assigned under the act. We then met with Justice officials to identify whether any investigations or prosecutions had been conducted under the act in the jurisdictions of the eight state and two tribal commissions in 2001. In addition, we reviewed Justice’s central case management system for possible cases prosecuted during fiscal years 1996 through 2002. We also met with FTC officials to determine whether they had received consumer complaints related to the boxing industry. Furthermore, to determine that the sanctioning organizations were making the required information available to the public, we reviewed the Internet Web sites of 14 sanctioning organizations. We performed our work in accordance with generally accepted government auditing standards from September 2002 through July 2003 in Washington, D.C., and at the following state or tribal boxing commission locations: California State Athletic Commission, Sacramento, California; Florida State Athletic Commission, Tallahassee, Florida; Indiana Boxing Commission, Indianapolis, Indiana; Michigan Bureau of Commercial Services, Lansing, Michigan; Missouri Office of Athletics, Jefferson City, Missouri; Nevada Athletic Commission, Las Vegas, Nevada; Pennsylvania Athletic Commission, Harrisburg, Pennsylvania; Texas Boxing and Wrestling Program, Austin, Texas; Miccosukee Athletic Commission, Miami, Florida; and the Mohegan Tribal Gaming Commission Athletic Unit, Uncasville, Connecticut. The General Accounting Office, the audit, evaluation and investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. 
GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO’s commitment to good government is reflected in its core values of accountability, integrity, and reliability. The fastest and easiest way to obtain copies of GAO documents at no cost is through the Internet. GAO’s Web site (www.gao.gov) contains abstracts and full-text files of current reports and testimony and an expanding archive of older products. The Web site features a search engine to help you locate documents using key words and phrases. You can print these documents in their entirety, including charts and other graphics. Each day, GAO issues a list of newly released reports, testimony, and correspondence. GAO posts this list, known as “Today’s Reports,” on its Web site daily. The list contains links to the full-text document files. To have GAO e-mail this list to you every afternoon, go to www.gao.gov and select “Subscribe to e-mail alerts” under the “Order GAO Products” heading.

The Professional Boxing Safety Act of 1996 established minimum health and safety standards for professional boxing and provided for limited federal oversight by the Department of Justice and the Federal Trade Commission. In 2000, the Muhammad Ali Boxing Reform Act amended the act to better protect boxers' economic well-being and enhance the integrity of the sport. However, reports of problems continue, including permanent and sometimes fatal injuries, economic exploitation, and corruption. 
GAO was asked to (1) identify fundamental elements considered important to protect professional boxers and enhance the integrity of the sport; (2) assess the extent to which provisions of the Professional Boxing Safety Act of 1996, as amended (the act), cover these elements and determine whether selected state and tribal boxing commissions have documentation indicating compliance with the act's provisions; (3) determine whether selected states and tribes have provisions that cover additional elements; and (4) identify federal actions taken under the act. Based on GAO's review of congressional testimonies and national studies dating from 1994 through 2002, GAO identified 15 fundamental elements that are considered important to protect boxers' health, safety, and economic well-being and to enhance the integrity of the sport. The act addresses 10 of the 15 fundamental elements that GAO identified. The 8 (of 46) state and 2 (of 8) tribal boxing commissions that GAO selected for review accounted for 49 percent of the fights in 2001 and varied in the extent to which they had documentation indicating compliance with the 10 provisions of the act related to the fundamental elements. For example, all 10 commissions had documentation indicating compliance at least 75 percent of the time for 3 provisions--requiring prefight medical exams, disclosure of purses and payments, and registration of boxers--but only 2 commissions had documentation indicating compliance at least 75 percent of the time for a provision prohibiting conflicts of interest. Commissions either gave no reason for the lack of documentation, cited privacy or liability concerns, or said they were unaware of the federal provision. The eight states and two tribes that GAO reviewed vary in the extent to which they adopted additional provisions that cover elements not covered by the act's provisions. The number of such provisions ranges from 10 (California) to 4 (Missouri). 
For example, the states have provisions requiring the filing of postfight medical reports, uniform boxing and scoring rules, and boxing commission officials' knowledge of the sport. Federal actions taken under the act have been limited. The Department of Justice said it has not exercised its authority to prosecute cases because none have been referred to it by federal law enforcement authorities. Furthermore, it noted that violations under the act are misdemeanors, and it generally applies its resources to prosecuting felonies. The Federal Trade Commission periodically checks the Web sites of the organizations that sanction professional boxing events to see whether they have posted the information that they are required to make available to the public and has found them to be adequate. Legislation was recently introduced to significantly amend the act by, among other things, creating a new organization within the Department of Labor that would provide oversight and enforcement of boxing laws. This new federal organization is intended to facilitate more uniform enforcement of federal requirements aimed at enhancing boxers' health, safety, and general interests as well as the integrity of the sport. The Department of Justice and the Federal Trade Commission provided only technical comments on our report. The Association of Boxing Commissions and five state and tribal commissions had concerns about the lack of existing federal enforcement and the economic impact of any additional federal requirements.
MAS is a 207-person unit within Commerce’s International Trade Administration (ITA), as shown in table 1. ITA’s stated mission is to strengthen the competitiveness of U.S. industry by promoting trade and investment and by monitoring and enforcing U.S. trade laws and agreements. MAS is one of four distinct, but interrelated, business units within ITA, each led by an Assistant Secretary. MAS was created in 2004, after the publication of Commerce’s Manufacturing in America report which called for its creation to support the Secretary of Commerce in his role as the federal government’s chief advocate for the manufacturing sector. The report grew out of a 2003 review of the U.S. manufacturing sector initiated by former Secretary of Commerce Donald Evans in response to “unprecedented challenges” facing U.S. manufacturers. Commerce received input from industry associations and large and small manufacturers from critical sectors, and the report summarized manufacturers’ concerns, which included the government’s limited focus on manufacturing and its ability to compete globally. In addition to other recommendations, the report called for the creation of an Assistant Secretary of Commerce for Manufacturing and Services. Congress made recommendations in a 2003 House of Representatives Appropriations Committee Report to realign ITA’s structure and clarify the mission of each business unit to better address an increasingly competitive global economy and growing U.S. trade deficit. The report called for the creation of a better analytic basis for U.S. trade policies and negotiations. It contained several specific actions, including ones targeting U.S. manufacturers. For example, Congress expected the proposed MAS unit to develop tools and expertise to assess industry trends and evaluate the impact of trade agreements, and to identify and address challenges facing manufacturers through an interdepartmental advisory committee. 
Subsequent legislative and agency activity established the reorganization proposed in the committee report and transferred functions and staff across ITA business units, with one result being the reorganization of the Trade Development unit into MAS. Congressional concerns leading to the reorganization of ITA stemmed, to a large degree, from continuing trends in manufacturing and trade. Although the United States remains the world’s largest producer of manufactured products, there have been steep declines in manufacturing employment, as measured by share of hours worked, and in manufacturing’s share of gross domestic product (GDP) (see fig. 1). Manufacturing employment fell from about 28 percent of total U.S. employment in 1969 to about 10 percent in 2009. As a share of nominal GDP, the drop has been similar. However, even with declines, almost 10 percent of the U.S. economy is in manufacturing—roughly 11 million workers. In addition, manufacturing continues to account for a large share of U.S. trade. In 2009, manufactured goods accounted for about 60 percent of U.S. exports. The National Export Initiative brought new emphasis to the federal government’s role in promoting exports. Commerce, with ITA as the lead entity within Commerce, works with other federal government agencies on the Export Promotion Cabinet, which, in collaboration with the Trade Promotion Coordinating Committee (TPCC), is charged with carrying out the initiative and spurring job growth through the doubling of U.S. exports by 2015. ITA also works through the USTR-led interagency structure used to formulate trade policy and represent U.S. trade interests in multilateral and bilateral forums such as the World Trade Organization. MAS analysts cover essentially all nonagricultural sectors of the economy. MAS’s primary goal is to support the competitiveness of U.S. 
industry in domestic and international markets, which it does largely through providing policy advice, research, and analytical support to other parts of Commerce and the U.S. government. MAS has distributed 182 of its 207 staff across three suboffices, with the largest share in its Office of Manufacturing. As shown in figure 2, analysts in the Office of Manufacturing and the Office of Services cover a number of industry sectors, serving as sources of industry information from a trade perspective. Analysts and economists in the Office of Industry Analysis conduct economic and policy analysis to support U.S. industry and evaluate industry recommendations for trade negotiations and U.S. competitiveness. MAS analysts provide industry research and sector-specific analysis to support trade policy efforts of both internal clients in Commerce and external government clients, such as USTR. MAS’s major activities include: collection and dissemination of data on U.S. industry and trade; production of analyses on domestic and international trade and investment policies that can affect competitiveness; identification and resolution of overseas market and trade barriers; and management of the Industry Trade Advisory Committees (see tables 2 and 3 for more detailed descriptions of MAS’s activities; see app. III for examples of industry-specific trade barriers that MAS addressed in fiscal year 2010). The 2004 reorganization of Trade Development into MAS transferred day-to-day servicing of private sector requests to the Commercial Service offices in the field. The portion of MAS’s research and analysis that is used primarily by internal government clients (including Commerce and ITA business units, as well as other executive branch agencies such as USTR and the Department of the Treasury) for sensitive negotiations and policy making is not publicly available. 
However, some of the industry-specific information and data produced by the Office of Industry Analysis is made publicly available. MAS provides other parts of ITA and Commerce with industry sector analyses, which contain background information on the specific industry; evaluation of its competitive strengths and weaknesses both domestically and internationally; and proposed strategic direction for industry sectors (see app. IV for more information on MAS’s sector analyses). According to agency officials, other ITA units use these analyses to address specific trade issues that affect industries and regions. Table 2 shows examples of the types of support MAS provides to its internal Commerce clients. MAS also supports the efforts of other executive agencies, including USTR, through activities that include review and analysis of trade agreements, tariffs, and domestic regulations; involvement in the interagency trade policy-making process; and serving as a source of trade data to the government and public. USTR officials noted the contribution of MAS’s expertise and analysis to the U.S. trade policy process. According to USTR officials, MAS’s position in government, its insight into industry, and its analysis through a trade-focused viewpoint are utilized by USTR in the trade negotiating process. A USTR official further stated that, in addition to MAS’s role in supporting the trade policy process, it also engages directly in trade promotion activities that benefit U.S. companies. USTR noted that MAS can play an important role in providing insight into industry’s concerns and potential reactions to policy changes. USTR officials also commented that MAS provided useful information to them during the U.S.-Korea free trade agreement negotiations. For example, MAS provided detailed trade data on autos traded according to engine sizes, which could affect the application of certain tariffs. 
According to USTR officials, other government or private sector entities do not generally have the capacity for the specialized trade and tariff data that MAS can produce. USTR officials stated they were unable to provide the actual documents MAS provided to USTR because they were part of the internal deliberative process used to develop U.S. negotiating positions. However, they did state that the types of analyses are not unlike MAS state and sector analyses, which are available to the public. Those analyses are developed in collaboration with other ITA units for each free trade agreement and are posted on USTR’s Web site, as well as ITA’s. These analyses highlight the new market opportunities that a particular trade agreement provides to U.S. exporters and the effects of international trade on all 50 states’ economies. In addition to supporting USTR, MAS plays a key role within Commerce in providing support to other government agencies in areas such as supporting trade negotiations and providing data on travel and tourism, a leading U.S. services export. According to MAS officials, MAS provides analysis of and policy advice to Commerce on transactions seeking official financing from U.S. government agencies and multilateral development banks in which the U.S. government has a vote, as well as on key official finance issues under discussion in multilateral forums. These include, for example, the U.S. Export-Import Bank, Overseas Private Investment Corporation, and the Organisation for Economic Co-operation and Development Export Credits Group. Table 3 provides specific examples of the types of MAS activities that support other executive branch agencies. While MAS conducts activities that have similarities to activities of other agencies, officials from MAS’s client agencies stated that MAS can provide analysis that combines industry and trade expertise that is not readily available elsewhere in government. 
For example, government officials we interviewed stated that some other agencies may have technical expertise in particular disciplines but have less of a focus on developing trade policy. Table 4 provides examples of how MAS differs from other ITA units, the U.S. International Trade Commission, and other government agencies. MAS has undertaken an internal review to update its mission and priorities regarding activities and clients and has proposed changes which are currently under departmental review. Since the 2004 reorganization of ITA, MAS’s broad mission statement has not clearly defined its role in government. MAS’s stated mission has been to strengthen the competitiveness of U.S. manufacturing and services by addressing commercial and economic impediments that disadvantage U.S. companies overseas. However, ensuring the competitiveness of U.S. industry requires policies and actions from many government agencies, not just Commerce. For example, U.S. corporate tax policy affects the competitiveness of U.S. industry, but Commerce does not have decision-making authority to regulate tax policy. MAS officials have acknowledged the need for having clearer priorities and mission alignment with the Administration’s National Export Initiative’s goals to successfully serve its clients. A MAS official stated that MAS cannot be “all things to all people,” which has been a consistent concern raised by staff. To address this need, in early summer 2010, MAS officials began an internal review of its mission and activities. As a result, MAS revised its mission statement: to advance the competitiveness of U.S. industries by leveraging its in-depth sector expertise in the development of trade policy and promotion strategies. MAS officials stated that the changes made as a result of the internal review are still in process and must be approved at the departmental level. 
MAS has an annual planning process to identify industry issues and determine actions to address them, but MAS officials acknowledged that these plans do not capture the numerous day-to-day unanticipated requests that come from sources such as the Secretary of Commerce, Executive Office of the President, other Executive Branch agencies, and the Congress. Every year, each industry office develops an assessment of the industry sectors it covers, which, MAS guidance states, should present an analysis of the competitive strengths and weaknesses of the industry and an assessment of the industry’s needs. Each office then uses these industry assessments to develop business plans, which describe activities the office plans to undertake during the year, and links business plan activities to objectives, performance measures, and targets. For example, the fiscal year 2010 business plan for MAS’s Office of Energy and Environment included identifying market access opportunities and barriers, as part of its energy-efficiency initiative. The plan linked this activity to MAS’s performance measure to identify the percentage of industry-specific trade barriers that were removed or prevented. MAS officials told us that its management reviews the plans and uses them in making decisions throughout the year. However, MAS managers told us they prioritize the unanticipated requests based on resource availability and importance of the activity. As part of its internal review, MAS has developed decision criteria to assist analysts in prioritizing their work demands. According to the proposed criteria, which are awaiting departmental approval, MAS will primarily support sectors that have a direct connection to exports or that strategically impact trade. Such a sector must be a high-volume exporter, with $10 billion or more in exports, or be a high-potential-growth sector such as renewable energy. 
MAS management has set the goal of having 75 percent of MAS's resources working toward the National Export Initiative objective of doubling exports by 2015. However, MAS officials noted that they currently conduct work that is important for U.S. global competitiveness but not directly related to exports. For example, MAS provides the business perspective on cases that go before the Committee on Foreign Investment in the United States (Committee) and coordinates Commerce participation in these cases. The Committee is an interagency panel authorized to review transactions that could result in control of a U.S. business by a foreign person. The review is to determine the effect of such transactions on the national security of the United States and is not directly related to exports. To align with its new approach of prioritizing analysis related to exports, MAS management plans to streamline resources devoted to working on those cases that are not of strategic importance. In addition, MAS plans to reduce its effort in analyzing domestic regulations that do not have a significant impact on exports. MAS does not have a mechanism to systematically monitor analysts' workload or the amount of time spent on requests for different clients. MAS officials identified several top-priority clients, but the many unanticipated daily requests can pose difficulties for MAS analysts. Some requests for the Offices of Services and Manufacturing come through ITA's formal "tasker" system and are delegated by senior management to analysts. The "tasker" system is an electronic system through which tasks are assigned to staff and signed off on by management. MAS officials stated that it is useful for tasking out assignments to multiple offices; however, they noted that it does not cover all the work undertaken within MAS.
They noted that for the Office of Industry Analysis, most requests are directly communicated to managers and analysts by a colleague in ITA or another government agency. They stated that entering Office of Industry Analysis workload data into the tasker system is not current practice. Moreover, according to MAS officials, many of the requests involve sensitive documents that cannot be entered into the system in order to comply with security protocols. MAS managers said that the proportion of work captured by the tasker system varies greatly among its program areas, ranging from 85-90 percent for the Offices of Manufacturing and Services to less than 5 percent for the Office of Industry Analysis. Without a way to systematically monitor staff's workload, it is difficult for management to determine how to most efficiently allocate resources. In prior work, we found that for an entity to run and control its operations, management must have relevant, reliable, and timely information. However, MAS did not systematically track the time spent on tasks by different types of clients. After our inquiries, the Deputy Assistant Secretary of MAS's Office of Industry Analysis began monitoring short-term (2 to 3 days) requests by different types of clients and found that approximately 70-75 percent of short-term requests for analysis came from within ITA, 5-10 percent were generated from the rest of Commerce, and the remaining 15-20 percent were from other agencies in the Executive Branch and the Congress. However, these data reflect only the requests (which average 5,400 annually, according to MAS officials) submitted to the 41 staff in the Office of Industry Analysis, not MAS offices overall, and they do not include information that involves security protocols or longer-term projects. According to agency officials, these short-term requests are ad hoc and often time sensitive and must be balanced with the broader long-term activities MAS analysts undertake.
Given that management does not have reliable and timely information about staff's workload, it may be difficult for management to ensure staff are working on the highest-priority efforts. Officials from agencies that work with MAS told us they understand MAS's role and contributions to trade policy and competitiveness issues, but these contributions are not readily apparent to Congress or the general public. This is partly because MAS's policy-development work is not publicly available. Although ITA's Web site does provide public access to industry, trade, and economic data and analysis produced in MAS, it provides limited information about MAS's role and activities. Consequently, the public and Congress may have limited information about MAS's contributions to policy making. This lack of transparency may hinder Congress's ability to provide effective oversight of MAS's activities. There is little publicly available information about MAS's priorities and contributions to policy making. As noted above, MAS's work is largely intended for use by internal government clients, and much of that output is not in the form of products that are externally available. Previously, we found that, in addition to adequate internal communications, management should ensure there are adequate means of communicating with external stakeholders, which can include Congress. An organization's Web site is typically a readily accessible source of information to the public about the organization's mission and activities. A MAS official stated that when she worked in the private sector, she relied on ITA's Web sites for the analysis and information she needed and found them organized in a manner useful for industry users. In addition, officials from agencies that rely on MAS for analytical support stated they generally communicate with MAS via phone or e-mail and do not rely on the Web site.
Further, MAS and other officials stated that ITA's internal Web site, or intranet, provides a primary means of communication among ITA units and, thus, a public ITA Web site is less important for that purpose. For the general public, ITA's Web sites provide limited information about MAS's priorities and activities. According to a 2007 Commerce Inspector General report, ITA's Web sites have duplicative, confusing, disorganized, and outdated pages. In a December 2010 meeting, members of ITA's Web Governance Board (Board) told us that the issues raised in the report are still relevant. The board observed that Web sites are important tools to increase public awareness of ITA programs and expertise, not only for the business community but also for external stakeholders who may have difficulty finding useful information about MAS as an organization and its role in trade policy. The board members told us that because ITA does not have a centrally directed web management office, each business unit is responsible for managing its own web presence, but there is great variation in the offices' abilities and technical expertise. A MAS official stated that MAS, like some of the other ITA business units, does not have dedicated full-time staff focused on maintaining MAS's web presence. Instead, several MAS analysts have the auxiliary duty of working on MAS's Web site in addition to their other duties. Consequently, MAS and ITA as a whole lack cohesive, transparent, and consistent Web sites. This may also hinder MAS's ability to effectively communicate its priorities and contributions to stakeholders, Congress, and the public. Due to its policy-support role, MAS's ability to meet its performance targets, such as breaking down trade barriers faced by U.S. firms, depends on actions from other agencies, Congress, businesses, and foreign governments.
MAS has three main objectives linked to five outcome-based performance measures that it reports on externally in assessing its performance. One objective—to ensure appropriate industry and other stakeholder input into trade policy development, negotiations, and implementation—covers much of what MAS does, including addressing industry-specific trade barriers and working with USTR on trade agreements. MAS has also developed internal measures that are used to track activities and for planning but are not reported externally (see tables 5 and 6). MAS's fiscal year 2011 budget justification stated that one limitation to meeting its performance targets is that many factors—including U.S. business cooperation, global trade trends, political developments, and the extent to which foreign governments create barriers or act inconsistently with trade obligations—affect the number of barriers removed. MAS's performance-measurement process attempts to address this challenge by breaking its outcome-based measures into specific milestones, according to officials. The specific milestones—such as identifying market-access issues of interest to U.S. industry or industry-specific opportunities resulting from a trade agreement—are analyses or industry-outreach steps that MAS itself can undertake and complete, even though MAS cannot control the ultimate impact of those actions. These milestones, while more specific than the overall performance measures, are nonetheless qualitative. MAS sets targets for its performance measures annually. In Commerce's fiscal year 2010 Performance and Accountability report, MAS reported that it had met or exceeded all of its targets for its four performance measures. For example, MAS reported that it had met its target of removing or preventing 30 percent of the industry-specific trade barriers it addressed.
In both 2008 and 2009, Commerce's Performance and Accountability reports stated that MAS exceeded three of its targets but did not meet its target for its work on trade agreements, because the Administration suspended work on two trade agreements MAS was working on. MAS does not systematically obtain feedback from all its clients on whether it is meeting their individual needs. In previous work, we stated that it is important for agency managers to collect performance data that are sufficient to support decision making. A MAS official stated that MAS does not have the resources to systematically survey its clients about their satisfaction with MAS's activities. In addition, MAS officials stated they do not view administering a survey to obtain feedback as feasible, since many of their clients are senior policy makers. MAS officials said that if their clients were not satisfied with their work, the clients would not continue to request analysis and information. Officials stated that MAS does obtain feedback from its ITA clients through periodic meetings with managers in the other ITA business units. Although MAS does not obtain systematic feedback from all its clients, officials from other government agencies told us that while there is a great deal of industry expertise in critical areas within MAS, the level of expertise varies by sector, and some gaps do exist. Officials from the Departments of State and Treasury stated that MAS provided valuable analysis on export financing for airplanes for the Organisation for Economic Co-operation and Development interagency delegation on export financing. In addition, officials from the Department of Transportation stated that MAS's knowledge of trade policy complemented the Department's transportation-policy expertise and technical understanding of engineering and infrastructure, allowing it to better serve industry.
However, officials from State and USTR noted that the level of expertise in MAS is not consistent across all sectors and, consequently, the usefulness and relevance of MAS input can be uneven. In addition, an Import Administration official stated that while it relies on MAS to assist with industry analysis of foreign trade zone applicants, the coverage of some sectors is uneven. MAS officials recognized that there is unevenness in the level of expertise across sectors and said they are trying to address the issue as part of the internal-review process and by providing better training for analysts. Some of MAS's activities fall outside its external and internal measures. For example, MAS officials stated that it is difficult to measure some of the economic analysis work that does not contribute directly to breaking down trade barriers or toward developing a trade agreement. Some of this work is used by senior government officials in Commerce and other parts of the Executive Branch. Further, MAS activities that contribute to the performance measures of other parts of ITA are reported not by MAS but by the unit with primary responsibility for each specific performance measure. MAS's strategic plan includes performance measures related to the goals and objectives that other ITA units have primary responsibility for and on which MAS does not report. As a result, the performance measures MAS reports on do not include its contributions to the other ITA units' goals. For example, MAS's contributions to antidumping and countervailing duty proceedings are reported by Import Administration. Responding to our request, MAS officials compiled two large volumes of examples (i.e., analysis documents, information used in trade negotiations, and regulatory cost analyses) illustrating MAS activities over the past few years. MAS officials told us this was not an exercise the office conducts on a regular basis and that it was very useful.
The officials said that it provided a way for staff to link their activities to broad outcomes separately from the overall performance measures. However, they added that the exercise was resource intensive, and they do not have plans to regularly repeat this type of effort. At a time of heightened interest in both expanding U.S. exports and streamlining government operations to lower costs and avoid inefficiencies, the role and effectiveness of MAS within ITA warrant particular attention. MAS, created out of an existing Commerce unit in 2004 to focus more explicitly on U.S. competitiveness, provides its services primarily to government clients. Unlike other Commerce offices such as Commercial Service and the Import Administration, or independent agencies such as USITC, MAS functions largely as an internal consulting group for government trade policy. MAS's clients report that MAS provides analysis not readily available elsewhere in the government. Because of the reduced visibility that accompanies this role, it is important for MAS to clearly define its contribution to the Administration's trade and economic policy. We found that MAS continues to refine its mission and develop decision criteria to prioritize its activities, and these enhancements are still undergoing departmental review. Given that MAS does not face a market test for its services, a clear set of priorities could help it meet its clients' needs. MAS faces challenges in measuring its contributions to trade policy because, as a policy support organization, it may have difficulty isolating its specific contributions to the trade policy process. MAS has established a detailed process to measure its performance, but the office does not systematically obtain feedback from all its clients or track its contributions to major policy decisions that fall outside its performance measures.
This makes it difficult for MAS to accurately determine the extent to which it adds value or to identify opportunities for improvement. In terms of communication, MAS also faces challenges in creating transparency because of the nature of its activities. Because MAS provides industry and trade policy expertise, seldom through products that are publicly attributable to MAS, there is limited visibility of MAS’s contributions in key work areas. Its Web site could be a useful communication tool, but as part of ITA, MAS relies on ITA-level support for management of the Web site. Nonetheless, since its support functions are central to its contributions, presenting them more clearly would be useful for congressional and stakeholder oversight of its activities. MAS is in the process of finalizing its mission statement and decision criteria and is taking steps to more clearly prioritize its activities and better align its resources to meet the goals of the Administration’s National Export Initiative. Moving this initiative forward within Commerce will be an important step. To better assure MAS is meeting the needs of its clients, we recommend that the Secretary of Commerce, in concert with MAS management, take the following four actions: 1. To facilitate MAS’s efforts to prioritize its activities, establish time frames to finalize the clarification of MAS’s mission and decision criteria. 2. To better enable MAS to target its resources, ensure MAS has a way to more systematically monitor how staff time is allocated across various efforts. 3. To improve transparency and ensure that priorities are consistent with those of key stakeholders, explore methods for MAS to more clearly communicate its mission, priorities, and activities to clients, stakeholders, the public, and Congress. These methods could include, among others, working with ITA leadership to develop a strategic plan with associated time frames to improve ITA’s web presence and management. 4. 
In order to ascertain whether MAS is meeting the needs of its government clients involved in the trade policy process, explore ways to more systematically obtain information on the value it is adding. This could include collecting feedback from its clients on its activities more systematically and tracking the outcomes of the analyses it provides for major trade policy decisions. We requested comments on a draft of this report from Commerce. In its comment letter, Commerce stated that it fully concurred with our findings and recommendations. The letter also said that MAS’s current redefinition plan is expected to be completed by October 2011. Commerce’s complete letter is reprinted in appendix V. As agreed with your offices, unless you publicly release the contents of the report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to interested congressional committees and the Secretary of Commerce. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-4347 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Other GAO contact and staff acknowledgements are listed in appendix VI. The objectives of this report were to examine (1) the Office of Manufacturing and Services’s (MAS) goals and activities, and how the types of analysis and expertise it provides compare with those provided by other government entities; (2) how MAS prioritizes its activities and targets its resources; and (3) the extent to which MAS tracks and reports its contributions to increasing U.S. competitiveness and trade. 
To assess MAS’s activities and how they compare with the types of analysis and expertise provided by other government entities, we reviewed relevant documents, including legislative authority, Department of Commerce’s (Commerce) 2004 Manufacturing in America report, National Export Initiative documents, and budget and staffing information. We reviewed the International Trade Administration’s (ITA) strategic plan and Web site to evaluate how the type of work MAS undertakes compares to that conducted by the other ITA business units. Additionally, we reviewed information on other agencies’ Web sites to obtain information on the types of analysis and expertise they offer and initiatives the agencies participated in with MAS. We discussed similarities and differences in the types of work that MAS’s government clients conduct with officials with knowledge of the unit. We interviewed Commerce officials from the Office of the Secretary and National Institute of Standards and Technology and representatives from the other ITA business units—Commercial Service, Market Access and Compliance, and Import Administration. We also interviewed officials from other U.S. government agencies, including International Trade Commission, Office of the United States Trade Representative, the Departments of State, Energy, Transportation, Treasury, Environmental Protection Agency, and Office of Management and Budget (OMB). Additionally, we interviewed the Industry Trade Advisory Committees’ designated federal officers (DFO), representatives from two industry associations, and the chairman of one of the committees. To gain additional information on MAS’s activities, we also attended multiple MAS events, including its Manufacture America conferences in Pittsburgh, Pennsylvania, and Chicago, Illinois; a meeting of the U.S. government’s Manufacturing Council; and a congressional event in Washington D.C., for ITA’s Market Development Cooperator Program. 
We also reviewed documents provided by MAS containing examples of how its work has been used by other parts of Commerce and other U.S. government agencies over the last 5 years. To obtain information on how MAS prioritizes its activities and targets its resources, we reviewed documents related to MAS's proposed decision-making criteria and revised mission and discussed them with MAS officials. Additionally, to assess the availability of information to the public about MAS's role in the U.S. government, we reviewed the Commerce Inspector General's 2007 report on trade coordination efforts and met with ITA's Web Governance Board. In order to understand any past and current issues related to MAS's priorities, we reviewed OMB's 2006 Program Assessment Rating Tool (PART) report and National Export Initiative documents. We also interviewed MAS leadership from the offices of the Assistant Secretary and Planning, Coordination, and Management; officials from its offices of Manufacturing, Services, and Industry Analysis; the administrators of MAS's assignment tasker system; and ITA's deputy chief financial officer. To assess the extent to which MAS tracks and reports its contributions to increasing U.S. competitiveness and trade, we reviewed documents related to its performance measurement system, including MAS's industry assessments, business plans, performance targets, and Commerce's fiscal year 2010 Performance and Accountability Report. We also reviewed past GAO work on the principles of effective performance measurement. In order to ensure consideration of any past issues concerning MAS's performance measurement, we reviewed OMB's 2006 PART report. We also discussed MAS's performance measurement with knowledgeable officials from MAS, ITA, and OMB. In order to present information on manufacturing's share of U.S. gross domestic product (GDP), employment, and exports, we relied on data from Commerce.
Information on GDP and employment was obtained from the Bureau of Economic Analysis (BEA). We used calculations of hours worked to report shares of employment. To report the numbers of employees in the U.S. economy, we computed full-time equivalents, assuming 2,080 hours worked in each calendar year. Information on manufacturing exports was obtained from MAS's TradeStats Express. To compute the U.S. share of world-wide production and exports in manufacturing, we relied on the World Bank's Data Bank. To assess the reliability of the World Bank and Commerce data, we interviewed knowledgeable agency officials. To the extent possible, we compared values against alternative sources. We found that the data were sufficiently reliable for the purposes of presenting historic trends on manufacturing production, exports, and employment. We conducted this performance audit from August 2010 to June 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Over the past two decades, manufacturing's share of the output of the U.S. economy has fallen. In 1989, manufacturing's output, expressed in constant 2009 dollars to correct for inflation, was approximately $1.5 trillion, while total output was approximately $8.6 trillion. By 2009, while manufacturing output had grown to $1.6 trillion, total output had grown to more than $14 trillion. Using data from BEA, figure 3 shows manufacturing's share of total U.S. output, measured by GDP. As the figure shows, from 1989 to 2009, U.S. manufacturing as a share of GDP generally declined steadily, falling from a peak of about 17 percent to about 11 percent. While the share of U.S.
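The full-time-equivalent conversion and the share-of-output calculations described above can be sketched as follows. This is an illustrative calculation only, using the approximate figures quoted in the text; the constant and function names are ours, not part of any GAO, BEA, or Commerce tooling.

```python
# Sketch of two calculations from the methodology above:
# (1) converting total hours worked into full-time-equivalent (FTE)
#     employees, assuming 2,080 hours worked per calendar year, and
# (2) expressing manufacturing output as a percentage share of GDP.
# Dollar figures are the approximate values quoted in the text,
# in trillions of constant 2009 dollars.

HOURS_PER_FTE_YEAR = 2_080  # assumed full-time hours in a calendar year

def fte(total_hours_worked: float) -> float:
    """Convert total hours worked in a year into FTE employees."""
    return total_hours_worked / HOURS_PER_FTE_YEAR

def share_pct(part: float, total: float) -> float:
    """Express `part` as a percentage of `total`."""
    return 100.0 * part / total

# Manufacturing's share of GDP:
print(round(share_pct(1.5, 8.6)))   # 1989: about 17 percent
print(round(share_pct(1.6, 14.0)))  # 2009: about 11 percent
```

The same `share_pct` arithmetic reproduces the employment shares quoted later (roughly 18 million of 94 million hours-based workers in 1989, or almost 20 percent).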
economic output devoted to manufacturing has fallen, the United States remains a leading producer of manufactured goods. Table 7 uses data from the World Bank to compare selected countries' proportions of world production in manufacturing in 2000 and 2008. These four countries collectively represent more than 50 percent of world manufacturing. As the table shows, the U.S. share of world production in manufacturing fell from about 26 percent to about 18 percent. During the same period, China's share increased from about 7 percent to about 15 percent. Over the past 20 years, the percentage of U.S. exports represented by manufacturing has remained largely unchanged. In 1989, correcting for inflation, the United States exported approximately $800 billion of goods and services, of which about $450 billion were manufactured goods, according to BEA and MAS data. In 2009, the United States exported approximately $1.6 trillion, of which about $917 billion were manufactured goods. Figure 4 shows the share of total exports that were in manufacturing. Over this time, manufacturing's share of exported goods and services has stayed between 58 and 65 percent. The U.S. share of world-wide exported manufactured goods has declined over the past decade. Table 8 shows manufacturing exports for selected countries in 2000 and in 2009, using World Bank data. In 2000, the United States represented about 13 percent of manufactured merchandise exported, which declined to about 8 percent in 2009. During this period, China's contribution to world manufacturing exports rose from about 5 percent in 2000 to about 13 percent in 2009. From 1989 to 2009, manufacturing's share of employment has fallen. In 1989, measured by hours worked by full- and part-time employees, there were approximately 18 million U.S. workers in manufacturing and approximately 94 million in the entire U.S. economy.
In 2009, the number employed in manufacturing had fallen to approximately 11 million, while the number employed in the United States had grown to about 107 million. Figure 5 shows the share of hours worked in manufacturing, which fell from almost 20 percent in 1989 to about 10 percent in 2009. Over the past 10 years, the drop in manufacturing employment has been matched by an increase in the share of employment in services. Table 9 shows employment in the United States by sector for 2000 and 2009. As the table shows, over this period manufacturing's share has fallen by about 5 percentage points, while the service industry's share has increased by about 4 percentage points.

Examples of the trade initiatives MAS has worked on include: China insurance branching and licensing; Japan postal insurance (new); Canada and Mexico CITEL Mutual Recognition Agreement implementation; China Joint Commission on Commerce and Trade health; environmental trade liberalization at the World Trade Organization; and European Union (EU) access for U.S. wine (Phase II).

Recently, in response to the National Export Initiative, MAS began to prepare "global sector strategies," which, according to MAS officials, identify and provide information on potential export markets for high-priority sectors; identify obstacles and risks associated with the sector; and provide policy recommendations for decision makers. Agency officials stated that the global sector strategies are based on a common template and will be MAS's signature product, with portions available to the public. Currently, one strategy, for aerospace, has been completed. An excerpt is reproduced in figure 6. The other 14 strategies listed below are in various stages of completion: Automotive; Basic chemicals; Civil nuclear; Travel and tourism; Professional services; Digital content; Medical technologies; Building products; Semiconductor industry; Franchising/distribution; Renewable energy; Supply chain and logistics; Architecture, engineering, and construction; and Insurance and asset management.
In addition to the individual named above, Celia Thomas, Assistant Director; Christina Werth; Benjamin Bolitzer; Jacob Beier; Gezu Bekele; and Margaret McKenna made key contributions to this report. Other contributors include Karen Deans, Ernie Jackson, David Dornisch, Elizabeth Curda, Kim Frankena, Etana Finkler, Rob Ball, and Joe Carney.

Declining U.S. manufacturing has been an issue of continuing concern for policymakers; this was reflected in the Obama Administration's (Administration) 2010 announcement of the National Export Initiative. The Administration has also shown interest in improving the efficiency of the federal support of trade operations. In 2004, the Office of Manufacturing and Services (MAS) was established within the Department of Commerce's (Commerce) International Trade Administration (ITA) to enhance the global competitiveness of U.S. industry. GAO was asked to examine (1) MAS's goals and activities and how they compare with those of other government entities; (2) how MAS prioritizes its activities and targets its resources; and (3) the extent to which MAS tracks and reports its efforts. GAO reviewed agency documents and interviewed officials from MAS, other parts of ITA and Commerce, and other agencies. MAS's primary goal is to support the competitiveness of U.S. industry, which it does largely through combining its industry and trade expertise to support other parts of Commerce, including other parts of ITA, and external U.S. government clients, such as the Office of the U.S. Trade Representative (USTR). The major activities of MAS's offices include: collection and dissemination of data on U.S. industry and trade, production of analyses on policies that can affect competitiveness, and identification and resolution of overseas trade barriers.
While some activities may seem similar to those of other agencies, such as USTR, officials from MAS's client agencies stated that MAS's combination of industry and trade expertise is not readily available to them elsewhere in the government. MAS has undertaken an internal review to update its mission and priorities regarding activities and clients and has proposed changes currently under departmental review. MAS does not have a mechanism to systematically monitor analysts' workload or the amount of time spent on requests for different clients. The absence of workload data may hinder its ability to effectively allocate its resources to address the needs of the trade policy process. Further, MAS's role has not been clearly communicated, and ITA's Web site provides limited information about MAS. Consequently, the public and Congress have limited information about MAS's activities and contributions to policy making. MAS's ability to meet its performance targets largely depends on actions from other government agencies and other parties, making isolating its contributions difficult. MAS developed a series of steps, or milestones, to help isolate its contributions to trade policy outcomes, although officials acknowledged continuing challenges. Further, MAS does not systematically obtain feedback on its performance from the agencies to which it provides analysis, nor does it track its contributions to major policy decisions that fall outside its externally reported performance targets. This makes it difficult to assess the extent to which MAS's work adds value to the trade policy process. GAO recommends that the Secretary of Commerce take actions, in concert with MAS, to finalize the refocusing of MAS's mission and priorities, systematically monitor workload, and more systematically obtain and communicate information on the value MAS adds to the trade policy process.
In its comments, Commerce concurred with the findings and recommendations and expects to make progress by October 2011.
As a result of a 1995 Base Closure and Realignment (BRAC) Commission decision, Kelly Air Force Base, Texas, is to be realigned, and the San Antonio Air Logistics Center, including its Air Force maintenance depot, is to be closed by July 2001. Similarly, McClellan Air Force Base, California, and the Sacramento Air Logistics Center, including its Air Force maintenance depot, are to be closed by July 2001. To mitigate the impact of the closures on the local communities and center employees, the administration announced its decision to maintain certain employment levels at these locations. Privatization-in-place was one initiative for achieving these employment goals. Since that time, Congress and the administration have debated the process and procedures for deciding where and by whom the depot maintenance workloads at the closing depots should be performed. Central to this debate are concerns about the excess facility capacity at the Air Force’s three remaining maintenance depots and the legislative requirement in 10 U.S.C. 2469 that, for workloads exceeding $3 million in value, a public-private competition must be held before the workloads can be moved from a public depot to a private sector company. Because of congressional concerns raised in 1996, the Air Force revised its privatization-in-place plans to provide for competitions between the public and private sectors as a means to decide where the depot maintenance workloads would be performed. The first competition was for the C-5 aircraft depot maintenance workload, which had been performed at the San Antonio depot. The Air Force awarded the workload to the Warner Robins depot in Georgia on September 4, 1997. During 1997, Congress continued to oversee DOD’s strategy for allocating workloads currently performed at the closing depots.
The 1998 Defense Authorization Act required that we and DOD analyze various issues related to the competitions at the closing depots and report to Congress regarding several areas, which are discussed in appendix I. One of these areas involves the combination into single solicitations of aircraft and multi-commodity workloads at the Sacramento depot and multiengine workloads at the San Antonio depot. Appendix II provides additional information about the maintenance workloads currently performed at these facilities. Under the act, a solicitation may be issued for a single contract for the performance of multiple depot-level maintenance or repair workloads. However, the Secretary of Defense must first (1) determine in writing that the individual workloads cannot be performed as logically and economically without combination by sources that are potentially qualified to submit an offer and be awarded a contract to perform those individual workloads and (2) submit a report to Congress setting forth the reasons for the determination. Further, the Air Force cannot issue a solicitation for combined workloads until at least 60 days after the Secretary submits the required report. Our January 20, 1998, report made two key points about DOD’s determinations. First, we stated that there was no analysis of the logic and economies associated with having the workload performed individually by potentially qualified offerors. Consequently, there was no support for determining that the individual workloads cannot as logically and economically be performed without combination. Second, we noted that the reports and available supporting data did not adequately support DOD’s determinations. Appendix III contains a summary of this report.
We discussed our findings in a February 24, 1998, hearing conducted by the Subcommittee on Military Readiness, House National Security Committee, and a March 4, 1998, hearing conducted by the Subcommittee on Readiness, Senate Armed Services Committee. At those hearings, Office of Secretary of Defense and Air Force officials provided additional rationale supporting DOD’s determinations to combine the workloads. Subcommittee members expressed their concerns regarding whether the new data provided adequate support for the determinations. Both subcommittees requested that we analyze the additional data and report to them on our findings. On February 24, 1998, the Air Force provided additional information in support of DOD’s December 19, 1997, determinations. This information included two documents: a white paper containing the rationale for combining the Sacramento depot’s aircraft and commodity workloads into a single solicitation and a report containing the rationale for combining the San Antonio depot’s engine workloads into a single solicitation. Air Force officials stated that the decision to combine most of the aircraft and commodity workloads at the Sacramento depot and the engine workloads at the San Antonio depot was made before the mandate in the 1998 National Defense Authorization Act. The officials also said that the process used to make the decision was valid and that a reassessment of alternative acquisition strategies was not required in response to the act. The Sacramento white paper described the rationale supporting the workload combination determination as an iterative process that evolved over a 2-1/2-year period beginning in September 1995. This process included conferences and discussions with potential offerors, strategy panels with Air Force acquisition experts, repair base analyses, unsolicited input from industry representatives, and reviews of recent DOD outsourcing efforts.
Sacramento officials explained that the initial approach involved a privatization-in-place strategy, including separate solicitations for seven individual workloads and separate transition schedules for some of the individual workloads. In July 1996, the Air Force decided to conduct a public-private competition combining the Sacramento KC-135 and A-10 aircraft and various commodity workloads, including hydraulics, instruments and avionics, and electrical accessories. According to Air Force officials, the Air Force has pursued workload combination as its acquisition strategy since that time. The San Antonio report recognized that the Air Force had not conducted an economic analysis regarding the potential savings of issuing single versus multiple solicitations. Instead, the Air Force relied on reviews of engine workload data, repair processes, and market surveys to identify the acquisition strategy for determining how San Antonio’s engine workloads will be performed in the future. Both documents discuss the logic and economies supporting DOD’s determinations to combine workloads into a single solicitation at each of the closing depots. The key points in the Air Force’s rationale and support for DOD’s determinations are summarized below. The Air Force stated that its decisions to combine the Sacramento and San Antonio workloads into single solicitations at each location were based on the following logic factors: Workload commonality and overhead sharing. The Air Force believes that shared personnel skills and backshops provide an opportunity for achieving improved efficiencies and lower prices in peacetime while providing flexibility to better plan for wartime surge requirements. Air Force officials noted that spreading fixed overhead costs for functions such as planning, scheduling, and materiel support over a larger workload base provides opportunities for improved economies and reduced costs at both the Sacramento and San Antonio depots.
Further, using the same backshops for multiple workloads should reduce the overall cost of the combined work at each location. Avoidance of multiple transitions and personnel turbulence. The Air Force believes that managing multiple transitions increases the readiness risks associated with closing complex, integrated industrial facilities. Further, delaying the award of the contract by splitting the competition into multiple awards could subject the workforce to multiple reduction-in-force actions, which would disrupt the skill mix and result in productivity losses and production delays that adversely affect the readiness of the Air Force’s operational units. Workload stability. This factor was also cited to support the rationale at the Sacramento depot. The Air Force stated that, because the aircraft workload is stable, it can be competed using a guaranteed minimum quantity. However, the Air Force noted that many of the commodity workloads have been erratic and therefore cannot be competed with a minimum guaranteed workload. Consequently, the Air Force stated that combining the aircraft and commodity workloads into one solicitation would allow the winning offeror to smooth peaks in one workload segment and offset valleys in other workload segments, providing a more stable production capability. Further, the Air Force stated that a more stable workload would increase efficiency and savings by providing potential offerors a more reliable basis for employment levels and cost planning. Market surveys. To support workload consolidation at the San Antonio depot, the Air Force said that the majority of respondents to its October 1995 market survey indicated a preference for a single contract for the C-5 aircraft and a single contract for the combined engine workloads. Further, the Air Force concluded from survey results that more competitors would participate under the single solicitation for the multiple engine workloads. 
The Air Force cited the following factors supporting the economies of workload combination at the closing depots: Time delays. The Sacramento white paper stated that separating the Sacramento workload into five segments would delay contract award and transition completion dates by 16 months, which would impact closure, increase costs, and reduce projected BRAC savings. Similarly, the San Antonio report stated that separating the San Antonio engine workloads into three solicitations would extend the planned contract award from 225 to 740 days, impacting closure and increasing costs. Cost increases. The Air Force stated that conducting multiple competitions at Sacramento could result in cost increases to the offerors and the government, which the Air Force estimates to be between $22 million and $130 million. At San Antonio, the Air Force estimated the increased cost to be between $92 million and $259 million. Increased risks. The Air Force believes that changing the strategy from single to multiple awards would increase risks and translate into higher costs. The additional rationale that the Air Force provided to further justify DOD’s December 19, 1997, determinations is not well supported. We identified significant weaknesses in both the logic and economic rationale presented by the Air Force to support DOD’s determinations to combine workloads at the closing Sacramento and San Antonio depots into single solicitations at each location. First, the Air Force did not adequately consider some other viable alternatives as a part of its assessment. Second, some assumptions are credible only if the combined workloads are performed in place.
Third, each of the supporting points has specific weaknesses that create additional questions regarding the adequacy of DOD’s support for workload combination determinations. Our concerns regarding the economic rationale are discussed in the following section. The Air Force did not consider, or gave only limited consideration to, some feasible alternatives to combining the workloads at the two locations. According to the 1998 Defense Authorization Act, alternatives that appear logical and potentially cost-effective should have been evaluated. Options not considered include (1) using solicitations that permit the competitors to offer on any combination of workloads, from one to all, and (2) having another contracting activity conduct simultaneous competitions for segments of the Sacramento or San Antonio workloads to avoid delays from sequential competitions for individual segments of the workload. Our review indicates that several of the assumptions supporting the Air Force’s rationale are questionable unless the workload remains at the existing locations. For example, the Air Force states that combining workloads will preclude multiple workload transitions, thereby avoiding multiple reduction-in-force actions, limiting personnel turbulence, and minimizing readiness impacts. Further, the Air Force states that, for the Sacramento workload, combining aircraft and commodities into a single solicitation would provide the winning offeror the ability to shift employees between workload segments. The advantages cited by the Air Force are not likely to occur if the workloads are performed at a single location other than Sacramento and San Antonio or at multiple locations. We identified other weaknesses or deficiencies with each of the factors cited by the Air Force, including the following: Workload commonality and sharing of overhead.
The Air Force’s position that realizing efficiencies from shared personnel and facilities at Sacramento and San Antonio is best achieved with a single solicitation for combined workloads is questionable. The efficiencies achievable from shared facilities and personnel may be greater when the combined workloads are more similar to one another than those being combined under the Sacramento and San Antonio solicitations. For example, the Air Force may achieve greater efficiency by combining (1) the management of the Sacramento KC-135 workloads with other KC-135 workloads to be competed and/or (2) the San Antonio Air Force T-56 engine workloads with other engine workloads also to be competed. Both of these options provide opportunities for significant cost savings that were not considered by the Air Force. Avoidance of multiple transitions and personnel turbulence. We realize that risks can be associated with the transition of any depot maintenance workload. However, we have reported that there is no inherent reason why these workloads cannot be transitioned without impacting equipment readiness if the transition is properly planned and effectively implemented. Further, DOD has successfully closed 17 depots over the past 10 years and has successfully managed multiple transitions and the resulting sequential personnel reductions. Workload stability for commodities and aircraft repair at Sacramento. The Air Force’s data do not support the conclusion that the inherent inefficiencies of the commodity workload are improved by combining it with the more predictable and consistent aircraft workload. For example, even though the Air Force states that stability will come from being able to transfer employees between the aircraft and commodity workloads, this transfer has rarely happened.
Although the Air Force has had the ability to shift workers among the aircraft and commodity workloads, Sacramento depot personnel data shows that on average, each year over the last 7 years, only 22 of the approximately 1,500 wage grade depot employees have been shifted between aircraft and commodities. Results of market surveys. We question whether the results of the 1995 market survey are applicable to the Air Force’s current position that combining the San Antonio workloads is more logical and economical than issuing individual solicitations. The survey was designed to collect potential offeror preferences under the then-current acquisition strategy of privatizing the San Antonio aircraft and engine workloads in place. However, in 1996 the Air Force revised this acquisition strategy and adopted a public-private competition strategy. Further, in 1997 the Air Force conducted a market analysis of engine manufacturing companies to determine the availability and interest of public and private sector sources to perform the required repair of engines currently maintained in Air Force depots. In this survey, engine manufacturers indicated a preference for repairing their own engines and were less interested in repairing other engines. Additionally, our discussions with four potential offerors for the engine workload indicated that they are interested in participating regardless of whether the workloads are combined into a single solicitation. We also identified two significant weaknesses in the Air Force’s economic analyses supporting the combination of workloads into single solicitations at each site. First, and most significantly, the analyses were not comprehensive or consistent estimates of the comparative costs associated with the alternatives examined. Second, the cost estimates are questionable for several key categories. 
The Air Force analyses stated that workload combination would save $22 million to $130 million at Sacramento and $92 million to $259 million at San Antonio. These figures represent estimates of costs associated with administering the additional contracts and delaying contract award and transition. However, the estimates contain two significant weaknesses. First, all costs associated with performing the work are not included. For example, the analyses did not consider the cost of performing maintenance operations, including the costs of labor, parts, and overhead required to perform the repair under the two alternatives considered, or the additional layer of cost associated with subcontracting under the combined workload package scenario. Also, the analyses did not recognize the potential cost benefits of increased competition resulting from solicitations for individual workloads. Further, because the estimated value of the workload at these locations is $2.4 billion at Sacramento and $8 billion at San Antonio, the effect of not considering these costs could significantly impact the outcome of the analyses. To illustrate the significance, a small difference of, for example, 5 percent between cost estimates for single versus multiple solicitations would represent $120 million and $400 million for the Sacramento and San Antonio workloads, respectively. These amounts would materially affect the savings ranges projected by the Air Force. Second, the cost estimates for the two locations did not use consistent cost elements. For example, the San Antonio estimate included a $40-million cost associated with delaying depot closure, which would reduce the amount of estimated savings, whereas the Sacramento estimate did not consider such costs. We believe costs associated with delaying closure are relevant to both locations, although we have some questions about the accuracy of the $40-million cost estimate.
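The magnitude comparison above can be verified with a short calculation; the 5 percent cost difference is our illustrative assumption, not an Air Force estimate:

```python
# Illustrative sensitivity check: how a small percentage cost difference between
# single and multiple solicitations compares with the Air Force's claimed savings.
# The 5 percent figure is a hypothetical example from the report's illustration.

workload_value = {"Sacramento": 2.4e9, "San Antonio": 8.0e9}  # estimated workload value ($)
claimed_savings = {"Sacramento": (22e6, 130e6), "San Antonio": (92e6, 259e6)}  # ($)

for site, value in workload_value.items():
    swing = 0.05 * value  # effect of a 5 percent single- vs. multiple-solicitation difference
    low, high = claimed_savings[site]
    print(f"{site}: 5% of ${value / 1e9:.1f}B = ${swing / 1e6:.0f}M; "
          f"claimed savings range ${low / 1e6:.0f}M to ${high / 1e6:.0f}M")
```

The resulting $120 million and $400 million swings match the figures cited above and rival or exceed the claimed savings ranges, which is why omitting performance costs could materially change the outcome of the analyses.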
Notwithstanding our concerns about the comprehensiveness and consistency of the cost analysis, our review of the cost data provided indicates that the estimates are overstated or questionable in several areas, including the following: Contract administration costs at Sacramento. The Sacramento estimate included a 1.9 percent estimate for contract administration costs resulting from having more than one contract for the Sacramento workload. This estimate was based on a contractor industrial performance metrics study. The estimate may be overstated because participants in the original study found that the cost impacts projected by the contractor were significantly overstated. For example, five participants prepared estimates of the top 10 cost drivers identified in the contractor study and found that the study estimates were overstated from 14 to 70 percent. Closure savings costs. As mentioned above, the San Antonio cost estimate included a $40-million cost associated with delaying depot closure. However, this estimate is overstated. The $40-million estimate is based on the closure of all logistics operations, some of which will not close until 2001. According to the BRAC estimates, savings from closing the depot maintenance operations provided only 21 percent of the estimated annual savings from closure. At this rate, the cost of delay should be no higher than $8.4 million rather than the $40 million estimated in the San Antonio report. Transition costs. Sacramento included a cost estimate for extending the transition period. Under the multi-contract approach, Sacramento assumed workload segments would be transitioned incrementally over a 20- to 24-month period. Although the Air Force may incur additional transition costs under a multiple contract strategy, we found transition costs were overstated. The Air Force’s transition cost methodology assumed that each individual winning offeror would require the full 20 to 24 months to complete the transition.
However, Sacramento officials recognized that the contractors’ transitions for the individual workload segments will not require the entire 20- to 24-month period. The officials stated that they were unable to separately identify a more precise cost estimate. The Air Force’s support for DOD’s determinations that it is more logical and economical to combine the workloads being competed at the closing depots is based on a wide variety of information accumulated during the acquisition strategy development process started in September 1995. We recognize that this substantial body of data includes certain information relevant to the determinations required by the National Defense Authorization Act of 1998. We also recognize that the determinations ultimately represent a management judgment based on various qualitative and quantitative factors. However, our assessment of these factors, as presented by the Air Force in its February 24, 1998, Sacramento white paper and San Antonio report shows significant weaknesses in logic, assumptions, and data. Consequently, DOD’s determinations may well be appropriate, but its rationale is not well supported. On April 10, 1998, we provided a draft of this report for comment. DOD informed us that, given the short amount of time available, it chose not to comment on the report at this time. To determine the reasons the Air Force believes it is more logical and economical to combine the workloads at the Sacramento and San Antonio depots, we reviewed the December 19, 1997, reports DOD provided to Congress, as required by 10 U.S.C. 2469a; the Sacramento white paper and San Antonio report provided to Congress on February 24, 1998, which expanded on DOD’s rationale for combining workloads into single solicitations; and other information relevant to the preparation of these reports. 
To analyze the rationale for DOD’s determination, we (1) reviewed information contained in the reports; (2) reviewed documentation and other data supporting the reports; (3) held discussions with Air Force officials responsible for preparing the reports and managing depot maintenance workloads; (4) held discussions with contractor officials who are planning to participate in the competitions for workloads currently performed at the Sacramento and San Antonio depots; (5) held discussions with Air Force Audit Agency officials who provided advice on the preparation of the Sacramento white paper and San Antonio report; (6) reviewed related Air Force studies, reports, and data; (7) drew on our prior work regarding related depot maintenance issues; and (8) reviewed applicable laws and regulations. We conducted our review between February and April 1998 in accordance with generally accepted government auditing standards. We are sending copies of this report to the Secretaries of Defense and the Air Force; the Director, Office of Management and Budget; and interested congressional committees. Copies will also be made available to others on request. If you or your staff have any questions about this report, please contact me at (202) 512-8412. Major contributors to this report are listed in appendix IV. The National Defense Authorization Act for Fiscal Year 1998 contains several depot-related reporting requirements. 1. Report on DOD’s Compliance with 50-Percent Limitation (section 358) The act amends 10 U.S.C. 2466(a) by increasing the amount of depot-level maintenance and repair workload funds that the Department of Defense (DOD) can use for contractors from 40 to 50 percent and revises 10 U.S.C. 2466(e) by requiring the Secretary of Defense to submit a report to Congress identifying the percentage of funds expended for contractors’ performance by February 1 of each year.
Within 90 days of DOD’s submission of its annual report to Congress, we must review the DOD report and report to Congress whether DOD has complied with the 50-percent limitation. 2. Reports Concerning Public-Private Competitions for the Depot Maintenance Workloads at the Closing San Antonio and Sacramento Depots (section 359) The act adds to 10 U.S.C. a new section, 2469a, which provides for special procedures for public-private competitions for the workloads of these two closing depots. It also requires that we report in the following areas: First, the Secretary of Defense is required to submit a determination to Congress if DOD finds it necessary to combine any of the workloads into a single solicitation. We must report our views on the DOD determination within 30 days. Second, we are required to review all DOD solicitations for the workloads at San Antonio and Sacramento and to report to Congress within 45 days of the solicitations’ issuance whether the solicitations provide “substantially equal” opportunity to compete without regard to performance location and otherwise comply with applicable laws and regulations. Third, we must review all DOD awards for the workloads at the two closing Air Logistics Centers and report to Congress within 45 days of the contract awards whether the procedures used complied with applicable laws and regulations and provided a “substantially equal” opportunity to compete without regard to performance location, determine whether “appropriate consideration was given to factors other than cost” in the selection, and ascertain whether the selection resulted in the lowest total cost to DOD for performance of the workload. 
Fourth, the 1998 Defense Authorization Act requires us, within 60 days of its enactment, to review the C-5 aircraft workload competition and subsequent award to the Warner Robins Air Logistics Center and report to Congress on whether the procedures used provided an equal opportunity for offerors to compete without regard to performance location, whether the procedures complied with applicable laws and the Federal Acquisition Regulation, and whether the award resulted in the lowest total cost to DOD. 3. Report on Navy’s Practice of Using Temporary Duty Assignments for Ship Maintenance and Repair (section 366) The act requires us to report by May 1, 1998, on the Navy’s use of temporary duty workers to perform ship maintenance and repairs at homeports not having shipyards. At the time it was identified for closure during the 1995 Base Closure and Realignment (BRAC) process, the Air Force’s Sacramento depot had responsibility for the repair of four aircraft and four commodity groups. The depot also had a significant body of manufacturing or repair work it performed in small quantities for various non-Air Force customers. Additionally, it had a microelectronics facility that performed reverse engineering on parts to provide technical data for manufacturing support parts or for developing repair procedures. Two of the four aircraft repaired at the Sacramento depot will not be included in the competition package—the F-15 and EF-111. F-15 repairs are being consolidated at the Warner Robins depot, which is the F-15 center of excellence and already performs most of the F-15 work. The EF-111 repair requirement is expected to end as the aircraft is phased out of operations. KC-135 and A-10 aircraft requirements are expected to be included in the Sacramento competition package. The KC-135 aircraft is currently repaired at the Oklahoma City depot and at a contractor facility in Birmingham, Alabama.
Table II.1 shows the production hours for 1995, 1996, and 1997 for the KC-135 and A-10 aircraft. The KC-135 workload may be increased in the competition package, but the A-10 workload is expected to decrease and to be erratic as the aircraft is phased out of the inventory. In accordance with a 1995 BRAC Commission decision, the Sacramento depot’s largest commodity grouping—ground communications and electronics—which has a projected workload of about 825,000 hours, is being transitioned to the Tobyhanna Army Depot between 1998 and 2001. The Sacramento depot’s software maintenance workload has declined significantly, and the remaining software work is expected to be transferred outside the competition process to the Ogden depot. The remaining commodity groups currently repaired at Sacramento include hydraulics, instruments and avionics, and electrical accessories. Table II.2 provides an overview of the actual direct labor hours used during fiscal years 1995-97 for the commodity groupings that are currently repaired at the Sacramento depot and are expected to be a part of the competitive package. The Air Force assessed Sacramento’s core capabilities and analyzed the private sector’s repair base. Through this process, which was approved by the Defense Depot Maintenance Council, none of the Sacramento workload was determined to be core. At the time of its closure, the San Antonio depot largely did modifications and repairs of aircraft, turbine engines, and support equipment, and did a smaller amount of work on nuclear ordnance and engine software. The source of repairs for the C-5 aircraft was determined through a separate public-private competition. That workload was won by the Warner Robins depot, which assumed responsibility for the C-5 in November 1997; work-in-process will continue at San Antonio until the summer of 1998. The Warner Robins depot inducted its first C-5 aircraft in January 1998. 
The nuclear ordnance commodity management workload is being transferred outside the competition to the Ogden and Oklahoma City depots and Kirtland Air Force Base, with the bulk of the work going to Ogden. Table II.3 shows a breakout of the San Antonio engine workload based on direct production actual hours for fiscal years 1995 through 1997. For various reasons, the competition for engine workloads will not include all of the workload at the San Antonio depot. For example, the Navy is making independent source-of-repair decisions for its T56 engine workloads. Further, core engine workload will be moved outside the competition process to the Oklahoma City depot. The Air Force assessed the core engine capabilities at the San Antonio and Oklahoma City depots and analyzed private industry’s repair base. As a result of this process, the Air Force determined that it should retain the capability to repair about 24 percent of the annual F100 engine module workload and 50 percent of the workload required to maintain the capability to repair and check out whole engines—or about nine whole engines. Accordingly, the Air Force is moving the F100 core workload to the Oklahoma City depot outside the engine competition. Finally, it is uncertain whether the Air Force could outsource all the engine workload in the competitive package given the statutory limits on the percentage of depot maintenance work that can be performed by the private sector. The Air Force is using a management structure for administering and managing the Sacramento and San Antonio competitions similar to the one it used for the C-5 competition. The structure includes a program office and evaluation team at each center as well as an advisory council and source selection official at Air Force headquarters. The program office has general responsibility for preparing and managing the request for proposals.
The evaluation team will report its assessments to a council made up of representatives from the Office of the Secretary of Defense, Air Force headquarters, and Air Force Materiel Command staff. The council will review the team’s assessment and advise the source selection official. It may be that the individual workloads at the closing San Antonio, Texas, and Sacramento, California, Air Force depots cannot as logically and economically be performed without combination by sources that are potentially qualified to submit an offer and be awarded a contract for individual workloads. However, DOD reports and data do not provide adequate information to support DOD’s determinations. First, DOD has not analyzed the logic and economies associated with having the workload performed individually by potentially qualified offerors. Consequently, it has no support for determining that the individual workloads cannot as logically and economically be performed without combination by sources that would do them individually. Air Force officials stated that they were uncertain as to how they would analyze the performance of workloads on an individual basis. However, Air Force studies indicate that the information to make such an analysis is available. For example, in 1996 the Air Force performed analyses for six depot-level workloads performed by the Sacramento depot to identify industry capabilities and capacity. Individual analyses were accomplished for hydraulics, software, electrical accessories, flight instruments, A-10 aircraft, and KC-135 aircraft depot-level workloads. As a part of these analyses, the Air Force identified sufficient numbers of qualified contractors interested in various segments of the Sacramento workload to support a conclusion that it could rely on the private sector to support the workloads. 
Second, reports and available data did not adequately support DOD’s determinations “that the individual workloads cannot as logically and economically be performed without combination by sources that are potentially qualified to submit an offer and to be awarded a contract to perform those individual workloads.” For example, DOD’s determination report relating to the Sacramento Air Logistics Center, McClellan Air Force Base, California, states that all competitors indicated throughout their Sacramento workload studies that consolidating workloads offered the most logical and economical performance possibilities. This statement was based on studies performed by the offerors as part of the competition process. However, one offeror’s study states that the present competition format is not in the best interest of the government and recommends that the workload be separated into two competitive packages. We were unable to determine whether the other two contractor studies support the statement in the DOD report that all competitors favored consolidating the workloads because the Air Force did not provide us adequate or timely access to the studies cited in the report.

Pursuant to a congressional request, GAO reviewed the Department of Defense’s (DOD) supporting rationale for combining certain depot-level maintenance and repair workloads.
GAO noted that: (1) the Air Force’s support for DOD’s determinations that it is more logical and economical to combine the workloads being competed at the closing depots is based on a wide variety of information accumulated during the acquisition strategy development process started in September 1995; (2) while GAO recognizes that the determinations ultimately represent a management judgment based on various qualitative and quantitative factors and that DOD’s determinations may well be appropriate, the rationale presented in the February 24, 1998, Sacramento white paper and San Antonio report for combining the workloads in single solicitations at each location is not well supported; (3) GAO’s assessment indicates that there are significant weaknesses in logic, assumptions, and data; (4) DOD did not consider other alternatives that appear to be logical and potentially cost-effective, and its assumption that efficiencies from shared personnel and facilities are best achieved with a single solicitation for combined workloads at each location is questionable; (5) also, the Air Force’s claim that the effects of sequential personnel reductions and transition delays can be problematic is questionable in view of DOD’s demonstrated success in the past handling multiple transitions and sequential reductions; (6) in addition, the workload stability rationale for Sacramento is questionable because the inherent inefficiencies of the commodity workload are not likely to be improved by combination with the more predictable and consistent aircraft workload; and (7) finally, the Air Force’s cost analysis, which concluded that workload combination would save $22 million to $130 million at Sacramento and $92 million to $259 million at San Antonio, is questionable because it did not consider all cost factors, such as the cost benefits of increased competition resulting from solicitations for individual workloads.
DOD is a massive and complex organization. To illustrate, the department reported that its fiscal year 2006 operations involved approximately $1.4 trillion in assets and $2.0 trillion in liabilities, more than 2.9 million military and civilian personnel, and $581 billion in net cost of operations. To date, for fiscal year 2007, the department received appropriations of about $501 billion. Organizationally, the department includes the Office of the Secretary of Defense (OSD), the Chairman of the Joint Chiefs of Staff, the military departments, numerous defense agencies and field activities, and various unified combatant commands that are responsible for either specific geographic regions or specific functions. (See fig. 1 for a simplified depiction of DOD’s organizational structure.) In support of its military operations, the department performs an assortment of interrelated and interdependent business functions, including logistics management, procurement, health care management, and financial management. As we have previously reported, the systems environment that supports these business functions is overly complex and error-prone, and is characterized by (1) little standardization across the department, (2) multiple systems performing the same tasks, (3) the same data stored in multiple systems, and (4) the need for data to be entered manually into multiple systems. Moreover, according to DOD, this systems environment comprises approximately 3,100 separate business systems. For fiscal year 2007, Congress appropriated approximately $15.7 billion for DOD to operate, maintain, and modernize these business systems and the associated infrastructures, and for fiscal year 2008, DOD has requested about $15.9 billion in appropriated funds for the same purpose. As we have previously reported, the department’s nonintegrated and duplicative systems impair DOD’s ability to combat fraud, waste, and abuse.
In fact, DOD currently bears responsibility, in whole or in part, for 15 of our 27 high-risk areas. Eight of these areas are specific to DOD, and the department shares responsibility for 7 other governmentwide high-risk areas. DOD’s business systems modernization is one of the high-risk areas, and it is an essential enabler to addressing many of the department’s other high-risk areas. For example, modernized business systems are integral to the department’s efforts to address its financial, supply chain, and information security management high-risk areas. A corporate approach to IT investment management is characteristic of successful public and private organizations. Recognizing this, Congress enacted the Clinger-Cohen Act of 1996, which requires the Office of Management and Budget (OMB) to establish processes to analyze, track, and evaluate the risks and results of major capital investments in IT systems made by executive agencies. In response to the Clinger-Cohen Act and other statutes, OMB has developed policy and issued guidance for the planning, budgeting, acquisition, and management of federal capital assets. We have also issued guidance in this area, which defines institutional structures, such as the IRBs; processes for developing information on investments (such as costs and benefits); and practices to inform management decisions (such as whether a given investment is aligned with an enterprise architecture). IT investment management is a process for linking IT investment decisions to an organization’s strategic objectives and business plans. Consistent with this, the federal approach to IT investment management focuses on selecting, controlling, and evaluating investments in a manner that minimizes risk while maximizing the return on investment.
During the selection phase, the organization (1) identifies and analyzes each project’s risks and returns before committing significant funds to any project and (2) selects those IT projects that will best support its mission needs. During the control phase, the organization ensures that projects, as they develop and investment expenditures continue, meet mission needs at the expected levels of cost and risk. If the project is not meeting expectations or if problems arise, steps are quickly taken to address the deficiencies. During the evaluation phase, expected results are compared with actual results after a project has been fully implemented. This comparison is done to (1) assess the project’s impact on mission performance, (2) identify any changes or modifications to the project that may be needed, and (3) revise the investment management process based on lessons learned. Our ITIM framework consists of five progressive stages of maturity for any given agency relative to selecting, controlling, and evaluating its investment management capabilities. (See fig. 2 for the five ITIM stages of maturity.) This framework is grounded in our research of IT investment management practices of leading private and public sector organizations. The maturity stages are cumulative; that is, to attain a higher stage, an agency must institutionalize all of the critical processes at the lower stages, in addition to the higher stage critical processes. The framework can be used to assess the maturity of an agency’s investment management processes and as a tool for organizational improvement. The overriding purpose of the framework is to encourage investment selection and control and to evaluate processes that promote business value and mission performance, reduce risk, and increase accountability and transparency. We have used the framework in several of our evaluations, and a number of agencies have adopted it. 
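The cumulative maturity rule above can be made concrete with a short sketch. This is purely illustrative, not GAO tooling: the process names and per-stage groupings below are simplified assumptions, and only the cumulative rule itself (an agency attains a stage only when every critical process at that stage and all lower stages is institutionalized) comes from the framework.

```python
# Hypothetical critical processes per ITIM stage. Stage 1 is the ad hoc
# baseline and has no processes to satisfy; names here are placeholders.
CRITICAL_PROCESSES = {
    2: {"board_operations", "business_needs", "proposal_selection",
        "project_oversight", "asset_tracking"},
    3: {"portfolio_criteria", "portfolio_selection", "portfolio_oversight",
        "postimplementation_review"},
    4: {"process_improvement", "succession_management"},
    5: {"strategic_improvement", "technology_monitoring"},
}

def attained_stage(implemented: set) -> int:
    """Return the highest stage whose critical processes, and all
    lower-stage processes, are fully institutionalized."""
    stage = 1  # ad hoc baseline
    for level in sorted(CRITICAL_PROCESSES):
        if CRITICAL_PROCESSES[level] <= implemented:
            stage = level
        else:
            break  # stages are cumulative: a gap here caps maturity
    return stage
```

Note the effect of cumulativeness: an agency performing every Stage 3 practice but missing even one Stage 2 practice still rates only Stage 1 under this rule.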
With the exception of the first stage, each maturity stage is composed of “critical processes” that must be implemented and institutionalized for the organization to achieve that stage. Each ITIM critical process consists of “key practices”—to include organizational structures, policies, and procedures—that must be executed to implement the critical process. It is not unusual for an organization to perform key practices from more than one maturity stage at the same time. However, our research shows that agency efforts to improve investment management capabilities should focus on implementing all lower-stage practices before addressing higher-stage practices. In the ITIM framework, Stage 2 critical processes lay the foundation by establishing successful, predictable, and repeatable investment control processes at the project level. At this stage, the emphasis is on establishing basic capabilities for selecting new IT projects; controlling projects so that they finish predictably within the established cost, schedule, and performance expectations; and identifying and mitigating exposure to risk. Stage 3 is where the agency moves from project-centric processes to portfolio-based processes and evaluates potential investments according to how well they support the agency’s missions, strategies, and goals. This stage focuses on continually assessing both proposed and ongoing projects as part of complete investment portfolios—integrated and competing sets of investment options. It also focuses on maintaining mature, integrated selection (and reselection); control; and postimplementation evaluation processes. This portfolio perspective allows decision makers to consider the interaction among investments and the contributions to organizational mission goals and strategies that could be made by alternative portfolio selections, rather than to focus exclusively on the balance between the costs and benefits of individual investments.
Organizations implementing Stages 2 and 3 practices have in place capabilities that assist in establishing selection, control, and evaluation structures, policies, procedures, and practices that are required by the investment management provisions of the Clinger-Cohen Act. Stages 4 and 5 require the use of evaluation techniques to continuously improve both investment processes and portfolios to better achieve strategic outcomes. At Stage 4, an organization has the capacity to conduct IT succession activities and, therefore, can plan and implement the deselection of obsolete, high-risk, or low-value IT investments. An organization with Stage 5 maturity conducts proactive monitoring for breakthrough technologies that will enable it to change and improve its business performance. DOD’s major system investments (i.e., weapon and business systems) are governed by three management systems—the Joint Capabilities Integration and Development System (JCIDS); the Planning, Programming, Budgeting, and Execution (PPBE) system; and the Defense Acquisition System (DAS). JCIDS is a need-driven, capabilities-based approach to identify warfighting needs and meet future joint forces challenges. It is intended to identify future capabilities for DOD; address capability gaps and mission needs recognized by the Joint Chiefs of Staff or derived from strategic guidance, such as the National Security Strategy Report or Quadrennial Defense Review; and identify alternative solutions by considering a range of doctrine, organization, training, materiel, leadership and education, personnel, and facilities solutions. According to DOD, the Joint Chiefs of Staff, through the Joint Requirements Oversight Council, has primary responsibility for defining and implementing JCIDS. PPBE is a calendar-driven approach that is composed of four phases that occur over a moving 2-year cycle. 
The four phases—planning, programming, budgeting, and executing—define how budgets for each DOD component and the department as a whole are created, vetted, and executed. As recently reported, the components start programming and budgeting for addressing a JCIDS-identified capability gap or mission need several years before actual product development under DAS begins, and before OSD formally reviews the components’ programming and budgeting proposals (i.e., Program Objective Memorandums). Once reviewed and approved, the financial details in the Program Objective Memorandums become part of the President’s budget request to Congress. During budget execution, components may submit program change proposals or budget change proposals, or both (e.g., program cost increases or schedule delays). According to DOD, the OSD Under Secretary of Defense (Policy), the Director for Program Analysis and Evaluation, and the Under Secretary of Defense (Comptroller) have primary responsibility for defining and implementing the PPBE system. DAS is described in the DOD Directive 5000.1 and the DOD Instruction 5000.2 and establishes the procedures for the Defense Acquisition Management Framework, which consists of three event-based milestones associated with five key program life-cycle phases. These five phases are as follows: 1. Concept Refinement: Intended to refine the initial JCIDS-validated system solution (concept) and create a strategy for acquiring the investment solution. A decision is made at the end of this phase (milestone A decision) regarding whether to move to the next phase (Technology Development). 2. Technology Development: Intended to determine the appropriate set of technologies to be integrated into the investment solution by iteratively assessing the viability of various technologies while simultaneously refining user requirements. 
Once the technology has been demonstrated in a relevant environment, a decision is made at the end of this phase (milestone B decision) regarding whether to move to the next phase (System Development and Demonstration). 3. System Development and Demonstration: Intended to develop a system or a system increment and demonstrate through developer testing that the system/system increment can function in its target environment. A decision is made at the end of this phase (milestone C decision) regarding whether to move to the next phase (Production and Deployment). 4. Production and Deployment: Intended to achieve an operational capability that satisfies the mission needs, as verified through independent operational test and evaluation, and ensures that the system is implemented at all applicable locations. 5. Operations and Support: Intended to operationally sustain the system in the most cost-effective manner over its life cycle. A key principle of DAS is that investments are assigned a category, where programs of increasing dollar value and management interest are subject to more stringent oversight. For example, Major Defense Acquisition Programs (MDAP) and Major Automated Information Systems (MAIS) are large, expensive programs subject to the most extensive statutory and regulatory reporting requirements and, unless delegated, are reviewed by acquisition boards at the DOD corporate level. Smaller and less risky acquisitions are generally reviewed at the component executive or lower levels. Another key principle is that DAS requires acquisition management under the direction of a milestone decision authority. The milestone decision authority—with support from the program manager and advisory boards, such as the Defense Acquisition Board and the IT Acquisition Board—determines the project’s baseline cost, schedule, and performance commitments. 
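The event-based flow of the Defense Acquisition Management Framework described above can be sketched as a simple gated sequence. This is an illustrative model only: the phase and milestone names are taken from the text, but the boolean pass/fail inputs are a hypothetical simplification of what are, in practice, milestone decision authority judgments.

```python
# Phases of the Defense Acquisition Management Framework, each paired with
# the milestone decision (A, B, or C) that gates entry into the next phase;
# the last two phases follow from milestone C rather than a new milestone.
PHASES = [
    ("Concept Refinement", "A"),
    ("Technology Development", "B"),
    ("System Development and Demonstration", "C"),
    ("Production and Deployment", None),
    ("Operations and Support", None),
]

def reached_phases(milestone_decisions: dict) -> list:
    """Walk the framework in order, stopping at the first failed (or
    missing) milestone decision. Maps "A"/"B"/"C" to bool."""
    reached = []
    for phase, milestone in PHASES:
        reached.append(phase)
        if milestone is not None and not milestone_decisions.get(milestone, False):
            break  # the milestone decision authority did not approve
    return reached
```

For example, a program that passes milestone A but fails milestone B stops at the end of Technology Development and never enters System Development and Demonstration.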
The Under Secretary of Defense for Acquisition, Technology, and Logistics (USD(AT&L)) has primary responsibility for defining and implementing DAS. DOD’s business system investments are also governed by a fourth management system that addresses how these investments are reviewed, certified, and approved for compliance with the business enterprise priorities and activities outlined by the business enterprise architecture (BEA). For the purposes of this report, we refer to this fourth management system as the Business Investment Management System. This fourth management system is described in the following text in terms of governance entities, tiered accountability, and business system investment certification reviews and approvals. According to DOD, these four management systems are the means by which DOD selects, controls, and evaluates its business system investments. In 2005, the department reassigned responsibility for providing executive leadership for the direction, oversight, and execution of its business systems modernization efforts to several entities. These entities and their responsibilities include the following: The Defense Business Systems Management Committee (DBSMC) serves as the highest-ranking governance body for business systems modernization activities. The Principal Staff Assistants serve as the certification authorities for business system modernizations in their respective core business missions. The IRBs are chartered by the Principal Staff Assistants and are the review and decision-making bodies for business system investments in their respective areas of responsibility. The component pre-certification authority (PCA) is accountable for the component’s business system investments and acts as the component’s principal point of contact for communication with the IRBs. The Business Transformation Agency (BTA) is responsible for leading and coordinating business transformation efforts across the department. 
The BTA is organized into seven directorates, one of which is the Defense Business Systems Acquisition Executive (DBSAE)—the component acquisition executive for DOD enterprise-level (DOD-wide) business systems and initiatives. This directorate is responsible for developing, coordinating, and integrating enterprise-level projects, programs, systems, and initiatives—including managing resources such as fiscal, personnel, and contracts for assigned systems and programs. Table 1 lists these entities and provides greater detail on their roles, responsibilities, and composition. Figure 3 provides a simplified illustration of the relationships among these entities. According to DOD, in 2005 it adopted a tiered accountability approach to business transformation. Under this approach, responsibility and accountability for business investment management is allocated between the DOD corporate (i.e., OSD) and the components on the basis of the amount of development/modernization funding involved and the investment’s “tier.” DOD corporate is responsible for ensuring that all business systems with a development/modernization investment in excess of $1 million are reviewed by the IRBs for compliance with the BEA, certified by the Principal Staff Assistants, and approved by the DBSMC. Components are responsible for certifying development/modernization investments with total costs of $1 million or less. All DOD development and modernization efforts are also assigned a tier on the basis of the acquisition category or the size of the financial investment, or both. According to DOD, a system is given a tier designation when it passes through the certification process. Table 2 describes the four investment tiers and identifies the associated reviewing and approving entities. DOD’s business investment management system includes two types of reviews for business systems: certification and annual reviews. 
Certification reviews apply to new modernization projects with total cost over $1 million. This review focuses on program alignment with the BEA and must be completed before components obligate funds for programs. The annual review applies to all business programs. The focus for the annual review is to determine whether the system development effort is meeting its milestones and addressing its IRB certification conditions. Certification reviews and approvals: Tiers 1 through 3 business system investments are certified at two levels—component-level precertification and corporate-level certification and approval. At the component level, program managers prepare, enter, maintain, and update information about their investments in the DOD IT Portfolio Repository (DITPR), such as regulatory compliance reporting, an architectural profile, and requirements for investment certification and annual reviews. The component PCA validates that the system information is complete and accessible on the IRB Portal, reviews system compliance with the BEA and enterprise transition plan, and verifies the economic viability analysis. The PCA asserts the status and validity of the investment information by submitting a component precertification letter to the appropriate IRB for its review. At the corporate level, the IRB reviews the system information and precertification letter submitted by the PCA to determine whether to recommend investment certification. On completion of its review, a certification memorandum is prepared and signed by the designated certification authority that documents the IRB’s system certification decisions and any related conditions. The memorandum is then forwarded to the DBSMC, which either approves or disapproves the IRB’s decisions and issues a memorandum containing its decisions. If the DBSMC disapproves a system investment, it is up to the component PCA to decide whether to resubmit the investment after it has resolved the relevant issues. 
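The dollar-threshold routing in the certification process above can be summarized in a short sketch. This is a hypothetical helper, not DOD software: the step labels condense the component precertification, IRB certification, and DBSMC approval sequence described in the text, and the threshold rule ("in excess of $1 million" goes to corporate-level review) is the only substantive input.

```python
THRESHOLD = 1_000_000  # development/modernization dollars

def certification_path(modernization_cost: int) -> list:
    """Return the review steps for a new modernization investment,
    based on the $1 million development/modernization threshold."""
    if modernization_cost > THRESHOLD:
        # Corporate level: component PCA precertification, IRB review
        # and certification by the designated authority, then DBSMC
        # approval before funds may be obligated.
        return ["component PCA precertification",
                "IRB review and certification",
                "DBSMC approval"]
    # At or below the threshold, the component certifies on its own.
    return ["component certification"]
```

Under this rule, a $2.5 million modernization must clear DBSMC approval before funds are obligated, while a $500,000 effort is certified entirely at the component level.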
Figure 4 provides a simplified overview of the process flow of certification reviews and approvals. Annual reviews: Tiers 1 through 4 business system investments are annually reviewed at two levels—the component level and the corporate level. At the component level, program managers review and update information on all tiers of investments, both in modernization and operations and maintenance, on an annual basis in DITPR. The updates for Tiers 1 through 3 with system development/modernization include cost, milestone, and risk variances and actions or issues related to certification conditions. The PCA then verifies and submits the information for Tiers 1 through 3 systems in development/modernization for IRB review in an annual review assertion letter. The letter addresses system compliance with the BEA and the enterprise transition plan, and includes investment cost, schedule, and performance information. At the corporate level, the IRBs annually review certified Tiers 1 through 3 investments in development/modernization. These reviews focus on program compliance with the BEA, program performance against cost and milestone baselines, and progress in meeting certification conditions. The IRBs can revoke an investment’s certification when the system has significantly failed to achieve performance commitments (i.e., capabilities and costs). When this occurs, the component must address the IRB’s concerns and resubmit the investment for certification. Figure 5 shows a simplified overview of the process flow of annual reviews. 
According to our ITIM framework, organizations should establish the management structures needed to manage their investments and build an investment foundation by having defined policies and procedures for selecting and controlling individual projects (Stage 2 capabilities). Organizations also should manage projects as a portfolio of investments according to defined policies and procedures, treating them as an integrated package of competing investment options and pursuing those that best meet the strategic goals, objectives, and mission of the agency (Stage 3 capabilities). These Stages 2 and 3 capabilities assist agencies in complying with the investment management provisions of the Clinger-Cohen Act. The department has defined four of nine practices that call for project-level policies and procedures (see table 4) and one of the five practices that call for portfolio-level policies and procedures (see table 6). Specifically, it has established the management structures contained in our ITIM framework, but it has not fully defined many of the related policies and procedures. With respect to project-level investment management practices, DOD officials stated that these are performed at the component level, and that departmental policies and procedures established for overseeing components’ execution of these practices are sufficient. With respect to portfolio-level practices, however, these officials stated that they intend to improve departmental policies and procedures for business system investments by, for example, establishing a single governance structure, but plans or time frames for doing so have not been established.
According to our ITIM framework, adequately documenting both the policies and the associated procedures that govern how an organization manages its IT investment portfolio(s) is important because doing so provides the basis for having rigor, discipline, and repeatability in how investments are selected and controlled across the entire organization. Until DOD fully defines departmentwide policies and procedures for both individual projects and the portfolios of projects, it risks selecting and controlling these business system investments in an inconsistent, incomplete, and ad hoc manner, which in turn reduces the chances that these investments will meet mission needs in the most cost-effective manner. At ITIM Stage 2, an organization has attained repeatable and successful IT project-level investment control and basic selection processes. Through these processes, the organization can identify project expectation gaps early and take the appropriate steps to address them. ITIM Stage 2 critical processes include (1) defining investment board operations, (2) identifying the business needs for each investment, (3) developing a basic process for selecting new proposals and reselecting ongoing investments, (4) developing project-level investment control processes, and (5) collecting information about existing investments to inform investment management decisions. Table 3 describes the purpose of each of these Stage 2 critical processes. Within these five critical processes are nine key practices that call for policies and procedures associated with effective project-level management. DOD has fully defined the policies and procedures needed to ensure that four of these nine practices are performed in a consistent and repeatable manner. 
Specifically, DOD has established the management structures by instituting an enterprisewide investment board—the DBSMC—composed of senior executives, including the Deputy Secretary of Defense, with final approval authority over associated subsidiary investment boards. These lower-level investment boards include representatives from combatant commands, components, and the Joint Chiefs of Staff. In addition, DOD’s business transformation and IRB guidance define a process for ensuring that programs support the department’s ongoing and future business needs. DOD also has policies and procedures for submitting, updating, and maintaining investment information in DITPR and the IRB Portal. Furthermore, the department has assigned the component’s PCA the responsibility to ensure that specific investment information contained in the portfolio repository and the IRB Portal is accurate and complete. However, the policies and procedures associated with the remaining five project-level management practices are missing critical elements needed to effectively carry out essential investment management activities. For example: Policies and procedures for instituting the investment board do not address how investments that are past the development/modernization stage (i.e., in operations and maintenance) are to be governed. Given that DOD invests billions of dollars annually in operating and maintaining business systems, this is significant. While DOD officials stated that component-level policies and procedures address systems outside of development/modernization, our ITIM framework emphasizes that the corporate investment boards should continue to review important information about an investment, such as cost and performance baselines, throughout the investment’s life cycle. In addition, the IRB Concept of Operations and other IRB documentation do not explicitly outline how the business investment management system is coordinated with JCIDS, PPBE, and DAS. 
Without clearly defined visibility into all investments and an understanding of decisions reached through other management systems, inconsistent decisions may result. Procedures do not specify how the full range of cost, schedule, and benefit data is used by the IRBs in making selection (i.e., certification) decisions. According to BTA officials, each IRB decides how to ensure compliance and determines additional factors to consider when making certification decisions. However, DOD did not provide us with any supplemental policies or procedures for any of the four IRBs. Without documenting how IRBs consider factors such as cost, schedule, and benefits when making selection decisions, the department cannot ensure that the IRBs and the DBSMC consistently and objectively select proposals that best meet the department’s needs and priorities. Furthermore, while the procedures specify decision criteria that address statutory requirements for alignment to the BEA, the criteria allow programs to postpone demonstrating full compliance with several BEA artifacts until the final phases of the acquisition process. As a result, programs risk beginning production and deployment before ensuring that a business system is fully aligned to the BEA. Policies and procedures do not specify how reselection decisions at the corporate level (i.e., annual review decisions) consider investments that are in operations and maintenance. Without an understanding of how the IRBs are to consider these investments when making reselection decisions, their ability to make informed and consistent reselection and termination decisions is limited. Policies and procedures do not specify how funding decisions are integrated with the process of selecting an investment at the corporate level.
Without considering component and corporate budget constraints and opportunities, the IRBs risk making investment decisions that do not effectively consider the relative merits of various projects and systems when funding limitations exist. Policies and procedures do not exist that provide for sufficient oversight and visibility into component-level investment management activities, including component reviews of systems in operations and maintenance and Tier 4 investments. According to DOD officials, investment oversight is implemented through tiered accountability, which, among other things, allocates responsibility and accountability for business system investments with total costs of $1 million or less and those in operations and maintenance to the components. However, the department did not provide policies and procedures defining how the DBSMC and the IRBs ensure visibility into these component processes. This is particularly important because, according to DOD’s March 15, 2007, annual report to Congress, only 285 of approximately 3,100 total business systems have completed the IRB certification process and have been approved by the DBSMC. DOD officials also stated that the remaining business systems have not been through the certification process and have not been given a tier designation. Without policies and procedures defining how the DBSMC and the IRBs have visibility into and oversight of all business system investments, DOD risks components continuing to invest in systems that are duplicative, stovepiped, nonintegrated, and unnecessarily costly to manage, maintain, and operate. Table 4 summarizes our findings relative to DOD’s execution of the nine practices that call for the policies and procedures needed to manage IT investments at the project level. 
According to BTA officials, the IRB Concept of Operations and the Investment Certification and Annual Review Process User Guidance are not intended to describe the detailed approach that each IRB will use when making certification decisions, adding that the components are responsible for selection, annual review, budgeting, and acquisition. While the ITIM framework does allow for multiple entities to carry out investment selection, control, and evaluation, building a sound investment foundation requires that the enterprisewide investment review board has documented criteria and decision-making procedures, clear integration among investment decision-support systems, and policies to ensure board access to system information throughout the life cycle for all investments. Until DOD’s documented IT investment management policies and procedures include fully defined policies and procedures for Stage 2 activities, specify the linkages between the various related processes, and describe how investments are to be governed in the operations and maintenance phase, DOD risks that investment management activities will not be carried out consistently and in a disciplined manner. Moreover, DOD also risks selecting investments that will not cost-effectively meet its mission needs. At Stage 3, an organization has defined critical processes for managing its investments as a portfolio or set of portfolios. Portfolio management is a conscious, continuous, and proactive approach to allocating limited resources among competing initiatives in light of the investments’ relative benefits. Taking an agencywide perspective enables an organization to consider its investments comprehensively, so that collectively the investments optimally address the organization’s missions, strategic goals, and objectives. 
Managing IT investments as portfolios also allows an organization to determine its priorities and make decisions about which projects to fund on the basis of analyses of the relative organizational value and risks of all projects, including projects that are proposed, under development, and in operation. Although investments may initially be organized into subordinate portfolios—on the basis of, for example, business lines or life-cycle stages—and managed by subordinate investment boards, they should ultimately be aggregated into enterprise-level portfolios. According to ITIM, Stage 3 involves (1) defining the portfolio criteria; (2) creating the portfolio; (3) evaluating (i.e., overseeing) the portfolio; and (4) conducting postimplementation reviews. Table 5 summarizes the purpose of each of these activities. DOD is executing one of the five practices within these four critical processes that call for policies and procedures associated with effective portfolio-level management. Specifically, DOD has issued departmentwide guidance that assigns responsibilities to the USD(AT&L) for managing and establishing business system investment portfolios, including leveraging or establishing a governance forum to oversee these business system investment portfolio activities. However, DOD has not fully defined the policies and procedures needed to effectively execute the remaining four portfolio management practices relative to business system investments. Specifically, DOD does not have policies and procedures for defining the portfolio criteria or for creating and evaluating the portfolio. In addition, while DOD has policies and procedures for conducting postimplementation reviews as part of DAS, these reviews do not address systems at all tier levels. Furthermore, there are no procedures detailing how lessons learned from these reviews are used during investment review as the basis for management and process improvements.
Table 6 summarizes the rating for each critical process required to manage investment as a portfolio and summarizes the evidence that supports these ratings. According to BTA officials, while portfolio management is primarily a component responsibility, they are working toward developing more effective departmentwide portfolio management processes, but plans or time frames for doing so have not been established. Without defining corporate policies and procedures for managing business system investment portfolios, DOD is at risk of not consistently selecting the mix of investments that best supports the departmentwide mission needs and ensuring that investment-related lessons learned are shared and applied departmentwide. Given the importance of business systems modernization to DOD’s mission, performance, and outcomes, it is vital for the department to adopt and employ an effective institutional approach to managing business system investments. While the department has established aspects of such an approach and, thus, has a foundation on which to build, it is lacking other important elements, such as specific policies and procedures needed for project-level and portfolio-level investment management, including integration with DOD’s other key management systems and sufficient oversight and visibility into operations and maintenance investments and Tier 4 investments. This means that DOD lacks an institutional capability to ensure that it is investing in business systems that best support its strategic needs, and that ongoing projects meet cost, schedule, and performance expectations. Until DOD develops this capability, the department will be impaired in its ability to optimize business mission area performance and accountability. 
To strengthen DOD’s business system investment management capability and address the weaknesses discussed in this report, we recommend that the Secretary of Defense direct the Deputy Secretary of Defense, as the chair of the DBSMC, to ensure that well-defined and disciplined business system investment management policies and procedures are developed and issued. At a minimum, this should include project-level management policies and procedures that address the following five areas: instituting the investment boards, including assigning the investment boards responsibility, authority, and accountability for programs throughout the investment life cycle and specifying how the business investment management system is coordinated with JCIDS, PPBE, and DAS; selecting new investments, including specifying how cost, schedule, and benefit data are to be used in making certification decisions; defining the criteria used to select investments as enterprisewide; and establishing consistent and effective guidance for BEA compliance; reselecting ongoing investments, including specifying how cost, schedule, and performance data are to be used in the annual review process and providing for the reselection of investments that are in operations and maintenance; integrating funding with the process of selecting an investment, including specifying how the DBSMC and the IRBs use funding information in carrying out decisions on system certification and approvals; and overseeing IT projects and systems, including providing sufficient oversight and visibility into component-level investment management activities. 
These well-defined and disciplined business system investment management policies and procedures should also include portfolio-level management policies and procedures that address the following four areas: creating and modifying IT portfolio selection criteria for business system investments; analyzing, selecting, and maintaining business system investment portfolios; reviewing, evaluating, and improving the performance of its portfolio(s) by using project indicators, such as cost, schedule, and risk; and conducting postimplementation reviews for all investment tiers and directing the investment boards, which are accountable for corporate business system investments, to consider the information gathered and to develop lessons learned from these reviews. In written comments on a draft of this report, signed by the Deputy Under Secretary of Defense (Business Transformation) and reprinted in appendix II, the department stated that it agreed with the report’s overall conclusions, and it described efforts under way and planned that it said would address many of the gaps identified in the report. In this regard, the department partially concurred with five of the report’s recommendations, adding that our recommendations and feedback are helpful in guiding DOD’s business transformation and related improvement efforts. Nevertheless, the department disagreed with the remaining four recommendations on the grounds that their intent had already been met through DOD’s existing business system investment management structure and processes, or that they contradicted the tiered accountability concept embedded in this structure and processes. The department’s comments relative to each of our project-level and portfolio-level recommendations, along with our responses to its comments, are provided below. With respect to our five project-level recommendations, the department stated that it partially agreed with two and disagreed with three.
DOD partially agreed with our recommendation to define and implement policies and procedures that assign the investment boards responsibility for programs throughout the investment life cycle and specify how the business investment management system is coordinated with JCIDS, PPBE, and DAS. In particular, it stated that under its tiered accountability approach to business systems investment management, the components are currently required to review all programs throughout their investment life cycles. We do not question this requirement, and we recognize it in our report. However, consistent with our ITIM framework, the corporate investment boards should continue to review investments that meet the defined threshold criteria throughout their life cycles (i.e., when they are in operations and maintenance). In contrast, DOD’s corporate boards focus only on those investments that are in the development/modernization stage. The department also stated that a linkage is currently depicted in existing guidance among its investment selection, acquisition, and funding processes. While we do not question that this guidance contains an illustration depicting such a link, neither this guidance nor supporting procedures define how this linkage is executed (e.g., how investment funding decisions are in fact integrated with investment selection decisions). DOD’s comments appear to acknowledge this point by stating that the department has begun to define and implement a Business Capability Lifecycle concept, which is intended to integrate the investment selection and acquisition management processes for Tier 1 and enterprise systems into a single oversight process that leverages the existing IRB and DBSMC oversight framework. 
DOD partially agreed with our recommendation to define and implement policies and procedures that specify how cost, schedule, and benefit data are to be used in making certification and annual review decisions; define the criteria used to select investments as enterprisewide; and establish consistent and effective guidance for BEA compliance. In particular, the department agreed that additional criteria are required for selecting enterprisewide investments, noting that initial criteria have been defined and will be incorporated in the investment management process. However, the department did not agree that cost, schedule, and BEA compliance information are not sufficiently used for certification and annual review decisions, adding that such information is required in its current policies. We do not agree. Specifically, while we do not question whether investment data are provided to the DBSMC and the IRBs, the department’s policies and procedures do not include specific decision criteria that explain how these data are to be used to make consistent, repeatable selection and reselection decisions across all investments. In addition, while BEA compliance policies have been developed and are being used, the guidance is not fully defined. For example, the guidance allows programs to defer demonstrating full compliance with important BEA artifacts until the final phases of the acquisition process, at which time addressing instances of noncompliance would be more expensive and difficult. Furthermore, the compliance criteria are not consistently described in different guidance documentation. As a result, DOD risks beginning system production and deployment before ensuring that a system is sufficiently aligned to the BEA. DOD did not agree with our recommendation to define and implement policies and procedures that provide for the reselection of investments that are in operations and maintenance. 
According to DOD, components are required by policy to annually review all business systems, including investments for which there is no planned development or modernization spending. We agree that the annual review process does require this. However, consistent with our ITIM framework, the corporate investment boards should continue to reselect investments that meet the defined threshold criteria throughout their life cycles (i.e., when they are in operations and maintenance). In contrast, DOD’s corporate boards focus only on reselecting those investments that are in the development/modernization stage. DOD did not agree with our recommendation to define and implement policies and procedures that specify how the corporate boards use funding information in carrying out decisions on system certification and approvals. In this regard, it stated that such information is required in its current policies and considered during board deliberations. We do not agree. Our recommendation does not address whether existing policies or guidance provide for the collection of this information; our recommendation addresses the definition of policy, guidance, and supporting procedures that fall short of satisfying the best practices embodied in our ITIM framework. Specifically, while we do not question whether funding data are provided to investment decision-making bodies, the department’s policies and procedures do not include specific decision criteria that explain how these data are to be used to make consistent, repeatable selection and reselection decisions across all investments. DOD did not agree with our recommendation to define and implement policies and procedures that provide for sufficient oversight and visibility into component-level investment management activities. In particular, it stated that this recommendation contradicts the department’s “tiered accountability” approach to investment management. We do not agree. 
Under the department’s current policies and guidance, most DOD investments are not subject to corporate visibility and oversight, either because they do not involve development/modernization (i.e., they are in operations and maintenance) or because they do not exceed a certain dollar threshold. Our framework recognizes that effective implementation of a tiered accountability concept should include appropriate corporate visibility into and oversight of investments, either through review and approval of those investments that meet certain criteria or through awareness of a subordinate board’s investment management activities. Moreover, this visibility and oversight should extend to the entire portfolio of investments, including those that are in operations and maintenance. To ensure that this occurs, applicable policies and procedures need to explicitly cover all such investments and need to define how this is to be accomplished. With respect to our four portfolio-level recommendations, the department stated that it partially agreed with three and disagreed with one. DOD partially agreed with our recommendation to define and implement policies and procedures for creating and modifying portfolio selection criteria for business system investments. In particular, it stated that while components are responsible for developing and managing their own portfolio management processes, upcoming initiatives, such as the Business Capability Lifecycle concept, will lead to revisions in the department’s investment review policies and procedures, such as including portfolio selection criteria for enterprise systems that span components. However, while these are important steps, the concept, as defined by the department, does not apply to the thousands of investments that are not enterprisewide. DOD partially agreed with our recommendation to define and implement policies and procedures that address analyzing, selecting, and maintaining business system investment portfolios. 
In particular, it stated that the implementation of the Business Capability Lifecycle concept will provide the corporate boards with improved visibility into all investments in a given portfolio and a broader set of criteria for analyzing, selecting, and maintaining business system investment portfolios. DOD partially agreed with our recommendation to define and implement policies and procedures that address reviewing, evaluating, and improving the performance of its portfolio(s) by using cost, schedule, and risk indicators. In particular, it stated that while such indicators are part of the investment certification and review processes, efforts are now under way to better understand the nature and impact of program risks through application of an Enterprise Risk Assessment Methodology. While we recognize the role and value of such tools in understanding and addressing program risks, this tool is program-specific and not portfolio-focused. DOD did not agree with our recommendation to define and implement policies and procedures that address conducting postimplementation reviews and having the corporate investment boards consider the review results and develop lessons learned from them. In particular, it stated that this process should not be managed by the Deputy Secretary of Defense, and also stated that our recommendation is redundant with postimplementation reviews currently required under OMB Circular A-130. We do not agree with DOD’s statements. Our recommendation does not call for the Deputy Secretary to manage the postimplementation review process. Rather, it provides for developing policies and procedures for performing postimplementation reviews for all tiers of business systems and having the DBSMC and IRBs, which are the corporate investment boards, consider the information gathered from these reviews and develop lessons learned.
We are sending copies of this report to interested congressional committees; the Director, Office of Management and Budget; the Secretary of Defense; the Deputy Secretary of Defense; the Under Secretary of Defense for Acquisition, Technology, and Logistics; the Under Secretary of Defense (Comptroller); the Assistant Secretary of Defense (Networks and Information Integration)/Chief Information Officer; the Under Secretary of Defense (Personnel and Readiness); and the Director, Defense Finance and Accounting Service. Copies of this report will be made available to other interested parties upon request. This report will also be available at no charge on our Web site at http://www.gao.gov. If you or your staffs have any questions on matters discussed in this report, please contact me at (202) 512-3439 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. Our objective was to determine whether the Department of Defense’s (DOD) corporate investment management approach comports with relevant federal guidance. Our analysis was based on the best practices contained in GAO’s Information Technology Investment Management (ITIM) framework, and the framework’s associated evaluation methodology, and focused on DOD’s establishment of departmental-level policies and procedures for business system investments needed to assist organizations in complying with the investment management provisions of the Clinger-Cohen Act of 1996 (Stages 2 and 3). It did not include case studies to verify the implementation of established policies and procedures. To address our objective, we asked DOD to complete a self-assessment of its corporate investment management process and provide the supporting documentation. 
We then reviewed the results of the department’s self-assessment of Stages 2 and 3 organizational commitment practices—meaning those practices related to structures, policies, and procedures—and compared them against our ITIM framework. We also validated and updated the results of the self-assessment through document reviews and interviews with officials, such as the Director of Investment Management and the Defense Business Systems Acquisition Executive. In doing so, we reviewed written policies, procedures, and guidance and other documentation providing evidence of executed practices, including the Defense Acquisition System guidance, the Investment Review Board (IRB) Concept of Operations and Guidance, the Business Enterprise Architecture Compliance Guidance, IRB charters and meeting minutes, and the Business Transformation Guidance. We compared the evidence collected from our document reviews and interviews with the key practices in ITIM. We rated the key practices as “executed” on the basis of whether the agency demonstrated (by providing evidence of performance) that it had met all of the criteria of the key practice. A key practice was rated as “not executed” when we found insufficient evidence of all elements of a practice being fully performed or when we determined that there were significant weaknesses in DOD’s execution of the key practice. In addition, we provided DOD with the opportunity to produce evidence for the key practices rated as “not executed.” We conducted our work at DOD headquarters offices in Arlington, Virginia, from August 2006 through April 2007 in accordance with generally accepted government auditing standards. In addition to the contact person named above, key contributors to this report were Neil Doherty, Nalani Fraser, Nancy Glover, Michael Holland, Neelaxi Lakhmani (Assistant Director), Jacqueline Mai, Sabine Paul, Niti Tandon, and Jennifer Stavros-Turner.
| In 1995, GAO first designated the Department of Defense's (DOD) business systems modernization program as "high-risk," and continues to do so today. In 2004, Congress passed legislation reflecting prior GAO recommendations for DOD to adopt a corporate approach to information technology (IT) business system investment management. To support GAO's legislative mandate to review DOD's efforts, GAO assessed whether the department's corporate investment management approach comports with relevant federal guidance. In doing so, GAO applied its IT Investment Management framework and associated methodology, focusing on the framework's stages related to the investment management provisions of the Clinger-Cohen Act of 1996. DOD has established the management structures needed to effectively manage its business system investments, but it has not fully defined many of the related policies and procedures that GAO's IT Investment Management framework defines. Specifically, the department has defined four of nine practices that call for project-level policies and procedures, and one of the five practices that call for portfolio-level policies and procedures. For example, DOD has established an enterprisewide IT investment board responsible for defining and implementing its business system investment governance process, documented policies and procedures for ensuring that systems support ongoing and future business needs, developed procedures for identifying and collecting information about these systems to support investment selection and control, and assigned responsibility to an individual or a group for managing the development and modification of the business system portfolio selection criteria. 
However, DOD has not fully documented business system investment policies and procedures for directing investment board operations, selecting new investments, reselecting ongoing investments, integrating the investment funding and the investment selection processes, and developing and maintaining a complete business system investment portfolio(s). Regarding project-level investment management practices, DOD officials said that these are performed at the component level, and that departmental policies and procedures established for overseeing components' execution of these practices are sufficient. For portfolio-level practices, however, these officials stated that they intend to improve departmental policies and procedures for business system investments by, for example, establishing a single governance structure, but plans or time frames for doing so have not been established. Until DOD fully defines departmentwide policies and procedures for both individual projects and portfolios of projects, it risks selecting and controlling these business system investments in an inconsistent, incomplete, and ad hoc manner, which in turn reduces the chances that these investments will meet mission needs in the most cost-effective manner. |
FNS’ quality control system measures the states’ performance in accurately determining food stamp eligibility and calculating benefits. Under this system, the states calculate their payment errors by annually drawing a statistically valid sample of between 300 and 1,200 active cases, depending on the average monthly caseload; by reviewing the case information; and by making home visits to determine whether households were eligible for benefits and received the correct benefit payment. FNS regional offices validate the results by reviewing a subset of each state’s sample to determine its accuracy, making adjustments to the state’s overpayment and underpayment errors as necessary. To determine each state’s combined payment error rate, FNS adds overpayments and underpayments, then divides the sum by total food stamp benefit payments. As shown in figure 1, the national combined payment error rate for the Food Stamp Program was consistently above 9 percent from fiscal year 1993 through fiscal year 1999. About 70 percent of the food stamp payment errors resulted in overpayments to recipients, while about 30 percent resulted in underpayments. FNS’ payment error statistics do not account for the states’ efforts to recover overpayments; in fiscal year 1999, the states collected $213 million in overpayments. (See app. II for information about states’ error rates and collections of overpayments.) Errors in food stamp payments occur for a variety of reasons. For example, food stamp caseworkers may miscalculate a household’s eligibility and benefits because of the program’s complex rules for determining who are members of the household, whether the value of a household’s assets (mainly vehicles and bank accounts) is less than the maximum allowable, and the amount of a household’s earned and unearned income and deductible expenses. 
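The error-rate arithmetic described above is straightforward to sketch; the dollar figures in this example are hypothetical, not drawn from the report:

```python
def combined_error_rate(overpayments, underpayments, total_benefits):
    """FNS' combined payment error rate: the sum of overpayments and
    underpayments divided by total food stamp benefit payments."""
    return (overpayments + underpayments) / total_benefits

# Hypothetical state: $60M overpaid, $25M underpaid, $900M in total benefits.
rate = combined_error_rate(60e6, 25e6, 900e6)
print(f"{rate:.1%}")  # 9.4%
```

Note that because overpayments and underpayments are added rather than netted, a state cannot lower its combined rate by letting errors in opposite directions offset each other.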
Concerning the latter, food stamp rules require caseworkers to determine a household’s gross monthly income and then calculate a net monthly income by determining the applicability of six allowable deductions: a standard deduction, an earned income deduction, a dependent care deduction, a medical deduction, a child support deduction, and an excess shelter cost deduction. (See app. III for the factors that state caseworkers consider in calculating a household’s excess shelter cost deduction.) The net income, along with other factors such as family size, becomes the basis for determining benefits. Other payment errors occur after benefits have been determined primarily because households do not always report changes in income that can affect their benefits and the states do not always act on reported changes, as required by food stamp law. To reduce the likelihood of payment errors, FNS regulations require that states certify household eligibility at least annually, and establish requirements for households to report changes that occur after certification. In certifying households, states are required to conduct face-to-face interviews, typically with the head of the household, and obtain pertinent documentation at least annually. In establishing reporting requirements, the states have the option of requiring households to use either (1) monthly reporting, in which households with earned income file a report on their income and other relevant information each month; or (2) change reporting, in which all households report certain changes, including income fluctuations of $25 or more, within 10 days of the change. According to FNS, many states have shifted from monthly reporting to change reporting because of the high costs associated with administering a monthly reporting system. However, change reporting is error-prone because households do not always report changes and the states do not always act on them in a timely fashion, if at all. 
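The net-income step can be illustrated with a minimal sketch. The six deduction categories are taken from the text above; every dollar amount is hypothetical, and the real food stamp rules cap or condition several of these deductions in ways this sketch omits:

```python
# The six allowable deduction categories named in the report; real rules
# cap or condition several of them (e.g., the excess shelter deduction).
DEDUCTIONS = ["standard", "earned_income", "dependent_care",
              "medical", "child_support", "excess_shelter"]

def net_monthly_income(gross_income, deductions):
    """Gross monthly income minus whichever of the six deductions apply."""
    applied = sum(deductions.get(name, 0) for name in DEDUCTIONS)
    return max(gross_income - applied, 0)

# Hypothetical household: $1,200 gross, three deductions apply.
household = {"standard": 134, "earned_income": 180, "excess_shelter": 250}
print(net_monthly_income(1200, household))  # 636
```

Even in this stripped-down form, the calculation shows why errors concentrate here: each deduction is a separate eligibility and documentation question that the caseworker must resolve before the subtraction is meaningful.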
Each of the 28 states we contacted has taken many actions to reduce payment error rates. Further, 80 percent of the states took each of five actions: (1) case file reviews by supervisors or special teams to verify the accuracy of food stamp benefit payments, (2) special training for caseworkers, (3) analyses of quality control data to identify causes of payment errors, (4) electronic database matching to identify ineligible participants and verify income and assets, and (5) use of computer software programs to assist caseworkers in determining benefits. It is difficult to link a specific state action to its effect on error rates because other factors also affect error rates. However, almost all state food stamp officials cited case file reviews by supervisors and others as being one of their most effective tools for reducing error rates. Additionally, state officials most often cited the competing pressure of implementing welfare reform as the primary challenge to reducing food stamp payment errors in recent years. The following subsection summarizes our findings on state actions to reduce payment errors. Case file reviews to verify payment accuracy: In 26 of the 28 states we contacted, supervisors or special teams reviewed case files to verify the accuracy of benefit calculations and to correct any mistakes before the state’s quality control system identified them as errors. Supervisory reviews, used by 22 states, typically require that supervisors examine a minimum number of files compiled by each caseworker. For example, Alaska requires monthly supervisory review of five cases for each experienced caseworker and all cases for each new caseworker. Furthermore, 20 states, including many of the states using supervisory review, use special teams to conduct more extensive reviews designed to identify problems in specific offices, counties, or regions. 
Reviewers correct mistakes before they are detected as quality control errors, where possible; identify the reasons for the mistakes; and prescribe corrective actions to prevent future errors. For example, in Genesee County, Michigan, the teams read about 2,800 case files, corrected errors in nearly 1,800, and provided countywide training in such problem areas as shelter expenses and earned income. In Massachusetts, caseworkers reviewed all case files in fiscal year 2000 because of concerns that the state’s error rate would exceed the national average and that FNS would impose financial sanctions. Massachusetts corrected errors in about 13 percent of the case files reviewed; these would have been defined as payment errors had they been identified in a quality control review. Special training for caseworkers: In addition to the training provided to new caseworkers, 27 states provided a range of training for new and experienced caseworkers aimed at reducing payment errors. For example, these states conducted training specifically targeted to calculating benefits for certain categories of food stamp households, such as those with earned income or those with legal noncitizens, for which rules are more likely to be misapplied. Many states also conducted training to update caseworkers and supervisors on food stamp policy changes that affect how benefits are calculated; new policies often introduce new calculation errors because caseworkers are unfamiliar with the revised rules for calculating benefits, according to several state officials. Analysis of quality control data: Twenty-five states conducted special analyses of their quality control databases to identify common types of errors made in counties or local offices for use in targeting corrective actions. For example, California created a quality control database for the 19 largest of its 54 counties and generated monthly reports for each of the 19 counties to use. 
Georgia assigned a staff member to review each identified quality control error and work with the appropriate supervisor or worker to determine why the error occurred and how it could be prevented in the future. With this process, officials said, counties are much more aware of their error cases, and now perceive quality control as a tool for reducing errors. In Michigan, an analysis of quality control data revealed that caseworkers were misinterpreting a policy that specified when to include adults living with a parent in the same household, and changes were made to clarify the policy. Electronic database matching: All 28 states matched their food stamp rolls against other state and federal computer databases to identify ineligible participants and to verify participants’ income and asset information. For example, all states are required to match their food stamp rolls with state and local prisoner rolls. In addition, most states routinely match their food stamp participants with one or more of the following: (1) their department of revenue’s “new hires” database (a listing of recently employed individuals in the state) to verify income, (2) the food stamp rolls of neighboring states to identify possible fraud, and (3) their department of motor vehicle records to verify assets. Officials in four states said the “new hires” match reduced payment errors by allowing caseworkers to independently identify a change in employment status that a household had not reported and that would likely affect its benefits. Mississippi food stamp officials said the vehicle match helped reduce payment errors because caseworkers verified the value of applicants’ vehicles as part of determining eligibility. Computer assistance in calculating benefits: Twenty-three states had developed computer software for caseworkers to use in determining an applicant’s eligibility and/or in calculating food stamp benefit amounts. 
Twenty-two of the states have software that determines eligibility and calculates benefits based on information caseworkers enter; the remaining state’s software is limited to calculating benefits after the caseworker has determined eligibility. These programs may also cross-check information to correct data entry errors; provide automated alerts that, for example, a household member is employed; and generate reminders for households, for example, to schedule an office visit. The most advanced software programs had online interview capabilities, which simplified the application process. Some states had automated case management systems that integrated Food Stamp Program records with their Medicaid and other assistance programs, which facilitated the administration of these programs. Some states took other actions to reduce their payment errors. For example, even though FNS regulations only require that food stamp households be recertified annually, 16 states increased the frequency with which certain types of food stamp households must provide pertinent documentation for recertifying their eligibility for food stamp benefits. In particular, 12 of the 16 states now require households with earned income to be recertified quarterly because their incomes tend to fluctuate, increasing the likelihood of payment errors. More frequent certification enables caseworkers to verify the accuracy of household income and other information, allowing caseworkers to make appropriate adjustments to the household’s benefits and possibly avoid a payment error. However, more frequent certification can also inhibit program participation because it creates additional reporting burdens for food stamp recipients. In addition to more frequent certification, five states reported that they access credit reports and public records to determine eligibility and benefits. 
Seven states have formed change reporting units in food stamp offices serving certain metropolitan areas, so that participants notify these centralized units, instead of caseworkers, about starting a new job or other reportable changes. Food stamp officials in 20 of the 28 states told us that they have primarily relied on case file reviews by supervisors and others to verify payment accuracy and reduce payment errors. For example, Georgia officials noted one county’s percentage of payment errors dropped by more than half as a result of the state’s requirement that management staff in 10 urban counties re-examine files after a supervisor’s review. In each of the past 3 years, Ohio food stamp administrators have reviewed up to 100 cases per county per year and have awarded additional state funding to counties with low error rates. In fiscal year 1999, the counties used $2.5 million in state funds primarily for payment accuracy initiatives. There was less consensus about the relative usefulness of other initiatives in reducing payment errors. Specifically, food stamp officials in 13 states told us that special training for caseworkers was one of their primary initiatives; officials in 8 states cited recertifying households more frequently; officials in 6 states identified the use of computer software to determine eligibility and/or benefits; officials in 5 states identified computer database matches; and officials in 4 states cited analyses of quality control data. Food stamp officials in 22 of the states we contacted cited their states’ implementation of welfare reform as a challenge to reducing error rates in recent years. In particular, implementing welfare reform programs and policy took precedence over administering the Food Stamp Program in many states—these programs competed for management attention and resources. 
In Connecticut, for example, caseworkers were directed to help participants find employment; therefore, the accuracy of food stamp payments was deemphasized, according to state officials. Similarly, Hawaii officials said agency leadership emphasized helping recipients to find employment and instituted various programs to accomplish this, which resulted in less attention to payment accuracy. Furthermore, officials from 14 states said welfare reform led to an increase in the number of working poor. This increased the possibility of errors because the income of these households is more likely to fluctuate than income of other food stamp households. State food stamp officials cited three other impediments to their efforts to reduce payment errors, although far less frequently. First, officials in 12 states cited a lack of resources, such as a shortage of caseworkers to manage food stamp caseloads, as a challenge to reducing error rates. Georgia, Mississippi, and Texas officials said caseworker turnover was high, and New Hampshire officials said they currently have a freeze on hiring new caseworkers. Second, officials in 10 states cited problems associated with either using, or making the transition from, outdated automated systems as challenges to reducing payment errors. For example, New Hampshire officials found that their error rate increased from 10.2 percent in fiscal year 1998 to 12.9 percent in fiscal year 1999 after they began to use a new computer system. In addition, Connecticut and Maryland officials noted that incorporating rules changes into automated systems is difficult and often results in error-prone manual workarounds until the changes are incorporated. Finally, officials in nine states told us that food stamp eligibility revisions in recent years, particularly for legal noncitizens, have increased the likelihood of errors. 
To encourage the states to reduce error rates, FNS has employed financial sanctions and incentives, approved waivers of reporting requirements for certain households, and promoted initiatives to improve payment accuracy through the exchange of information among the states. However, state food stamp officials told us the single most useful change for reducing error rates would be for FNS to propose legislation to simplify requirements for determining Food Stamp Program eligibility and benefits. Simplifying food stamp rules would not necessarily alter the total amount of food stamp benefits given to participants, but it may reduce the program’s administrative costs (the states spent $4.1 billion to provide $15.8 billion in food stamp benefits in fiscal year 1999). FNS officials and others expressed concern, however, that some simplification options may reduce FNS’ ability to precisely target benefits to each individual household’s needs. The three principal methods FNS has used to reduce payment errors in the states are discussed in the following subsections. As required by law, FNS imposes financial sanctions on states whose error rates exceed the national average. These states are required to either pay the sanction or provide additional state funds—beyond their normal share of administrative costs—to be reinvested in error-reduction efforts, such as additional training in calculating benefits for certain households. FNS imposed $30.6 million in sanctions on 16 states with payment error rates above the national average in fiscal year 1999 and $78.2 million in sanctions on 22 states in fiscal year 1998—all of which were reinvested in error-reduction efforts. (See app. IV.) Food stamp officials in 22 states reported that their agencies had increased their commitment to reducing payment errors in recent years; officials in 14 states stated that financial sanctions, or the threat of sanctions, was the primary reason for their increased commitment. 
For example, when the Texas Department of Human Services requested money to cover sanctions prior to 1995, the Texas legislature required the department to report quarterly on its progress in reducing its payment error rate. Officials in Texas, which has received enhanced funding for the past 2 fiscal years, cited the department’s commitment and accountability to the Texas legislature as primary reasons for reducing the error rate over the years and for maintaining their focus on payment accuracy. FNS also rewards states primarily on the basis of their combined payment error rate being less than or equal to 5.9 percent—well below the national average. FNS awarded $39.2 million in enhanced funding to six states in fiscal year 1999 and $27.4 million to six states in fiscal year 1998. In the past 5 years, 16 states have received enhanced funding at least once. Officials in one state told us that the enhanced funding remained in the state’s general fund, while officials in four states said the enhanced funding supplemented the state’s appropriation for use by the Food Stamp Program and other assistance programs. For example, in Arkansas, the food stamp agency used its enhanced funding for training, systems development, and equipment. Arkansas officials told us that enhanced funding was a major motivator for their agency, and they have seen an increase in efforts to reduce payment errors as a direct result. In July 1999, FNS announced that it would expand the availability of waivers of certain reporting requirements placed on food stamp households. FNS was concerned that the increase in employment among food stamp households would result in larger and more frequent income fluctuations, which would increase the risk of payment errors. FNS also was concerned that the states’ reporting requirements were particularly burdensome for the working poor and may, in effect, act as an obstacle to their participation in the program. 
This is because eligible households may not view food stamp benefits as worth the time and effort it takes to obtain them. As of November 2000, FNS had granted reporting waivers to 43 states, primarily for households with earned income. (See app. V.) The three principal types of waivers are explained below: The threshold reporting waiver raises the earned income changes that households must report to more than $100 per month. (Households still must report if a member gains or loses a job.) Without this waiver, households would be required to report any wage or salary change of $25 or more per month. Ohio uses this type of waiver (with a smaller $80-per-month threshold) specifically for self-employed households. Ohio credits the use of this and other types of reporting waivers for the decrease in its error rate from 11.2 percent in 1997 to 8.4 percent in 1999. The status reporting waiver limits the changes that households must report to three key events: (1) gaining or losing a job, (2) moving from part-time to full-time employment or vice versa, and (3) experiencing a change in wage rate or salary. This waiver eliminates the need for households to report fluctuations in the number of hours worked, except if a member moves from part-time to full-time employment. Texas officials cited the implementation of the status reporting waiver in 1994 as a primary reason that their error rate dropped by nearly 3 percentage points (from over 12 percent) in 1995. Texas’ error rate reached a low of about 4.6 percent in 1999. The quarterly reporting waiver eliminates the need for households with earned income to report any changes during a 3-month period, provided the household provides required documentation at the end of the period. The waiver reduces payment errors because any changes that occurred during a quarter were not considered to be errors and households more readily understood requirements for reporting changes. 
Food stamp officials in Arkansas, which implemented a quarterly reporting waiver in 1995, believe that their quarterly reporting waiver is a primary reason for their subsequent stable error rate. FNS expects that reporting waivers will reduce the number of payment errors made because households are more likely to report changes and, with fewer reports to process, the states will be able to process changes accurately and within required time frames. However, the lower payment error rates that result from these waivers are primarily caused by a redefinition of a payment error, without reducing the Food Stamp Program’s benefit costs. For example, a pay increase of $110 per month that is not reported until the end of the 3-month period is not a payment error under Arkansas’ quarterly reporting waiver, while it would be an error if there were no waiver. As a result, the quarterly reporting waiver may reduce FNS’ estimate of overpayments and underpayments. FNS estimated, in July 1999, that the quarterly waiver would increase food stamp benefit costs by $80 million per year, assuming that 90 percent of the states applied for the waiver. Of the 10 states that do not have a reporting waiver, 7 require monthly reporting for households with earned income. The advantage of monthly reporting is that benefits are issued on the basis of what has already occurred and been documented. In addition, regular contact with food stamp households allows caseworkers to quickly detect changes in the household’s situation. However, monthly reporting is more costly to administer and potentially can exacerbate a state’s error rate, particularly if it cannot keep up with the volume of work. A Hawaii food stamp official told us that monthly reporting contributed to recent increases in Hawaii’s error rate because caseworkers have not processed earned income changes on time, while Connecticut officials said their food stamp workers were making mistakes by rushing to meet deadlines. 
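The earned-income reporting thresholds discussed above can be sketched as a simple rule. This is a rough illustration of the two reporting regimes only; gaining or losing a job is reportable under either regime and is not modeled here:

```python
# Sketch of the earned-income reporting thresholds described above.
# Baseline change reporting: report any wage or salary change of $25 or
# more per month. Threshold waiver: report only changes of more than
# $100 per month. (Job gains and losses are reportable either way and
# are not modeled.)
def must_report(change_per_month, threshold_waiver=False):
    if threshold_waiver:
        return abs(change_per_month) > 100
    return abs(change_per_month) >= 25

print(must_report(110))                        # True under either rule
print(must_report(80, threshold_waiver=True))  # False under the waiver
```

The sketch makes the report's point concrete: the same $80-per-month fluctuation that counts against a state under baseline change reporting is simply not a reportable event under the waiver, so part of the error-rate improvement reflects a changed definition rather than changed performance.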
As part of the food stamp quality control program, FNS’ seven regional offices have assembled teams of federal and state food stamp officials to identify the causes of payment errors and ways to improve payment accuracy. Each region also has held periodic conferences in which states from other regions were invited to highlight their successes and to respond to questions about implementing their initiatives. Examples of topics at recent conferences in FNS’ northeastern region included best payment accuracy practices and targeting agency-caused errors. FNS’ regional offices also have made funds available for states to send representatives to other states to learn first-hand about initiatives to reduce payment errors. Since 1996, FNS has compiled catalogs of states’ payment accuracy practices that provide information designed to help other states develop and implement similar initiatives. Food stamp officials in all 28 states we contacted called for simplifying complex Food Stamp Program rules, and most of these states would like to see FNS involved in advocating simplification. In supporting simplification, the state officials generally cited caseworkers’ difficulty in correctly applying food stamp rules to determine eligibility and calculate benefits. For example, Maryland’s online manual for determining a household’s food stamp benefits is more than 300 pages long. Specifically, the state officials cited the need to simplify requirements for (1) determining a household’s deduction for excess shelter costs and (2) calculating a household’s earned and unearned income. Food stamp officials in 20 of the 28 states we contacted said simplifying the rules for determining a household’s allowable shelter deduction would be one of the best ways to reduce payment errors. The Food Stamp Program generally provides for a shelter deduction when a household’s monthly shelter costs exceed 50 percent of income after other deductions have been allowed. 
Allowable deductions include rent or mortgage payments, property taxes, homeowner’s insurance, and utility expenses. Several state officials told us that determining a household’s shelter deduction is prone to errors because, for example, caseworkers often need to (1) determine whether to pro-rate the shelter deduction if members of a food stamp household share expenses with others, (2) determine whether to use a standard utility allowance rather than actual expenses, and (3) verify shelter expenses, even though landlords may refuse to provide required documentation. Food stamp officials in 18 states told us that simplifying the rules for earned income would be one of the best options for reducing payment errors because earned income is both the most common and the costliest source of payment errors. Generally, determining earned income is prone to errors because caseworkers must use current earnings as a predictor of future earnings and the working poor do not have consistent employment and earnings. Similarly, officials in six states told us that simplifying the rules for unearned income would help reduce payment errors. In particular, state officials cited the difficulty caseworkers have in estimating child support payments that will be received during the certification period because payments are often intermittent and unpredictable. Because households are responsible for reporting changes in unearned income of $25 or more, differences between estimated and actual child support payments often result in a payment error. FNS officials and advocates for food stamp participants, however, have expressed concern about some possible options for simplifying the rules for determining eligibility and calculating benefits. 
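The general excess shelter cost rule stated above (a deduction when monthly shelter costs exceed 50 percent of income after other deductions) can be sketched in a few lines. The real rules also cap the deduction for most households; the cap and all figures here are hypothetical simplifications:

```python
# Sketch of the general rule: the excess shelter deduction is the amount
# by which monthly shelter costs exceed 50 percent of income after other
# deductions. Real rules cap this deduction for most households; the cap
# is omitted here.
def excess_shelter_deduction(shelter_costs, income_after_other_deductions):
    threshold = 0.5 * income_after_other_deductions
    return max(shelter_costs - threshold, 0)

print(excess_shelter_deduction(600, 800))  # 200.0: $600 exceeds half of $800 by $200
print(excess_shelter_deduction(300, 800))  # 0: no deduction applies
```

The difficulty the states describe lies not in this arithmetic but in establishing the inputs: whether to pro-rate shared expenses, whether to use a standard utility allowance, and how to verify costs when documentation is unavailable.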
For example, in determining a household’s allowable shelter deduction, if a single standard deduction were used for an entire state, households in rural areas would likely receive greater benefits than they would have using actual expenses, while households in urban areas would likely receive smaller benefits. In this case, simplification may reduce FNS’ ability to precisely target benefits to each individual household’s needs. FNS officials also pointed out that likely reductions in states’ payment error rates would reflect changes to the rules for calculating food stamp benefits rather than improved performance by the states. FNS has begun to examine alternatives for improving the Food Stamp Program, including options for simplifying requirements for determining benefits, as part of its preparations for the program’s upcoming reauthorization. More specifically, FNS hosted a series of public forums, known as the National Food Stamp Conversation 2000, in seven cities attended by program participants, caseworkers, elected officials, antihunger advocates, emergency food providers, health and nutrition specialists, food retailers, law enforcement officials, and researchers. Simplification of the Food Stamp Program was one of the issues discussed at these sessions as part of a broad-based dialogue among stakeholders about aspects of the program that have contributed to its success and features that should be strengthened to better achieve program goals. FNS is currently developing a variety of background materials that will integrate the issues and options raised in these forums. FNS has not yet begun to develop proposed legislation for congressional consideration in reauthorizing the Food Stamp Program. FNS and the states have taken actions aimed at reducing food stamp payment errors, which currently stand at about 10 percent of the program’s total benefits. 
Financial sanctions and enhanced funding have been at least partially successful in focusing states’ attention on minimizing errors. However, this “carrot and stick” approach can only accomplish so much, because food stamp regulations for determining eligibility and benefits are extremely complex and their application inherently error-prone and costly to administer. Furthermore, this approach, carried to extremes, can create incentives for states to take actions that may inhibit achievement of one of the agency’s basic missions—providing food assistance to those who are in need. For example, increasing the frequency that recipients must report income changes could decrease errors, but it could also have the unintended effect of discouraging participation by the eligible working poor. This would run counter not only to FNS’ basic mission but also to an overall objective of welfare reform—helping people move successfully from public assistance into the workforce. Simplifying the Food Stamp Program’s rules and regulations offers an opportunity to, among other things, reduce payment error rates and promote program participation by eligible recipients. FNS has taken initial steps in examining options for simplification through its forums with stakeholders. However, it is unclear to what extent FNS will build on these ideas to (1) systematically develop and analyze the advantages and disadvantages of various simplification options, and (2) if warranted, submit the legislative changes needed to implement simplification proposals. 
To help ease program administration and potentially reduce payment errors, we recommend that the Secretary of Agriculture direct the Administrator of the Food and Nutrition Service to (1) develop and analyze options for simplifying requirements for determining program eligibility and benefits; (2) discuss the strengths and weaknesses of these options with representatives of the congressional authorizing committees; and (3) if warranted, submit legislative proposals to simplify the program. The analysis of these options should include, among other things, estimating expected program costs, effects on program participation, and the extent to which the distribution of benefits among recipients could change. We provided the U.S. Department of Agriculture with a draft of this report for review and comment. We met with Agriculture officials, including the Director of the Program Development Division within the Food and Nutrition Service’s Food Stamp Program. Department officials generally agreed with the information presented in the report and provided technical clarifications, which we incorporated as appropriate. Department officials also agreed with the thrust of our recommendations. However, they expressed reservations about the mechanics of implementing our recommendation that they discuss simplification options with representatives of the congressional authorizing committees. In particular, they noted the importance of integrating consultation on policy options with the process for developing the President’s annual budget request. In addition, they urged a broader emphasis on consideration of policy options that meet the full range of program objectives, including, for example, ending hunger, improving nutrition, and supporting work. We agree that simplification options should be discussed in the larger context of achieving program objectives. 
However, we believe that an early dialogue about the advantages and disadvantages of simplification options will facilitate the congressional debate on one of the most important and controversial issues for reauthorizing the Food Stamp Program. Copies of this report will be sent to the congressional committees and subcommittees responsible for the Food Stamp Program; the Honorable Jacob Lew, Director, Office of Management and Budget; and other interested parties. We will also make copies available upon request. Please contact me at (202) 512-5138 if you or your staff have any questions about this report. Key contributors to this report are listed in appendix VI. To examine states’ efforts to minimize food stamp payment errors, we analyzed information obtained through structured telephone interviews with state food stamp officials in 28 states. We selected the 28 states to include states with the lowest payment error rates, states with the highest error rates, and the 10 states with the most food stamp participants in fiscal year 1999. Overall, the states we interviewed included 14 states with payment error rates below the national average and 14 states with error rates above the national average. They delivered about 74 percent of all food stamp benefits in fiscal year 1999. We supplemented the structured interviews with information obtained from visits to Maryland, Massachusetts, Michigan, and Texas. To examine what the Department of Agriculture’s Food and Nutrition Service (FNS) has done and could do to help states reduce food stamp payment errors, we relied in part on information obtained from our telephone interviews, as well as on information obtained from discussions with officials at FNS’ headquarters and each of its seven regional offices. We also analyzed FNS documents and data from its quality control system. exceeding 130 percent of the monthly poverty income guideline for its household size. 
To qualify for this option, a state must have a certification period of 6 months or more. The threshold reporting waiver raises the earned income changes that households must report to more than $100 per month. (Households still must report if a member gains or loses a job.) Without this waiver, households would be required to report any wage or salary change of $25 or more per month. The status reporting waiver limits the income changes that households must report to three key events: (1) gaining or losing a job, (2) moving from part-time to full-time employment or vice versa, and (3) a change in the wage rate or salary. The quarterly reporting waiver eliminates the need for households with earned income to report any changes during a 3-month period, provided the household provides required documentation at the end of the period. The 5-hour reporting waiver limits changes that households must report to three key events: (1) gaining or losing a job; (2) a change in wage rate or salary; and (3) a change in hours worked of more than 5 hours per week, if this change is expected to continue for more than a month. In addition to those named above, Christine Frye, Debra Prescott, and Michelle Zapata made key contributions to this report. 
In fiscal year 2000, the Department of Agriculture's Food Stamp Program, administered jointly by the Food and Nutrition Service (FNS) and the states, provided $15 billion in benefits to an average of 17.2 million low-income persons each month. FNS, which pays the full cost of food stamp benefits and half of the states' administrative costs, promulgates program regulations and oversees program implementation. The states run the program, determining whether households meet eligibility requirements, calculating monthly benefits the households should receive, and issuing benefits to participants. FNS assesses the accuracy of states' efforts to determine eligibility and benefits levels. Because of concerns about the integrity of Food Stamp Program payments, GAO examined the states' efforts to minimize food stamp payment errors and what FNS has done and could do to encourage and assist the states in reducing such errors. GAO found that all 28 states it examined had taken steps to reduce payment errors. These steps included verifying the accuracy of benefit payments calculated through supervisory and other types of casefile reviews, providing specialized training for food stamp workers, analyzing quality control data to determine causes of errors and developing corrective actions, matching food stamp rolls with other federal and state computer databases to identify ineligible participants, and using computer software to assist caseworkers in determining benefits. To reduce payment errors, FNS has imposed financial sanctions on states with high error rates and has waived some reporting requirements.
Although IRS has no single, authoritative definition of abusive shelters, IRS generally characterizes abusive shelters as very complicated transactions that sophisticated tax professionals promote to corporations and wealthy individuals, exploiting tax loopholes and reaping large and unintended tax benefits. As the Joint Committee on Taxation has said, “taxpayers and tax administrators have struggled in determining the line between legitimate ‘tax planning’ and unacceptable ‘tax shelters.’” Even though, it continued, “there is no uniform standard as to what constitutes a tax shelter … there are statutory provisions, judicial doctrines, and administrative guidance that attempt to limit or identify transactions in which a significant purpose is the avoidance or evasion of income tax.” Abusive shelters have been promoted by some accounting firms, law firms, and investment banks. Investors in these abusive shelters range from large and small corporations to wealthy individuals. IRS approaches the tax shelter enforcement problem from both the promoter and investor perspectives. IRS promoter investigations are designed to learn (1) what abusive shelters have been promoted, if the shelters are registered, and possibly how much they cost investors, (2) who purchased the shelters and what tax savings the investors expect, and (3) whether promoters should pay penalties for their activities. IRS examines investor and other tax returns to see if income, expenses, taxes, and credits are accurately reported. In a June 2002 letter, Treasury responded to congressional questions about whether Treasury had a comprehensive strategy for combating tax avoidance. In his letter to the then Ranking Member of the Committee on Finance, then Secretary of the Treasury O’Neill addressed the actions being taken to combat abusive shelters, referring to Treasury’s March 20, 2002, enforcement proposals on the topic. 
The proposals said that IRS had made significant organizational improvements to coordinate its response to ongoing abusive tax shelters. Treasury, all of IRS’s operating divisions, and IRS’s Office of Chief Counsel are involved in combating abusive shelter activity. Within IRS, LMSB has primary responsibility for combating abusive tax shelter activity. LMSB’s OTSA was created in February 2000 to centralize and coordinate the IRS response nationwide. As shown in figure 1, OTSA is the focal point for IRS shelter activities, overseeing promoter tax shelter registrations; taxpayer disclosures of tax shelters; hotline tip analysis and referral; and issue coordination and interface between the Office of Chief Counsel, Treasury, the Tax Shelter Committee, the 6700 Committee (referring to section 6700 of the Internal Revenue Code), and external stakeholders. The Tax Shelter Committee oversees LMSB’s tax shelter program. The committee is composed of the Commissioner and Deputy Commissioner of LMSB, the Director of Pre-Filing and Technical Guidance, LMSB Division Counsel, five Industry Directors, the Director of International, and the Directors of Field Specialists and Research and Program Planning. The 6700 Committee serves under the Tax Shelter Committee and approves all LMSB tax shelter promoter activities. The financial services industry director chairs this committee. IRS’s appeals function receives and evaluates taxpayer objections to IRS examination determinations and may agree with those determinations or reduce or eliminate changes to tax returns resulting from them. The Office of Chief Counsel plays an integral role in combating shelters through summons enforcement and targeted litigation. By litigating, IRS establishes case law supporting IRS enforcement programs and aims to diminish the incentives taxpayers find for investing in tax avoidance transactions by increasing the risks and costs of IRS discovery. 
Abusive shelters are complex transactions that manipulate many parts of the tax code or regulations and are typically buried among “legitimate” transactions reported on tax returns. Because these transactions are often composed of many pieces located in several parts of a complex tax return, they are essentially hidden from plain sight, which contributes to the difficulty of determining the scope of the abusive shelter problem. Often lacking economic substance or a business purpose other than generating tax benefits, abusive shelters are promoted by some tax professionals, often in confidence, for significant fees, sometimes with the participation of tax-indifferent parties, such as foreign or tax-exempt entities. They may involve unnecessary steps and flow-through entities, such as partnerships, which make detection of these transactions more difficult. When a transaction has certain abusive characteristics defined by section 6111 of the Internal Revenue Code, the promoter or other tax shelter organizer is required to register it, describing the transaction and its tax benefits to the Secretary of the Treasury. This registration requirement enables Treasury and IRS to identify and evaluate questionable transactions. Under recently issued Treasury regulations, effective February 28, 2003, there are six categories of transactions for which promoters must maintain lists of investors who have entered into the transactions, and investors must disclose the transactions into which they have entered. The rules are designed to allow IRS to use information from investors to identify promoters who do not register transactions and to use promoter registrations and investor lists to identify investors who fail to disclose transactions. 
The six categories are (1) transactions offered under conditions of confidentiality, (2) transactions including contractual protections to the investor, (3) transactions resulting in specific amounts of tax losses, (4) transactions generating a tax benefit when the underlying asset is held for a brief period, (5) transactions generating differences between financial accounts and tax accounts greater than $10 million, and (6) “listed transactions.” A “listed transaction” is a transaction that is the same as or similar to one of the types of transactions IRS has determined to be a tax avoidance transaction. For a transaction to be a listed transaction, IRS must issue a notice, regulation, or other form of published guidance informing taxpayers of the details of the transaction. As of mid-August 2003, IRS had listed 27 kinds of abusive tax shelter transactions, a number that, as figure 2 shows, has grown more quickly in recent years than it had grown earlier. Disputes between IRS and taxpayers about the abusive nature of a transaction may be litigated. In some, but not all, cases, the courts have upheld the government position. The following cases illustrate features of abusive shelters: In 1993, a corporation began a company-owned life insurance (COLI) program in which the company purchased whole-life insurance on 36,000 employees for which the company was the sole beneficiary. The company then borrowed money against the policies at interest rates that averaged 11 percent and deducted the interest expense and administrative fees from income on its tax returns. Over 60 years, the interest costs and administrative fees would have exceeded the cash surrender value of the policies and benefits paid by several billion dollars. IRS disallowed the deductions and the case was litigated. 
Despite the fact that the money the company made on this arrangement may have been used to fund the company’s benefits program, or for other business purposes, the court found that the function of the program itself was only to generate tax deductions. As a result, the Tax Court sustained the IRS disallowance of deductions and concluded that the COLI program was a sham. The Eleventh Circuit Court of Appeals affirmed the Tax Court’s decision. A company had a sizable gain from the sale of a subsidiary and wanted to avoid or minimize paying tax on the gain. An investment bank proposed forming an offshore partnership with a foreign corporation (a tax-indifferent party) for the express purpose of sheltering the capital gains of its corporate client. The partnership purchased and quickly resold notes in a contingent installment sale transaction. The partnership earned a large capital gain, most of which it allocated to the foreign corporate partner. Later, related losses were allocated to the U.S. corporation, generating an approximate $100 million capital loss for the investment bank’s client. The corporation used this capital loss to shelter its U.S.-based capital gains. Both the Tax Court and the Third Circuit Court of Appeals ruled that the transaction lacked economic substance. The Third Circuit, in addition to requiring economic substance, held that a transaction must have a subjective nontax business motive to be respected for tax purposes. For this transaction, the investment bank was to earn a fee of $2 million. This was one of 11 such partnerships formed over a 1-year period from 1989 to 1990 by the investment bank. IRS has information that suggests the scope of abusive shelters totaled tens of billions of dollars over about a decade, but those estimates are based on limited data. This information comes from an OTSA database, examinations of large corporations, and a contractor study. 
Information contained in the OTSA database includes transactions disclosed to or discovered by IRS and estimates of potential tax losses. The tax loss estimates range from taxes recommended by IRS officials after examining some transactions to taxpayers’ own judgments about potential losses in cases where examinations have not been done. In addition to being based on judgments, the database does not include any reductions resulting from examination, appeal, litigation, or other sources. Information from examinations of the largest corporations, which may overlap information in the OTSA database, shows proposed income adjustments in the tens of billions of dollars before reductions, but data were not available from IRS on the results of examinations of smaller corporations, partnerships, trusts, S corporations, or individuals. Information from IRS’s contractor study estimates an annual tax gap due to abusive shelters but has data and methodological limitations. As shown in table 1, as of September 30, 2003, an OTSA database included estimated potential tax losses of about $33 billion from investments in listed transactions, before considering any reductions resulting from examination, appeal, litigation, or other sources, and another $52 billion in potential tax losses from nonlisted transactions with some characteristics of abusive shelters. This database contains information on promoters and investors and the amount of potential tax savings resulting from listed and nonlisted transactions. Nonlisted transactions are transactions that needed to be registered because they have some characteristics of abusive shelters but were not, at least yet, determined to be abusive. According to an IRS official, IRS was studying nonlisted transactions with about $12 billion in potential tax losses for possible listing. The database only includes information on abusive or possibly abusive transactions that had been disclosed to or discovered by IRS. 
The estimated tax losses contained in the OTSA database cover a wide range of years from at least as far back as tax year 1989 and extending even to future tax years since, for instance, improperly claimed deductions may be used in some cases to reduce future taxes. For the $29 billion in estimated tax losses associated with listed transactions contained in the January 14, 2003, database, about 82 percent of the potential tax losses were concentrated in the period from 1993 through 2002. According to data IRS provided in mid-October 2003, OTSA had information on almost 300 firms that had possibly promoted abusive shelters as well as other tax planning products that contain at least some features of abusive transactions. It was also aware of about 6,400 investors, including individuals and corporations that bought abusive shelters and other aggressive tax planning products. IRS has proposed shelter-related adjustments to large corporations’ income in examinations it has closed and in examinations still open as of early May 2003. In cases closed between October 1, 2001, and May 6, 2003, IRS proposed about $10.6 billion in abusive shelter-related adjustments to the income of 42 large corporations for tax years 1992-2000. These proposed adjustments would result in about $3.5 billion in tax revenue if the adjustments were not reduced. The corporations were in what is known as the Coordinated Industry Case (CIC) program, which includes the nation’s largest corporations. They agreed with about $1.2 billion of the $10.6 billion in proposed adjustments to income. As of early August 2003, Appeals research showed that few of the issues comprising the $9.4 billion unagreed amount had been resolved yet by Appeals or through a settlement initiative, although the database did not track all of them. 
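The report’s translation of proposed income adjustments into potential tax revenue is roughly consistent with applying a corporate tax rate in the mid-30-percent range. The following is a hedged sketch only: the 35 percent top corporate statutory rate of the period is our assumption, and the report does not state the rate behind its conversions.

```python
# Illustrative sketch: converting proposed income adjustments into
# potential tax revenue. The 35 percent figure is the top corporate
# statutory rate of the period, assumed here for illustration; the
# report does not state the rate it used.
CORPORATE_RATE = 0.35

def potential_tax(adjustment_billions, rate=CORPORATE_RATE):
    """Potential tax revenue, in billions, implied by a proposed income adjustment."""
    return round(adjustment_billions * rate, 1)

# $10.6 billion in proposed adjustments implies roughly $3.7 billion in
# tax at this assumed rate, close to the report's figure of about $3.5 billion.
print(potential_tax(10.6))
```

At the same assumed rate, $47.6 billion in proposed adjustments would imply roughly $16.7 billion in tax, in line with the report’s estimate of about $16 billion for open cases.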
For the 141 large corporations with cases still open in early May 2003, the amount of proposed shelter-related income adjustments was $47.6 billion, translating to about $16 billion in tax if not reduced. IRS did not have similar information for smaller corporations. Also, since one of the sources of information in the OTSA database is shelter-related adjustments proposed in examinations, the proposed adjustments in the CIC program may overlap the information in the OTSA database. In July 2003, an IRS contractor estimated the tax gap resulting from abusive shelters for different years. For 1993 through 1999, based on the contractor’s estimates, the average annual tax gap could have been as small as about $11.6 billion or as large as about $15.1 billion of forgone tax. However, the reliability of the contractor’s estimates is questionable because of methodological and data constraints the contractor faced when developing them. The estimates followed a September 2001 recommendation by the Treasury Inspector General for Tax Administration (TIGTA) that LMSB obtain a more precise estimate of the shelter problem to lay a better foundation for its strategy for addressing abusive shelters. In response, IRS contracted for models to predict the likelihood of finding abusive shelters within certain tax returns and to estimate the annual “tax gap” due to abusive shelters. Both IRS and contractor officials believe the contract results are more useful to predict returns with abusive shelters than they are to value the size of the abusive shelter problem. Nevertheless, as table 2 shows, the contractor produced estimates of the size of the problem for each year from 1993 through 1999. Yearly low-end estimates ranged from $9.0 billion of forgone tax in 1993 to $14.5 billion in 1999. On the other hand, the high-end estimates ranged from $12.1 billion in 1993 to $18.4 billion in 1999. Averaging the estimates over time results in the $11.6 billion to $15.1 billion range cited earlier. 
The tax gap model used three different kinds of data: (1) IRS’s Statistics of Income data for the largest U.S. companies, those with assets over $250 million falling within the CIC program, (2) Standard and Poor’s Compustat financial data, and (3) surveys of IRS field offices. IRS conducted surveys from 1999 through 2001 that asked field managers to identify abusive tax shelters in their open inventory of examinations, relying on each manager’s understanding of what an abusive tax shelter is. Since survey data are included in the OTSA database, some of the same information used by the contractor appears in the OTSA information cited earlier. Treasury, IRS, the contractor, and we have concerns about the contractor estimates. First, it is difficult to determine whether these estimates might be overstating or understating the true extent of the tax gap because of the uncertainties in the underlying data and the elusive nature of the problem. In identifying abusive shelters in the IRS surveys, field managers might have anticipated that some abusive shelters existed where there were none or where the assertion of abuse might not be sustained. On the other hand, they might not have identified all the abusive shelters in their open inventory of examinations because their definitions of abusive shelters might have differed from each other. Finally, the data might not be representative of all transactions, especially those that closed, because survey responses were only to include open cases. Second, the Statistics of Income data only included U.S. corporations with assets of over $250 million falling within the CIC program. Many shelters may be reflected in tax returns of smaller corporations, partnerships, Subchapter S corporations, and wealthy individuals, which were not included in this study. Since these transactions were not included in the contractor’s estimate, the resulting tax gap estimate is incomplete. Third, the estimates are based on known shelters. 
They were developed using 1990s’ ideas of what constituted abusive shelters. Since then, more shelters have been disclosed or identified by IRS and still others are under consideration for listing. Since the definition of an abusive shelter can change over time, and the data cannot reflect unknown or unidentified shelters, the operational definition of abusive shelters was a conservative one. While the last two concerns argue that the contractor’s estimates understate the true level of abusive shelters for recent years, the contractor’s estimates and other indicators of the problem’s size based on past data may also be of limited use as guides to current and future activity for other reasons. According to Treasury and IRS officials, the legal and economic environment has changed since the data for this study were developed. First, they said, IRS has taken many administrative actions to address abusive shelters. For instance, it is their belief that nothing puts more of a damper on taxpayer participation in a particular type of transaction than IRS listing it. Similarly, although corporate-owned life insurance transactions may heavily influence the contractor’s estimates, legislation addressed the problem in 1996 and 1997, and therefore current and future estimates would not reflect that problem—although they could reflect problems not identified in the period covered by the contractor’s study. Second, court cases have largely supported IRS’s assertions about the need for business purpose requirements and about requirements for economic substance in transactions. Third, today’s economy is not as robust as the economy in the late 1990s, generating less profit to protect. Finally, the publicity surrounding numerous corporate scandals may create a chilling effect in the market for aggressive transactions. 
Countering these points, however, are other opinions appearing in the press that (1) the courts could uphold some tax shelters and (2) IRS’s capacity to stem abusive shelters is limited. IRS developed a broad-based strategy for combating abusive shelters that included various features as well as elements of strategic planning. Deeming it a strategic initiative, IRS is executing a strategy incorporating four principal elements: (1) an emphasis on promoters, (2) efforts to deter, detect, and resolve abuse, (3) coordination of efforts throughout IRS, and (4) inducements provided for taxpayers to come forward and expedite case resolution. IRS is implementing a variety of initiatives designed to reduce taxpayer incentives to participate in abusive transactions and discourage promoters from marketing these transactions. Although IRS documents outline an overall strategy for combating abusive shelters, IRS has generally not yet defined long-term performance goals for the effort and the measures it would use to track progress in achieving those goals. However, IRS is planning to establish such goals and measures when it has more information on the abusive shelter activities it is currently tracking. IRS is actively pursuing abusive promoters to ensure (1) that tax strategies containing characteristics of potentially abusive shelters are registered, (2) that information about transactions is disclosed to IRS as required by sections 6111 and 6112 of the Internal Revenue Code, and (3) that, according to IRS’s OTSA manager, those who generate noncompliance change their behavior or go out of business. With 98 abusive shelter promoters approved for investigation as of June 30, 2003, IRS uses investigations to gain access to lists of the clients who buy promoters’ products and to devise a roadmap for auditing shelters included in investors’ tax returns. 
IRS is also using promoter investigations to enforce the transaction registration requirements, which, in turn, assist in its efforts to understand, track, and close abusive shelters. IRS announced the completion of three large promoter investigations from 2001 through July 2003. They resulted in, among other things, three substantial payments and promoter promises to work with IRS to ensure ongoing compliance with shelter registration and list maintenance requirements. IRS focuses its efforts on deterring future marketing and sales of abusive tax shelters and on detecting and resolving existing shelters. TIGTA described IRS’s abusive shelter approach along the lines of deter, detect, and resolve in September 2001. IRS considers its efforts to provide guidance as early as possible to taxpayers and promoters in the form of recently proliferating IRS and Treasury determinations, notices, and rulings on abusive transactions and of registration, list maintenance, disclosure, and other requirements to be a key deterrent. (See fig. 2.) Also designed to deter abusive tax shelters, accuracy-related penalties aim at investors who use abusive shelters to substantially understate their true tax liability. Other penalties are for promoters who market shelters that aid and abet the understatement of tax liability or who fail to register shelters. IRS’s Examination Returns Control System showed IRS assessing 21 investor penalties totaling about $73 million between July 1, 2002, and May 1, 2003, which taxpayers had not necessarily agreed to pay. During our review, Treasury included proposed legislation in the Administration’s revenue proposals to strengthen the penalties that could be used in abusive shelter situations. 
IRS’s ability to detect abusive shelters increased in the last 3 years due to OTSA’s hotline, through which callers provide tips about transactions or investors; disclosure, registration, and list maintenance requirements; increased attention by IRS management; and increased use of IRS examination resources to look for shelter irregularities. For instance, between May 31, 2000, and July 30, 2003, the hotline received 729 shelter-related telephone calls and e-mails, some of them leading IRS to new listed transactions, promoters, and investors. As another example, IRS expanded its disclosure requirements in June 2002 to include noncorporate taxpayers. Finally, as evidence of increased management attention, IRS established a new senior position reporting to the IRS Chief Counsel to supervise staff and lead task force initiatives to more quickly identify and deal with abusive shelters. Cases may be resolved at the examination level if taxpayers agree with IRS findings. If taxpayers do not agree, cases are resolved at the appeals level, through litigation, or by alternative dispute resolution. In addition to these detection and case resolution efforts, IRS is using Schedule K-1 data to research better methods of detecting abusive shelters that involve multiple levels of flow-through entities. These complex structures of related entities pose challenges in analyzing tax compliance by creating opportunities for taxpayers to disguise noncompliance. In the future, IRS hopes to use advanced data analysis tools such as link analysis and graph-based data mining to identify potential abusive shelters. Link analysis is the process of building networks of related entities, such as flow-through entities and Schedule K-1 recipients, in order to expose patterns and trends. 
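The kind of link analysis described above can be sketched with a small example. This is a minimal illustration only: the entity names and Schedule K-1 relationships below are hypothetical, and IRS’s actual tools and data are not public.

```python
from collections import defaultdict, deque

# Hypothetical Schedule K-1 relationships: each flow-through entity
# lists the recipients (partners, shareholders) it issues K-1s to.
# All names and links are illustrative, not actual IRS data.
k1_links = {
    "Partnership A": ["Partnership B", "Taxpayer 1"],
    "Partnership B": ["Offshore Corp", "Taxpayer 1"],
    "S Corp C": ["Taxpayer 2"],
}

def build_network(links):
    """Build an undirected graph linking entities to their K-1 recipients."""
    graph = defaultdict(set)
    for entity, recipients in links.items():
        for recipient in recipients:
            graph[entity].add(recipient)
            graph[recipient].add(entity)
    return graph

def related_entities(graph, start):
    """Breadth-first search: every entity reachable from `start`."""
    seen = {start}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for neighbor in graph[node]:
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return seen - {start}

graph = build_network(k1_links)
print(sorted(related_entities(graph, "Taxpayer 1")))
# -> ['Offshore Corp', 'Partnership A', 'Partnership B']
```

Traversing the network exposes indirect relationships, such as the link from Taxpayer 1 through the tiered partnerships to the offshore entity, which is the pattern-finding idea behind the graph-based approaches the report describes.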
Graph-based data mining, a form of link analysis, is intended to enable IRS to identify structures of known abusive shelters and find similar patterns in the population of flow-through networks to discover previously undisclosed potential abusive shelter transactions. IRS has paid a contractor $200,000 so far to assess the feasibility of these technologies and plans to spend $575,000 over the next 1.5 to 2 years to develop these concepts into models. Coordination within IRS and interface with Treasury on abusive shelters are core objectives in IRS’s plans for addressing those shelters. OTSA is the focal point for all shelter-related activity performed in the Tax Shelter Committee, the 6700 Committee, Counsel, Appeals, and LMSB. For example, if a taxpayer discloses an investment in a tax shelter to IRS, OTSA is to enter the transaction into its database, and OTSA reviews the transaction in collaboration with IRS technical advisors and counsel. OTSA may also forward it to LMSB examiners for compliance action. At the IRS-wide level, an executive steering committee provides a forum for coordinating work on both abusive shelters and abusive schemes. It meets monthly and includes participants from LMSB, the Small Business/Self Employed Division, Appeals, Counsel, and other organizations. It operates under the auspices of IRS’s Enforcement Committee, which was chartered in July 2003. Chaired by the Deputy Commissioner for Services and Enforcement, a new position created in May 2003, the Enforcement Committee is to guide IRS-wide enforcement strategies, focusing on high-visibility issues involving many divisions or potentially having significant compliance impact. Although we did not systematically measure whether coordination is facilitated by these mechanisms, we did review minutes of selected executive steering committee meetings. 
In doing so, we saw such evidence of coordination as the discussion of an LMSB and SB/SE working group on which division would work a corporate officer case when LMSB works on the related corporation. LMSB attempts to leverage its limited resources by using inducements to achieve compliance. These tools include penalty relief, “fast track” issue resolution, and various structured settlement programs that allow participating taxpayers to keep a percentage of a shelter’s benefits in exchange for conceding most benefits and expediting case resolution. For example, under a disclosure initiative that expired on April 23, 2002, taxpayers who revealed shelters and their respective promoters avoided accuracy-related penalties. IRS’s aim was to more readily identify promoters who had not registered shelters and, through the promoters, find taxpayers who had not disclosed their shelter participation. As a result of this initiative, IRS received 1,664 disclosures from 1,206 taxpayers, disclosing tens of billions of dollars of losses and deductions. IRS offered taxpayers various alternative dispute resolution mechanisms as inducements to settle abusive shelter issues with IRS, mitigating the hazards of litigation for both sides and moving more cases through the administrative system quickly. For example, from October 2001 through April 7, 2003, 17 taxpayers agreed with IRS on their respective shelter issues in the Fast Track Issue Resolution program, resolving about $1.6 billion in proposed adjustments to income (potentially about $540 million in tax). In another example, IRS announced initiatives in October 2002 to resolve disputes related to three shelters: COLI, basis-shifting shelters, and contingent liability shelters. In these initiatives, if taxpayers agreed to settle their cases with IRS by a certain date, with the last initiative closing March 5, 2003, they would pay a large percentage of the full amount IRS disallowed. 
A summary as of early May 2003 of the number of investors involved in the three settlement initiatives and the potential tax dollars conceded or to be conceded appears in table 3. Although IRS has outlined and begun to implement a multipart strategy for combating tax shelters, it generally has not yet defined performance goals for the effort or established the measures it would use to track progress in achieving those goals. Performance goals define what an organization is trying to achieve over time, preferably focusing on the outcome desired rather than activities or outputs. To date, according to IRS officials, their shelter-related goals cover the number of staff years to be devoted to shelter examinations and the number of shelter examinations to be closed. Also, LMSB planning documents have a few short-term goals. For example, LMSB had a short-term goal to begin compliance actions on all voluntary shelter disclosures by June 30, 2003, a goal IRS officials told us was met. IRS management officials recognize that developing other performance goals and associated measures to track progress is desirable but point to challenges they face in assessing the scope of the abusive shelter problem. Nonetheless, IRS intends to establish such goals in the future, when it has more information on activities it is currently tracking. IRS has already started down this road by developing several measures that, while not tied to longer-term performance goals, are to be used in tracking its progress in combating abusive tax shelters. It devised these measures for fiscal year 2003 in response to a September 2001 TIGTA recommendation to develop performance measures so managers could better target problem areas, highlight successes, evaluate alternatives, and track whether OTSA is achieving desired outcomes. 
IRS is mostly tracking outputs related to case management, such as the number of tax shelter examinations closed and tax shelter return cycle time, and is using output measures of IRS program activities, such as published guidance issued and hotline contacts. IRS is also using some measures that track tax enforcement outcomes, namely adjustments proposed to tax returns from disallowing abusive shelters and tax shelter penalties proposed. Because fiscal year 2003 was the first year IRS used these measures, it had no baseline data with which to evaluate its performance. However, LMSB plans to evaluate its measures over time to assess their usefulness. Relying on admittedly limited information, IRS used a systematic decision-making process in deciding to shift a large portion of LMSB examination staff resources toward addressing abusive shelters. From fiscal year 2002 through fiscal year 2004, LMSB expected to increase the portion of its examination resources devoted to combating abusive shelters from 3 percent in 2002 to 20 percent in 2004. In doing so, it will have shifted resources out of examining the category of cases that includes such areas as net operating losses and claims for refunds. Even so, IRS faces challenges, especially in the near term, in addressing expected increases in its shelter workload because of the growing number of shelter cases and the limited information it has on how long it takes to conduct shelter examinations. As will be described, GAO has previously raised questions about IRS’s ability to shift compliance resources as planned. At an agencywide level, IRS set the staffing levels to be devoted to addressing abusive shelters through a systematic planning and budgeting process based on experience and professional judgment, because IRS did not and does not have a reliable measure of the abusive shelter problem. 
Early in calendar year 2002, IRS’s divisions completed strategic assessments in which they studied trends, issues, and priorities affecting their operations. In April 2002, IRS’s senior management team, including the Commissioner, Deputy Commissioner, division heads, and others, held two rounds of deliberations on IRS’s programs to rank the needs for new or redirected funding for fiscal year 2004. Of 33 programs considered, the program including tax shelters received the third most votes. According to an IRS official, this process also informed how funds already requested for fiscal year 2003 would actually be spent. After the senior management team reached consensus, the Commissioner issued overall planning guidance for fiscal years 2003 and 2004 to reflect the jointly set strategic direction, and the divisions wrote fiscal year 2003 and 2004 “strategy and program plans” outlining the staffing resources needed. In 2002, LMSB put forward plans to increase its work on abusive shelters from 3 percent of its examination resources to 20 percent between fiscal years 2002 and 2004, assuming congressional funding. To support this shift in examination resources, LMSB needed to allocate examination resources away from other areas. One area to receive less audit coverage was industry audits. As shown in table 4, from fiscal year 2003 to fiscal year 2004, IRS planned to move resources away from specific types of mandatory examinations and from some high-risk nonmandatory returns. IRS’s strategy is to mitigate the impact of resource reallocations away from nonshelter areas by using such issue management strategies as fast-track resolution and prefiling agreements, thereby requiring less staff time to close cases and freeing staff to be used in other areas. In addition to LMSB examination staff, IRS has managers, attorneys, and others who work on abusive shelters. 
For instance, in February 2003, OTSA and its parent body, the Office of Pre-Filing and Technical Guidance, had 39 full-time and 34 part-time technical experts, program analysts, and managers. Also at that time, a contact list for listed transactions included 17 attorneys. These numbers did not include many of the IRS legal resources involved with abusive shelters. In addition, as of September 30, 2003, LMSB had assigned about 1,900 abusive and potentially abusive shelter transactions involving non-LMSB taxpayers to IRS’s Small Business/Self-Employed Division, which supplies examination staff resources of its own. Although IRS appeared to be on track to shift planned resources to shelter work in fiscal year 2003, it faces challenges in addressing the abusive shelter workload, especially in the near term. This is because of (1) the growing numbers of transactions and promoters to be examined and (2) limited information on how long it takes to conduct shelter examinations. From fiscal year 2002 through fiscal year 2004, LMSB planned to use 1,879 full-time equivalents (FTE) to address abusive shelters. During fiscal year 2002, LMSB used 239 FTEs to address tax returns that included abusive shelters. According to IRS’s fiscal year 2004 congressional budget justification, LMSB planned to allocate 691 and 949 FTEs in fiscal years 2003 and 2004, respectively. In a draft strategy and program plan dated September 2003, LMSB projected it would actually use 615 FTEs for shelter work in fiscal year 2003, or 88 percent of the planned amount and an increase of 157 percent over the fiscal year 2002 FTE level including this work. 
Because (1) the known abusive shelter workload has increased, (2) IRS has limited experience to judge how many resources will be needed to work the cases and for how long, and (3) the workload may continue to increase, it remains uncertain whether the substantial shift of resources to shelter work will enable IRS to examine the growing shelter workload in a timely manner. For instance, the number of potential examinations of listed transactions disclosed has grown since the inception of OTSA, adding significantly to the IRS resources required to address the problem. Table 5 shows the number of listed transactions disclosed by taxpayers grew from 51 to 2,182 between December 31, 2000, and September 30, 2003, and other transactions disclosed to IRS grew from none to 663. The total of all listed and nonlisted LMSB-related transactions in the OTSA database, not only those disclosed by taxpayers, was 4,897 as of September 30, 2003. IRS workload from promoter investigations has also grown since May 2002. At that time, IRS planned that 7 promoter investigations would be ongoing in fiscal year 2003. As of June 30, 2003, IRS had 98 promoter investigations approved. Based on early promoter investigations, an IRS official stated that promoter investigations can take thousands of hours to develop, and several have been litigated, each requiring a large expenditure of resources. LMSB has limited information on the amount of time required to examine abusive shelter cases. LMSB developed estimates of the amount of examination time required for such cases based on its experience examining various types of shelters but acknowledged that examiners can spend hundreds or thousands of hours depending on the type of shelter examined and the facts and circumstances of the case. 
For example, according to an LMSB official, based on personal experience, OTSA estimated that it would take about 800 hours to examine a potentially abusive transaction reflected in the return of a CIC corporation, although LMSB had little data to support the estimate. During fiscal year 2003, IRS began collecting data on examination time that it plans to use for estimating the resources needed to address its abusive shelter workload. The future abusive shelter workload also could increase, at least in the short term. For example, as IRS learns more about the use of shelters, it may identify and list new kinds of transactions as being abusive. As IRS conducts the 98 promoter investigations approved as of June 2003, more investors are likely to be identified, and investor cases could lead to identifying more promoters. In addition, IRS expanded the types of taxpayers subject to disclosure requirements to include taxpayers such as individuals, partnerships, and S corporations. According to IRS officials, disclosures from these types of taxpayers are first due to IRS for filing year 2003 and generally do not yet appear in the OTSA database. In the longer term, what happens to the abusive shelter workload is less certain. To the extent that IRS actions and other factors reduce the size of the abusive shelter problem, IRS might not need to continue devoting as large a percentage of its examination resources to abusive shelters. How much and how soon such a drop in abusive shelter cases may occur is uncertain. We have previously raised questions about IRS’s ability to shift compliance resources as planned. We recently testified that many parties have expressed concern about declining IRS compliance—especially audit—and collection trends for their potential to undermine taxpayers’ motivation to fulfill their tax obligations. Concerned about these trends, IRS has sought more resources, including increased staffing for compliance and collections since fiscal year 2001. 
Despite receiving requested budget increases, staffing levels in key occupations were lower in 2002 than in 2000. These declines occurred for reasons such as unbudgeted expenses consuming budget increases and other operational workload increases. Based on past experience and uncertainty about some expected internal savings, anticipated fiscal year 2004 staff increases might not fully materialize. Thus, if IRS carries through with its intentions to increase resources devoted to abusive shelters, it may not have the desired level of resources in other areas of compliance. Abusive tax shelters represent a potentially significant, although imprecisely understood, loss in tax revenues. IRS developed and is following a broad-based, multifaceted strategy to combat abusive shelters even though it had limited data on the full scope of the problem. IRS’s strategy generally does not contain long-term performance goals and associated measures that can help Congress evaluate IRS’s progress. Although establishing performance goals and measures is inherently difficult because the scope and nature of abusive shelters is elusive, the need for such goals and measures is heightened because IRS is shifting large amounts of examination staff resources to support combating abusive shelters. IRS’s initial decisions on shifting resources might need to be reevaluated as IRS develops better information on the size of the abusive shelter problem and the amount of time it takes to examine abusive shelter cases. We encourage IRS to continue its efforts to obtain a better analytic basis for determining the resources needed to address schemes and shelters—while providing sufficient attention to other tax compliance areas—and to develop goals and measures that it and Congress can use to gauge IRS’s progress. Mr. Chairman, this concludes my prepared statement. I would be happy to respond to any questions you or other Members of the Committee may have at this time. 
For further information on this testimony, please contact Michael Brostek at (202) 512-9110 or [email protected]. Individuals making key contributions to this testimony include Ralph Block, Elizabeth Fan, Amy Friedheim, Lawrence Korb, Signora May, and James Ungvarsky. Schedule K-1s are information returns that link flow-through entities with their income recipients and therefore can be used for various compliance and research purposes, such as the automated underreporter (AUR) program and profiling potential nonfilers. Partnerships, S corporations, trusts, and estates are collectively known as flow-through entities because they can legally pass net income or loss through to their partners, shareholders, and beneficiaries. Flow-through entities are required to provide IRS and each partner, shareholder, or beneficiary with a Schedule K-1 stating the individual share of net income or loss to be reported. These individuals are then responsible for reporting this income or loss on their individual income tax returns and paying any applicable tax. According to IRS, in tax year 2001 over 9 million flow-through entities reported passing through almost $1 trillion to approximately 24 million partners, shareholders, or beneficiaries. IRS research efforts suggest that 6 to 15 percent of the K-1s attached to flow-through returns are currently being omitted from beneficiary, partner, and shareholder returns. To better detect such noncompliance, IRS began transcribing nonelectronically submitted Schedule K-1s for tax year 2000 at a cost of about $20 million. In 2001, IRS added Schedule K-1 document matching to its AUR program. It began matching Schedule K-1 data to individual tax returns to identify taxpayers who had underreported flow-through income and had consequently underpaid their taxes. 
IRS estimated that K-1 matching program costs would be about $23.5 million total for both K-1 transcription and AUR program operations and that program yield would be $36 million in direct tax assessed. IRS also estimated that if voluntary compliance improved one percent due to the matching program, approximately $1.23 billion of additional tax would be generated annually. In the first year of the program, IRS issued about 69,000 notices to taxpayers and assessed about $29 million in additional taxes directly attributable to Schedule K-1 underreporting. GAO estimates that when program assessments are compared to the costs of the program’s AUR operations, the return per dollar of the K-1 matching program was about $9.31. If the cost of transcribing the K-1 data is included, the return per dollar decreases to about $1.25. Both of these assessment-to-cost ratios are substantially lower than that for the AUR program as a whole. The AUR program returned about $25 for every dollar spent in tax year 2000. IRS has also used Schedule K-1 data to determine characteristics of potentially noncompliant taxpayer populations. Its preliminary profiling efforts identified over 227,000 business entities with almost $64 billion in Schedule K-1 income for tax year 2000 that potentially did not file tax returns. As of September 2003, IRS had begun to discuss ways of analyzing these cases to determine whether these businesses were required, but failed, to file returns, or whether inaccuracies in Schedule K-1 data produced false nonfiler leads. In addition, in response to a Treasury Inspector General for Tax Administration report issued in September 2002,the agency has begun to research the effectiveness of using information returns, such as the K-1, to identify business nonfilers. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. 
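The assessment-to-cost ratios above can be reproduced with simple arithmetic. The AUR operations cost is not stated directly in the text, so the roughly $3.1 million figure below is an assumption backed out of the reported $9.31 ratio; the $29 million in assessments and $20 million transcription cost come from the reported totals.

```python
# Sketch of the K-1 matching program's assessment-to-cost ratios.
# AUR_OPS_COST is an assumed figure implied by the reported $9.31
# return per dollar; it is not stated directly in the testimony.
ASSESSMENTS = 29.0          # $ millions assessed from K-1 underreporting
TRANSCRIPTION_COST = 20.0   # $ millions to transcribe paper K-1s
AUR_OPS_COST = 3.115        # $ millions (assumed, backed out of the ratio)

ops_only_ratio = ASSESSMENTS / AUR_OPS_COST
full_cost_ratio = ASSESSMENTS / (AUR_OPS_COST + TRANSCRIPTION_COST)

print(f"Return per dollar (AUR operations only): ${ops_only_ratio:.2f}")   # $9.31
print(f"Return per dollar (including transcription): ${full_cost_ratio:.2f}")  # $1.25
```

Both figures sit well below the roughly $25 returned per dollar for the AUR program as a whole, which is why the transcription cost dominates the program's economics.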
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. The General Accounting Office, the audit, evaluation and investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO’s commitment to good government is reflected in its core values of accountability, integrity, and reliability.

Recent scandals involving corporations, company executives, and accounting, law, and investment banking firms heightened awareness of abusive tax shelters and highlighted the importance of the Department of the Treasury and the Internal Revenue Service (IRS) addressing them. 
During 1999, Treasury issued a report indicating that abusive shelters were a large and growing problem, involving billions of dollars of tax reductions. Treasury was concerned that abusive shelters could ultimately undermine the integrity of the voluntary compliance tax system. GAO's statement today is based on work done at the request of the Chairman and the Ranking Minority Member of the Senate Committee on Finance to examine IRS's strategy for dealing with abusive tax shelters. In reporting on abusive shelters, GAO is describing (1) their nature and scope; (2) IRS's strategy and enforcement mechanisms to combat them and the performance goals and measures IRS uses to track its major effort in that area; and (3) the decision-making process IRS used and the plans it has to devote more resources to addressing abusive shelters. By their nature, abusive tax shelters are varied, complex, and difficult to detect and measure. Abusive shelters manipulate many parts of the tax code or regulations and may involve steps to hide the transaction within a tax return. In recent years, IRS has been accumulating information about them and, although it does not have a reliable measure of the size of the abusive shelter problem, has come to believe that abusive shelters deserve substantially increased attention. IRS continues to gather more information to better define the scope of the problem and has data sources, all with their own limitations, that suggest abusive tax shelters total tens of billions of dollars of potential tax losses over about a decade. 
IRS's broad-based strategy for addressing abusive shelters included: (1) targeting promoters to head off the proliferation of shelters; (2) making efforts to deter, detect, and resolve abuse; (3) offering inducements to individuals and businesses to disclose their use of questionable tax practices; and (4) using performance indicators to measure outputs and some outcomes, with the intention of continuing down the path it has started and developing long-term performance goals and measures linked to those goals. Without these latter elements, Congress would find gauging IRS's progress difficult. In allocating resources to shelters, IRS used a systematic decision-making process that relied on admittedly limited information. It planned to shift significant resources in fiscal years 2003 and 2004 to address abusive shelters but faces challenges, especially in the near term, due to a growing workload and limited information on how long it takes to examine shelter cases. IRS's understanding of how many staff will be needed to address the problem, and over what period, will continue to evolve as it gains a better understanding of the problem's scope.
The Department of the Interior (Interior), created by the Congress in 1849, oversees and manages the nation’s publicly owned natural resources, including parks, wildlife habitat, and crude oil and natural gas resources on over 500 million acres onshore and in the waters of the Outer Continental Shelf. In this capacity, Interior is authorized to lease federal oil and gas resources and to collect the royalties associated with their production. Onshore, Interior’s Bureau of Land Management is responsible for leasing federal oil and natural gas resources, whereas offshore, MMS has leasing authority. To lease lands or waters for oil and gas exploration, companies generally must first pay the federal government a sum of money that is determined through a competitive auction. This money is called a bonus bid. After the lease is awarded and production begins, the companies must also pay royalties to MMS based on a percentage of the cash value of the oil and natural gas produced and sold. Royalty rates for onshore leases are generally 12 and a half percent, whereas offshore they range from 12 and a half percent for water depths greater than 400 meters to 16 and two-thirds percent for water depths less than 400 meters. However, the Secretary of the Interior recently announced plans to raise the royalty rate to 16 and two-thirds percent for most future leases issued in waters deeper than 400 meters. MMS also has the option of taking a percentage of the actual oil and natural gas produced, referred to as “taking royalties in kind,” and selling it or using it for other purposes, such as filling the nation’s Strategic Petroleum Reserve. Based on our work to date, the Deep Water Royalty Relief Act (DWRRA) will likely cost the federal government billions of dollars in forgone royalties, but precise estimates of the costs are not possible at this time for several reasons. 
First, the failure of MMS to include price thresholds in the 1998 and 1999 leases and current attempts to renegotiate these leases have created uncertainty about which leases will ultimately receive relief. Second, a recent lawsuit is questioning whether MMS has the authority to set price thresholds for the leases issued from 1996 through 2000. The outcome of this litigation could dramatically affect the amount of forgone revenues. Finally, assessing the ultimate fiscal impact of royalty relief is an inherently complex task, involving uncertainty about future production and prices. In October 2004, MMS preliminarily estimated that the total costs of royalty relief for deep water leases issued under the act could be as high as $80 billion, depending on which leases ultimately received relief. MMS made assumptions about several conditions when generating this estimate, and these assumptions need to be updated in 2007 to more accurately portray potential losses. In addition, the costs of forgone royalties need to be measured against any potential benefits of royalty relief, including accelerated drilling and production of oil and gas resources, increased oil and gas production, and increased fees that companies are willing to pay through bonus bids for these leases. The Congress passed DWRRA in 1995, when oil and gas prices were low and production was declining both onshore and in the shallow waters of the Gulf of Mexico. The act contains provisions to encourage the exploration and development of oil and gas resources in waters deeper than 200 meters lying largely in the western and central planning areas of the Gulf of Mexico. The act mandates that royalty relief apply to leases issued in these waters during the five years following the act’s passage—from November 28, 1995 through November 28, 2000. As a safeguard against giving away all royalties, two mechanisms are commonly used to ensure that royalty relief is limited and available only under certain conditions. 
The first mechanism limits royalty relief to specified volumes of oil and gas production called “royalty suspension volumes,” which are dependent upon water depth. Royalty suspension volumes establish production thresholds above which royalty relief no longer applies. That is, once total production for a lease reaches the suspension volume, the lessee must begin paying royalties. Royalty suspension volumes are expressed in barrels of oil equivalent, which is a term that allows oil and gas companies to combine oil and gas volumes into a single measure, based on the relative amounts of energy they contain. The royalty suspension volumes applicable under DWRRA are as follows: (1) not less than 17.5 million barrels of oil equivalent for leases in waters of 200 to 400 meters, (2) not less than 52.5 million barrels of oil equivalent for leases in waters of 400 to 800 meters, and (3) not less than 87.5 million barrels of oil equivalent for leases in waters greater than 800 meters. Hence, there are incentives to drill in increasingly deeper waters. Before 1994, companies drilled few wells in waters deeper than 500 meters. MMS attributes additional leasing and drilling in deep waters to the passage of these incentives but also cites other factors for increased activity, including improved three-dimensional seismic surveys, some key deep water discoveries, high deep water production rates, and the evolution of deep water development technology. After the passage of DWRRA, uncertainty existed as to how royalty suspension volumes would apply. Interior officials employed with the department when DWRRA was passed said that they recommended to the Congress that the act should state that royalty suspension volumes apply to the production volume from an entire field. However, oil and gas companies paying royalties under the act interpreted the royalty suspension volumes as applying to individual leases within a field. 
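As a sketch, the tiered minimums above can be written as a lookup keyed on water depth. The function name and the treatment of the exact 400- and 800-meter boundaries are illustrative assumptions; the act specifies only the ranges and "not less than" volumes quoted in the text.

```python
def minimum_suspension_volume(water_depth_m: float) -> float:
    """Minimum royalty suspension volume, in million barrels of oil
    equivalent, for a DWRRA lease at the given water depth, per the
    tiers described in the text. Boundary handling at exactly 400 m
    and 800 m is an illustrative assumption."""
    if water_depth_m < 200:
        raise ValueError("DWRRA relief applies only in waters deeper than 200 meters")
    if water_depth_m <= 400:
        return 17.5
    if water_depth_m <= 800:
        return 52.5
    return 87.5
```

Because the suspended volume grows with depth, a lease in 1,000 meters of water can produce five times as much royalty-free oil equivalent as one in 300 meters, which is the incentive structure the text describes.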
This is important because an oil and gas field commonly consists of more than one lease, meaning that if royalty suspension volumes are set for each lease within a field rather than for the entire field, companies are likely to owe fewer royalties. For example, if a royalty suspension volume is based on an entire field composed of three leases, a company producing oil and gas from a 210 million-barrel oil field—where the royalty suspension volume is set at 100 million—would be obligated to pay royalties on 110 million barrels (210 minus 100). However, if the same 210 million-barrel field had the same suspension volume of 100 million barrels applied to each of the three leases, and 70 million barrels were produced from each of the three leases, no royalties would be due because no lease would have exceeded its royalty suspension volume. After passage of the act, MMS implemented royalty relief on a field basis and was sued by the industry. Interior lost the case in the Fifth Circuit Court of Appeals. In October 2004, MMS estimated that this decision will cost the federal government up to $10 billion in forgone future royalty revenues. A second mechanism that can be used to limit royalty relief and safeguard against giving away all royalties is the price threshold. A price threshold is the price of oil or gas above which royalty relief no longer applies. Hence, royalty relief is allowed only so long as oil and gas prices remain below a certain specified price. At the time of the passage of DWRRA, oil and gas prices were low—West Texas Intermediate, a key benchmark for domestic oil, was about $18 per barrel, and the average U.S. wellhead price for natural gas was about $1.60 per million British thermal units. In an attempt to balance the desire to encourage production and ensure a fair return to the American people, MMS relied on a provision in the act which states that royalties may be suspended based on the price of production from the lease. 
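The field-basis versus lease-basis arithmetic in the 210 million-barrel example can be sketched as follows; the function names are illustrative.

```python
def taxable_barrels_field_basis(field_total: float, suspension_volume: float) -> float:
    # Royalties are owed only on production above the field-wide
    # suspension volume.
    return max(0, field_total - suspension_volume)

def taxable_barrels_lease_basis(lease_totals: list, suspension_volume: float) -> float:
    # Each lease gets its own full suspension volume before any
    # royalties apply.
    return sum(max(0, lease - suspension_volume) for lease in lease_totals)

# The example from the text: a 210 million-barrel field, suspension
# volume of 100 million, split across three leases of 70 million each.
print(taxable_barrels_field_basis(210, 100))           # 110 (royalties owed)
print(taxable_barrels_lease_basis([70, 70, 70], 100))  # 0 (no royalties owed)
```

The gap between 110 million royalty-bearing barrels and zero is why the Fifth Circuit's lease-basis ruling was estimated to cost up to $10 billion.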
MMS then established price thresholds of $28 per barrel for oil and $3.50 per million British thermal units for gas, with adjustments each year since 1994 for inflation, that were to be applied to leases issued under DWRRA. As with the application of royalty suspension volumes, problems arose with the application of these price thresholds. From 1996 through 2000— the five years after passage of DWRRA—MMS issued 3,401 leases under authority of the act. MMS included price thresholds in 2,370 leases issued in 1996, 1997, and 2000 but did not include price thresholds in 1,031 leases issued in 1998 and 1999. This failure to include price thresholds has been the subject of congressional hearings and investigations by Interior’s Office of the Inspector General. In October 2004, MMS estimated that the cost of not including price thresholds on the 1998 and 1999 leases could be as high as $10 billion. MMS also estimated that through 2006, about $1 billion had already been lost. To stem further losses, MMS is currently attempting to renegotiate the leases issued in 1998 and 1999 with the oil and gas companies that hold them. To date, MMS has announced successful negotiations with five of the companies holding these leases and has either not negotiated or not successfully negotiated with 50 other companies. In addition to forgone royalty revenues from leases issued in 1998 and 1999, leases issued under DWRRA in the other three years—1996, 1997, and 2000—are subject to losing royalty revenues due to legal challenges regarding price thresholds. In 2006, Kerr McGee Corporation sued MMS over the application of price thresholds to leases issued between November 28, 1995 and November 28, 2000, claiming that the act did not authorize Interior to apply price thresholds to those leases. MMS estimated in October 2004 that if price thresholds are disallowed for the leases it issued in 1996, 1997, and 2000, an additional $60 billion in royalty revenue could be lost. 
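Taken together, the two safeguards amount to a simple eligibility test for any given period: relief lapses once cumulative production reaches the suspension volume, or once prices exceed the threshold on leases that have one. The sketch below is a simplification; actual determinations involve annual inflation adjustments to the thresholds and period-by-period accounting, and modeling the 1998 and 1999 leases as having no threshold (`None`) is an illustrative assumption.

```python
from typing import Optional

def relief_applies(price: float,
                   price_threshold: Optional[float],
                   cumulative_boe: float,
                   suspension_volume: float) -> bool:
    """Whether royalty relief still applies for a given period.

    Relief lapses once cumulative production (million barrels of oil
    equivalent) reaches the suspension volume, or when the market
    price exceeds the threshold on leases that have one. A threshold
    of None models the 1998-99 leases issued without price thresholds
    (an illustrative simplification)."""
    if cumulative_boe >= suspension_volume:
        return False
    if price_threshold is not None and price > price_threshold:
        return False
    return True
```

Under this sketch, a 1998-99 lease (no threshold) keeps its relief at any price until its suspension volume is exhausted, which is the source of the forgone revenues discussed above.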
Trying to predict the fiscal impacts of royalty relief is a complex and time- consuming task involving considerable uncertainty. We reviewed MMS’s 2004 estimates and concluded that they had followed standard engineering and financial practices and had generated the estimates in good faith. However, any analysis of forgone royalties involves estimating how much oil and gas will be produced in the future, when it will be produced, and at what prices. While there are standard engineering techniques for predicting oil and gas volumes that will eventually be recovered from a lease that is already producing, there is always some level of uncertainty involved. Predicting how much oil and gas will be recovered from leases that are capable of producing but not yet connected to production infrastructure is more challenging but certainly possible. Predicting production from leases not yet drilled is the most challenging aspect of such an analysis, but there are standard geological, engineering, and statistical methods that can shed light on what reasonably could be expected from the inventory of 1996 through 2000 leases. Overall, the volume of oil and gas that will ultimately be produced is highly dependent upon price and technology, with higher prices and better technology inducing greater exploration, and ultimately production, from the remaining leases. Future oil prices, however, are highly uncertain, as witnessed by the rapidly increasing oil and gas prices over the past several years. It is therefore prudent to assess anticipated royalty losses using a range of oil and gas prices rather than a single assumed price, as was used in the MMS estimate. Given the degree of uncertainty in predicting future royalty revenues from deepwater oil and gas leases, we are using current data to carefully examine MMS’s 2004 estimate that up to $80 billion in future royalty revenues could be lost. 
There are now two additional years of production data for these leases, which will greatly improve the accuracy of estimating future production and its timing. We are also examining the impact of several variables, including changing oil and gas prices, revised estimates of the amount of oil and gas that these leases were originally expected to produce, the availability of deep water rigs to drill untested leases, and the present value of royalty payments. To fully evaluate the impacts of royalty relief, one must consider the potential benefits in addition to the costs of lost royalty revenue. For example, a potential benefit of royalty relief is that it may encourage oil and gas exploration that might not otherwise occur. Successful exploration could result in the production of additional oil and gas, which would benefit the country by increasing domestic supplies and creating employment. While GAO has not assessed the potential benefits of royalty relief, others have, including the Congressional Budget Office (CBO) in 1994, and consultants under contract with MMS in 2004. The CBO analysis was theoretical and forward-looking and concluded that the likely impact of royalty relief on new production would be very small and that the overall impact on federal royalty revenues was also likely to be small. However, CBO cautioned that the government could experience significant net losses if royalty relief was granted on leases that would have produced without the relief. The consultant’s 2004 study stated that potential benefits could include increases in the number of leases sold, increases in the number of wells drilled and fields discovered, and increases in bonus bids—the amount of money that companies are willing to pay the federal government for acquiring leases. However, questions remain about the extent to which such benefits would offset the cost of lost royalty revenues. 
Although leases are no longer issued under the Deep Water Royalty Relief Act of 1995, royalty relief can be provided under two existing authorities: (1) the Secretary of the Interior’s discretionary authority and (2) the Energy Policy Act of 2005. The Outer Continental Shelf Lands Act of 1953, as amended, granted the Secretary of the Interior the discretionary authority to reduce or eliminate royalties for leases issued in the Gulf of Mexico in order to promote increased production. The Secretary’s exercise of this authority can effectively relieve the oil and gas producer from paying royalties. MMS administers several royalty relief programs in the Gulf of Mexico under this discretionary authority. MMS intends for these discretionary programs to provide royalty relief for leases in deep waters that were issued after 2000, deep gas wells located in shallow waters, wells nearing the end of their productive lives, and special cases not covered by other programs. The Congress also authorized additional royalty relief under the Energy Policy Act of 2005, which mandates relief for leases issued in the Gulf of Mexico during the five years following the act’s passage, provides relief for some wells that would not have previously qualified for royalty relief, and addresses relief in certain areas of Alaska. Under discretionary authority, MMS administers a deep-water royalty relief program for leases that it issued after 2000. This program is similar to the program that DWRRA mandated for leases issued during the five years following its passage (1996 through 2000) in that royalty relief is dependent upon water depth and applicable royalty suspension volumes. However, this current program is implemented solely under the discretion of MMS on a sale-by-sale basis. Unlike under DWRRA, the price thresholds and the water depths to which royalty relief applies vary somewhat by lease sale.
For example, price thresholds for leases issued in 2001 were $28 per barrel for oil and $3.50 per million British thermal units for natural gas, with adjustments for inflation since 2000. As of March 2006, MMS reported that it issued 1,897 leases with royalty relief under this discretionary authority, but only 9 of these leases were producing. To encourage the drilling of deep gas wells in the shallow waters of the Gulf of Mexico, MMS implements another program, the “deep gas in shallow water” program, under final regulations it promulgated in January 2004. MMS initiated this program to encourage additional production after noting that gas production had been steadily declining since 1997. To qualify for royalty relief, wells must be drilled in less than 200 meters of water and must produce gas from intervals below 15,000 feet. The program exempts 15 to 25 billion cubic feet of gas per well from royalties. According to MMS’s analysis, these gas volumes approximate the smallest reservoirs that could be economically developed without the benefit of an existing platform and under full royalty rates. In 2001, MMS reported that the average size of 95 percent of the gas reservoirs below 15,000 feet was 15.7 billion cubic feet, effectively making nearly all of this production exempt from royalties had it been eligible for royalty relief at that time. This program also specifies a price threshold for natural gas of $9.91 per million British thermal units in 2006, substantially exceeding the average NYMEX futures price of $6.98 for 2006, and ensuring that all gas production is exempt from royalties in 2006. Finally, MMS administers two additional royalty relief programs in the Gulf of Mexico under its discretionary authority. One program applies to leases nearing the end of their productive lives. MMS intends that its provisions will encourage the production of low volumes of oil and gas that would not be economical without royalty relief.
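The qualification rules and price test for the “deep gas in shallow water” program, as described above, reduce to a few comparisons. The sketch below is a paraphrase of those criteria, not the text of MMS's January 2004 regulations, and the function names are invented for illustration.

```python
# Minimal sketch of the "deep gas in shallow water" criteria as described
# in the report; the authoritative tests are in MMS's 2004 regulations.

def qualifies_for_deep_gas_relief(water_depth_m, interval_depth_ft):
    """A well qualifies if it is drilled in less than 200 meters of water
    and produces gas from intervals below 15,000 feet."""
    return water_depth_m < 200 and interval_depth_ft > 15_000

def gas_is_royalty_free(market_price, price_threshold=9.91):
    """Relief applies while the market price stays below the threshold
    ($9.91 per million Btu in 2006, versus an average 2006 NYMEX futures
    price of $6.98, so all qualifying 2006 production was exempt)."""
    return market_price < price_threshold

print(qualifies_for_deep_gas_relief(water_depth_m=150, interval_depth_ft=16_000))
print(gas_is_royalty_free(6.98))
```

Because the threshold sits well above the prevailing futures price, the price test never bites in 2006, which is the report's point about all qualifying production being royalty-free that year.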
Lessees must apply for this program under existing regulations. MMS administers another program for special situations not covered by the other programs. Lessees who believe that other more formal programs do not provide adequate encouragement to increase production or development can request royalty relief by making their case and submitting the appropriate data. As of March 2006, no leases were receiving royalty relief under the “end of productive life” program, and only three leases were receiving royalty relief under the “special situations” program. The Congress authorized additional royalty relief under the Energy Policy Act of 2005. Royalty relief provisions are contained in three specific sections of the act, which in effect: (1) mandate royalty relief for deep water leases sold in the Gulf of Mexico during the five years following passage of the act, (2) extend royalty relief in the Gulf of Mexico to deep gas produced in waters of more than 200 meters and less than 400 meters, and (3) specify that royalty relief also applies to certain areas off the shore of Alaska. In the first two situations, the act specifies the amount of oil and/or gas production that would qualify for royalty relief and provides that the Secretary may make royalty relief dependent upon market prices. Section 345 of the Energy Policy Act of 2005 mandates royalty relief for leases located in deep waters in the central and western Gulf of Mexico sold during the five years after the act’s passage. Similar to provisions in DWRRA, specific amounts of oil and gas are exempt from royalties due to royalty suspension volumes corresponding to the depth of water in which the leases are located. However, production volumes are smaller than those authorized under DWRRA, and this specific section of the Energy Policy Act clearly states that the Secretary may place limitations on royalty relief based on market prices.
For the three sales it has conducted since passage of the act, MMS included price thresholds establishing the prices above which royalty relief would no longer apply. These price thresholds were $39 per barrel for oil and $6.50 per million British thermal units for gas, adjusted upward for inflation that has occurred since 2004. The royalty-free amounts, referred to as royalty suspension volumes, are as follows: 5 million barrels of oil equivalent per lease between 400 and 800 meters; 9 million barrels of oil equivalent per lease between 800 and 1,600 meters; 12 million barrels of oil equivalent per lease between 1,600 and 2,000 meters; and 16 million barrels of oil equivalent per lease in water greater than 2,000 meters. MMS has already issued 1,105 leases under this section of the act. Section 344 of the Energy Policy Act of 2005 contains provisions that authorize royalty relief for deep gas wells in additional waters of the Gulf of Mexico that effectively expand the existing royalty-relief program for “deep gas in shallow water” that MMS administers under pre-existing regulations. The existing program has now expanded from waters less than 200 meters to waters less than 400 meters. A provision within the act exempts from royalties gas that is produced from intervals in a well below 15,000 feet so long as the well is located in waters of the specified depth. Although the act does not specifically cite the amount of gas to be exempt from royalties, it provides that this amount should not be less than the existing program, which currently ranges from 15 to 25 billion cubic feet. The act also contains an additional incentive that could encourage deeper drilling—royalty relief is authorized on not less than 35 billion cubic feet of gas produced from intervals in wells greater than 20,000 feet deep. The act also states that the Secretary may place limitations on royalty relief based on market prices.
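The depth-tiered suspension volumes in section 345 amount to a simple lookup table. The sketch below restates the four tiers from the act's text; how boundary depths of exactly 800, 1,600, or 2,000 meters are treated is an assumption here, since the statutory ranges as quoted do not resolve them.

```python
# Depth-tiered royalty suspension volumes for section 345 sales, restated
# from the report (barrels of oil equivalent per lease). Boundary depths
# are assigned to the shallower tier here; that choice is an assumption.

def suspension_volume_boe(water_depth_m):
    """Royalty-free volume per lease under section 345, by water depth."""
    if water_depth_m > 2000:
        return 16_000_000
    if water_depth_m > 1600:
        return 12_000_000
    if water_depth_m > 800:
        return 9_000_000
    if water_depth_m >= 400:
        return 5_000_000
    return 0  # leases in shallower water fall outside this section's mandate

for depth in (500, 1200, 1800, 2500):
    print(f"{depth} m -> {suspension_volume_boe(depth):,} boe royalty-free")
```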
Finally, the Energy Policy Act of 2005 contains provisions addressing royalty relief in Alaska that MMS is already providing. Section 346 of the act amends the Outer Continental Shelf Lands Act of 1953 by authorizing royalty relief for oil and gas produced off the shore of Alaska. MMS has previously included royalty relief provisions within notices for sales in the Beaufort Sea of Alaska in 2003 and 2005. All of these sales offered royalty relief for anywhere from 10 million to 45 million barrels of oil, depending on the size of the lease and the depth of water. Whether leases will be eligible for royalty relief, and the amount of that relief, are also dependent on the price of oil. There currently is no production in the Beaufort Sea. Although there have been no sales to date under this provision of the act, MMS is proposing royalty relief for a sale in the Beaufort Sea in 2007. Section 347 of the Energy Policy Act also states that the Secretary may reduce the royalty on leases within the Naval Petroleum Reserve of Alaska in order to encourage the greatest ultimate recovery of oil or gas or in the interest of conservation. Although this authority already exists under the Naval Petroleum Reserves Production Act of 1976, as amended, the Secretary must now consult with the State of Alaska, the North Slope Borough, and any Regional Corporation whose lands may be affected. In order to meet U.S. energy demands, environmentally responsible development of our nation’s oil and gas resources should be part of any national energy plan. Development, however, should not mean that the American people forgo a reasonable rate of return for the extraction and sale of these resources, especially in light of the current and long-range fiscal challenges facing our nation, high oil and gas prices, and record industry profits.
Striking a balance between encouraging domestic production in order to meet the nation’s increasing energy needs and ensuring a fair rate of return for the American people will be challenging. Given the record of legal challenges and mistakes made in implementing royalty relief to date, we believe this balance must be struck in careful consideration of both the costs and benefits of all royalty relief. As the Congress continues its oversight of these important issues, GAO looks forward to supporting its efforts with additional information and analysis on royalty relief and related issues. Mr. Chairman, this concludes my prepared statement. I would be pleased to respond to any questions that you or other Members of the Committee may have at this time. For further information about this testimony, please contact me, Mark Gaffigan, at 202-512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Contributors to this testimony include Dan Haas, Assistant Director; Ron Belak; John Delicath; Glenn Fischer; Frank Rusco; and Barbara Timmerman. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

Oil and gas production from federal lands and waters is vital to meeting the nation's energy needs. As such, oil and gas companies lease federal lands and waters and pay royalties to the federal government based on a percentage of the oil and gas that they produce. The Minerals Management Service (MMS), an agency in the Department of the Interior, is responsible for collecting royalties from these leases.
In order to promote oil and gas production, the federal government at times and in specific cases has provided "royalty relief," waiving or reducing the royalties that companies must pay. However, as production from these leases has grown and oil and gas prices have risen since a major 1995 royalty relief act, questions have emerged about the financial impacts of royalty relief. Based on our work to date, GAO's statement addresses (1) the likely fiscal impacts of royalty relief on leases issued under the Outer Continental Shelf Deep Water Royalty Relief Act of 1995 and (2) other authority for granting royalty relief that could further impact future royalty revenue. To address these issues, our ongoing work has included, among other things, analyses of key production data maintained by MMS; and reviews of appropriate portions of the Outer Continental Shelf Deep Water Royalty Relief Act of 1995, the Energy Policy Act of 2005, and Interior's regulations on royalty relief. While precise estimates remain elusive at this time, our work to date shows that royalty relief under the Outer Continental Shelf Deep Water Royalty Relief Act of 1995 will likely cost billions of dollars in forgone royalty revenue--at least $1 billion of which has already been lost. In October 2004, MMS estimated that forgone royalties on deep water leases issued under the act from 1996 through 2000 could be as high as $80 billion. However, there is much uncertainty in these estimates. This uncertainty stems from ongoing legal challenges and other factors that make it unclear how many leases will ultimately receive royalty relief and the inherent complexity in forecasting future royalties. We are currently assessing MMS's estimate in light of changing oil and gas prices, revised estimates of future oil and gas production, and other factors.
Additional royalty relief that can further impact future royalty revenues is currently provided under the Secretary of the Interior's discretionary authority and the Energy Policy Act of 2005. Discretionary programs include royalty relief for certain deep water leases issued after 2000, certain deep gas wells drilled in shallow waters, and wells nearing the end of their productive lives. The Energy Policy Act of 2005 mandates relief for leases issued in the Gulf of Mexico during the five years following the act's passage, provides relief for some gas wells that would not have previously qualified for royalty relief, and addresses relief in certain areas of Alaska.
Sound financial management operations are critical to ensuring that DOD effectively manages its contracts and that funds are disbursed properly. DOD has recognized that it has serious, long-standing problems in correctly disbursing billions of dollars in payments and providing reliable financial information. In January 1991, DOD created DFAS to strengthen DOD’s financial management operations by standardizing, consolidating, and streamlining finance and accounting policies, procedures, and systems. But efforts to improve financial management through DFAS have yet to be successful, and much remains to be done to improve its performance. We have reported on a number of issues related to DOD’s financial management problems. A list of our recent products is included at the end of this report. A dramatic indicator that sound financial management operations are not in place is contractors returning overpayments to a paying office that is unaware the overpayments were made. In 1994, we reported on such contract overpayments being returned to the DFAS Columbus Center. Our examination of $392 million of the $751 million in checks processed by the DFAS Columbus Center during a 6-month period in 1993 disclosed that about $305 million, or about 78 percent, represented overpayments by the government. The overpayments principally occurred because DOD paid invoices without recovering previous progress payments or because it made duplicate payments. Underscoring our concern about such overpayments is the fact that the majority of the overpayments we examined were detected by contractors, rather than by DFAS. Our August 1994 report identified unresolved payment discrepancies, both overpayments and underpayments, at nine contractor locations and raised questions as to whether such discrepancies were widespread. As a result of that report, you requested that we obtain data on payment discrepancies from selected large and small contractors.
The methodology used to conduct the data request and analyze the contractors’ responses is explained in appendix I, and a copy of the data request is shown in appendix II. In response to our data request, 374 business units of large and small contractors reported overpayments and underpayments using their accounting records. The business units responding to our request reported payment discrepancies of $857.4 million—overpayments of $231.5 million and underpayments of $625.9 million. Our analysis of the responses from selected business units showed that (1) the reported information had been drawn from their accounting records and (2) most of the units applied some judgments to extract data from their accounting records. The judgments applied in compiling the information varied among the contractors. For instance, some contractors did not report what they considered to be low-dollar invoices, such as amounts less than $10,000 or $50,000. Some contractors did not report unpaid (as opposed to partially paid) invoices and vouchers over 30 days old as underpayments. One contractor that reported $15 million in overpayments as outstanding on July 31, 1994, did not include a $7.1 million overpayment that it included in a liability account. The overall effect of the anomalies we identified in the contractors’ responses was that the reported overpayments and underpayments were less than the amounts recorded in the contractors’ accounts. Both overpayments and underpayments result in unnecessary costs to the government. Overpayments increase the government’s interest costs because funds are needlessly disbursed. Contractors are not assessed interest on overpayments until 30 days after a demand for repayment is made. For underpayments, the Prompt Payment Act requires DOD to pay an interest penalty for an invoice payment made after the due date or 30 days after the presentation of a valid invoice. 
The Center’s late payment penalties reported for fiscal year 1994 were about $5 million. Many contractors notified the government of payment errors but did not always return overpayments until instructed to do so. Table 1 shows that 343 of the 374 business units (92 percent) reported having a policy or practice of notifying the government paying office and/or government contracting officers when payment errors are encountered. Because of the poor state of DOD’s accounting records and control systems, overpayments might never be recovered if contractors do not report them. Our review of selected payment discrepancies showed that errors in the automated payment records cause payment errors. Center personnel, in accordance with payment procedures, pay contractor invoices as if the payment information in the system were correct, even though the information in the system is known to have a high error rate. A March 15, 1994, audit report by the DOD Inspector General reported obligation errors in 23 percent of the contracts examined and accounting data errors in 39 percent of the contracts examined. For overpayments, the most frequent cause of error (45 percent) identified by Columbus Center analysis is the incorrect recovery (liquidation) of progress payments. Progress payments are recovered in accordance with contract financing provisions when paying invoices for delivered items. Correct liquidation requires accurate records of the progress payments made regarding the delivered items. If the automated payment records are in error, the invoice will likely be paid in error unless adequate research is performed before making the payment. Research results in the avoidance of some overpayments. But research is effective only if proper underlying records are kept, and it is time-consuming and costly. 
In addition to being time-consuming and costly, research could delay payments beyond the time provisions of the Prompt Payment Act, and the Center would then have to pay the contractor late payment interest. Balancing the need to do the research necessary for an accurate payment against the cost of late payment interest is difficult for Center personnel. The Center accounts for the cost of interest on late payments but neither accounts for nor reports on the government’s cost of money to finance contract overpayments. The cost to the government of poor record-keeping and inadequate research can be significant. For example, a $7.7-million overpayment was made to a contractor because the Center’s progress payments had been incorrectly recorded. The error was discovered with research after the contractor notified the Center that it had been overpaid. The overpayment, outstanding for over 2 years and costing the government about $820,000 in interest, could have been avoided if proper records had been kept and adequate research had been done before the contractor was paid. In another case, a $7.5-million overpayment was outstanding for 8 years and might not have been recovered if the contractor had not notified DFAS of the overpayment. The records in this case were so error prone that contractor assistance appeared essential to recovering the overpayment. In this case, research was either not done or was totally ineffective. Researching can be a good control technique when used on large and complex contracts, but it is not a substitute for good record-keeping. The Federal Acquisition Regulation (FAR) provides for the prompt recovery of contract debt originating from overpayments. The FAR requires that a demand letter be issued as soon as the amount due the government is computed. The regulation also requires that the responsible official establish a control record for tracking the efforts to determine and collect the debt.
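A rough check of the interest figures cited above can be done with simple interest. The roughly 5.3 percent annual rate below is inferred from the $7.7 million overpayment costing about $820,000 over 2 years; the report does not state the rate the government used, so treat it as an assumption.

```python
# Back-of-the-envelope check of the government's cost of money on an
# unrecovered overpayment, using simple interest. The 5.3 percent annual
# rate is inferred from the report's example, not stated in the report.

def interest_cost(overpayment, years_outstanding, annual_rate=0.053):
    """Approximate interest cost of leaving an overpayment unrecovered."""
    return overpayment * annual_rate * years_outstanding

cost = interest_cost(7_700_000, 2)
print(f"~${cost:,.0f}")  # on the order of the $820,000 the report cites
```

The same arithmetic scales to the aggregate case discussed later: larger balances outstanding for longer periods compound into materially larger interest losses.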
If not recovered promptly, contract debts increase the government’s cost of funds and unnecessarily expose the government to losses because some debts become uncollectible. The Columbus Center did not comply with the FAR. It did not collect overpayments promptly when contractors and auditors reported them. The Columbus Center did not promptly recover identified overpayments because it did not (1) follow the Center’s policy of requesting contractors to immediately return identified overpayments pending a reconciliation and (2) record and track actions on reported overpayments. To evaluate the Center’s recovery actions, we researched overpayments of about $84.2 million on eight contracts for which contractors had reported large overpayments. Data on these contract overpayments are shown in table 2 and discussed in more detail in appendix III. Although the Center has had a written policy since November 5, 1993, to ask contractors to return overpayments immediately, contractor relations personnel told us they were not aware of the policy and were not asking contractors to return overpayments. For the overpayments we examined, the contractors were not asked to immediately return the overpayments after reporting them to contractor relations personnel at the Center. None of the overpayments we examined became uncollectible because of these delays; however, the delays were costly because the overpayments were outstanding for extended periods. We estimate that delays in recovering the $84.2 million in overpayments cost the government about $10.6 million in interest. Our search for records of actions taken to recover the selected overpayments showed that generally the Center had incomplete records of notifications by contractors and no record to show what collection actions, if any, were taken. In addition to being notified by contractors, the Center also identifies payment discrepancies by auditing or reconciling contracts. 
As of December 1994, the Center had identified 14,840 contracts that required review or reconciliation. About 4,000 contracts were determined to require complete reconciliation. These reviews and reconciliations will be completed by either Center personnel or a public accounting firm hired for that purpose. Reconciliations completed by the firm are returned to the Center for action—for example, the issuance of demand letters for the amount due the government. A cumulative report of the firm’s activities from October 1990 through April 1995 showed the following:
- About 4,723 contracts had been reconciled by the firm.
- About $76 billion in accounting adjustments were needed to correct payments that had been made from wrong accounts.
- About $314 million had been identified as owed to the government.
- About $94 million had been identified as owed by the government to contractors.
- Demand letters had been issued to contractors for about $152 million based on the firm’s reconciliations, with about $80 million collected, and about $17 million disputed and classified as in-process.
- About $19 million may not be collectable for one or more reasons, including $8 million due from contractors involved in bankruptcy.
As of May 1995 the accounting firm classified $178 million of the reported $314 million due the government as “in-process.” The Center did not have debt records or other documents to show what specific collection actions, if any, had been taken to collect the $178 million. Based on our expressed concern about the status of the $178 million classified as in-process, Center officials agreed to research this matter. After researching individual contract payment files and accounts receivable files, the Center issued demand letters for $23 million that had been outstanding for over 60 days and identified other disposition actions.
In June 1995, Center officials told us they could not determine what collection action, if any, had been taken on about $75 million of the $178 million. The Center continued research and reported at the end of August that it had determined the status of all but $11 million. We reviewed the collection actions on all audits completed by the public accounting firm during an 18-month period ending March 31, 1995, to determine whether the Center was promptly collecting identified overpayments. In total, 160 completed audits identified $82.1 million as being owed to the government. Records showed that the Center had issued demand letters for $36.8 million of the $82.1 million. The demand letters, on average, were sent about 3 to 4 months after the Center was informed of the overpayments and, in one case, a letter was not sent until more than a year after notification. For the remaining $45.3 million, the Center had either no record of demand letters issued ($33.1 million) or was considering whether to accept the results of the audit ($12.2 million). Some of the audits not yet accepted had been under consideration for extended periods. Center personnel researched the $45.3 million identified as owed the government for which we were unable to find collection records. Through this additional research, the Center was able to initially identify collection actions for about $15.8 million of the $45.3 million—leaving $29.5 million owed the government for which the Center could identify no collection action. According to officials, the Center does not have statistical information on the results of audits and reconciliations performed by Center personnel. However, in those few cases where the Center had reconciled the contracts for overpayments that we examined, the Center did not take prompt collection actions. Delays in recovering identified overpayments compound the problem and suggest management as well as systems and records shortcomings.
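The dollar amounts in this discussion tie out as a straightforward reconciliation. The sketch below simply restates the report's figures (in millions) and checks that each breakdown sums to its total; the variable names are ours.

```python
# Reconciliation of the audit-collection figures reported above (millions).
# Each breakdown should sum to its parent total.

identified_owed = 82.1            # 160 completed audits, owed to the government
demand_letters_issued = 36.8      # amounts for which demand letters were found
no_collection_record = 33.1       # no record of demand letters issued
audits_under_consideration = 12.2 # audit results not yet accepted

remaining = round(identified_owed - demand_letters_issued, 1)
assert remaining == 45.3
assert round(no_collection_record + audits_under_consideration, 1) == remaining

later_identified = 15.8           # collection actions found through later research
still_unaccounted = round(remaining - later_identified, 1)
assert still_unaccounted == 29.5
print(f"${still_unaccounted} million with no identified collection action")
```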
In our August 1994 report we stated that DOD does not have an effective system to identify and resolve payment discrepancies and expeditiously recover amounts owed the government. We recommended that DOD develop a comprehensive plan to mobilize resources to identify and correct payment discrepancies. We reported that such action was necessary to reduce (1) the cost to the government, (2) future payment discrepancies, and (3) the incidence of uncollectable overpayments. Both DOD and DFAS Columbus Center officials have said that they are taking significant steps to resolve specific problems identified in our reports. For example, Center officials advised us of corrective actions being taken to improve the detection and collection process, including:
- changes to ensure that the Center’s policy of asking contractors to immediately return overpayments is implemented, including an August 1995 pilot installation of telephone and computer equipment to establish a historical record, by contract, of payment problems identified by customers;
- changes to prevent overpayments that occur because of incorrect progress payment liquidations, including both procedural and systems changes that are expected to improve payment research and progress payment records;
- changes in monitoring and reporting practices to ensure that all reconciliations that identify amounts owed the government are resolved promptly; and
- increases in the resources directed toward reducing the backlog of contracts requiring reconciliation by December 1995.
Also, after discussing the results in this report with DOD and DFAS officials, DFAS, on July 31, 1995, directed the Columbus Center to begin surveying contractors to identify and resolve payment discrepancies. In addition, the on-site personnel from the Defense Contract Management Command and the Defense Contract Audit Agency will continue to assist in the identification and resolution of payment problems.
In addition to resolving specific payment problems, DOD stated it is implementing systemic solutions to prevent the types of payment problems identified in our reports. DOD said that it is making coordinated improvements in its contract writing, contract management, contract payment, and accounting systems to ensure that all payments are computed, issued, and accounted for properly. Although the actions reported by DOD and DFAS appear to be a positive step toward addressing contract payment discrepancies, we remain concerned about DOD’s ability to eliminate contract payment discrepancies, make coordinated improvements in all aspects of contract payment processes, and incorporate leading-edge business practices. We have ongoing and planned work to further evaluate DOD’s plans for improving financial management operations and will periodically monitor DOD’s progress in eliminating contract payment discrepancies. In commenting on a draft of this report, DOD said that it generally concurred with the report and offered a clarification regarding actions by the Defense Contract Management Command and the Defense Contract Audit Agency. The clarification has been incorporated. DOD’s comments are reprinted in their entirety in appendix IV. As agreed with your offices, we plan no further distribution of this report until 30 days from its issue date unless you publicly announce its contents earlier. At that time, we will send copies to the Secretary of Defense; the Director, Office of Management and Budget; and other interested congressional committees. Copies will also be made available to others upon request. Please contact me at (202) 512-4587 if you or your staff have any questions concerning this report. Major contributors to this report are listed in appendix V. We requested business units of 204 large and small contractors to provide information on the status of their open accounts receivable with the Department of Defense (DOD) as of July 1994. 
The large contractors were 96 of the top 100 defense contractors as measured by DOD contract award data published by DOD’s Directorate for Information Operations and Reports. According to published DOD data, the top 100 contractors received about 62 percent of the DOD prime contracts awarded in fiscal year 1993. The 108 small contractors were those classified as small businesses that had the largest amounts of contract awards, as reported by the Federal Procurement Data Center. We used mailing addresses identified in a Federal Procurement Data System listing that was extracted from government contract awards. This listing contained 1,287 addresses for these 204 contractors. Since we were unable to determine which of these addresses were appropriate for contacting the accounts receivable sections for business units that had DOD contracts, we mailed data requests to all available addresses. For example, we sent data requests to six addresses in Orlando, Florida, for the same contractor because we did not know which addresses maintained the accounts receivable data. Also, we were unable to determine whether all business units with DOD contracts were included in these addresses. Each data request asked the recipient to return the request form if the recipient did not maintain accounts receivable for the business unit and to provide the address for that business unit’s accounts receivable section. We mailed an additional 75 data requests to the addresses identified in these responses. We also sent follow-up reminders to addresses that did not respond to the initial mailing. Using this approach, we received 374 data responses, which included responses from at least one business unit of 139 contractors—82 large contractors and 57 small contractors. However, we did not receive responses from all business units of these contractors.
This report presents the data obtained from the 374 business units and cannot be projected or generalized to the approximately 26,000 business units paid by the DFAS Columbus Center. As shown in table I.1, business units of large contractors accounted for $223.6 million (97 percent) of the $231.5 million in reported overpayments and $611.5 million (98 percent) of the $625.9 million in reported underpayments. As shown in table I.2, 134 (43 percent) of the 315 large contractor business units reported overpayments of more than $1,000 and 14 (24 percent) of the small contractor business units reported overpayments of more than $1,000. These 148 business units accounted for more than 99 percent of the $231.5 million in overpayments reported. As shown in table I.3, 185 (59 percent) of the large contractor business units reported underpayments of more than $1,000, and 25 (42 percent) of the small business units reported underpayments of more than $1,000. These 210 business units accounted for more than 99 percent of the $611.5 million in underpayments reported. The reported data may be affected by erroneous contractor records or by contractors’ misstatement of facts in their responses. We visited 12 business units to verify that the payment data reported in response to our data request were extracted from the business units’ accounting records. These 12 units reported $83.6 million of overpayments (36 percent of the reported total) and $62.3 million of underpayments (10 percent of the reported total). The business units visited were selected based on geographic dispersion and the amount of reported payment discrepancies, including reports of “zero” payment discrepancies. We also had telephone discussions with contractor officials at a number of other business units to help ensure complete responses to our data request. We also researched overpayments on eight contracts presented in detail in appendix III. We examined the actions taken at the Columbus Center to resolve these overpayments.
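The rounded percentages reported for tables I.1 through I.3 can be re-derived from the dollar amounts and business unit counts stated above; the short sketch below is our own illustration, using only figures given in this appendix:

```python
# Re-deriving the rounded percentages in tables I.1-I.3 from the dollar
# amounts and business unit counts stated in the text. Variable names are
# ours; all figures come from the report.

large_over, total_over = 223.6, 231.5      # $ millions, reported overpayments
large_under, total_under = 611.5, 625.9    # $ millions, reported underpayments

print(round(100 * large_over / total_over))    # 97 -> "97 percent"
print(round(100 * large_under / total_under))  # 98 -> "98 percent"

# Business units reporting discrepancies of more than $1,000
print(round(100 * 134 / 315))  # 43 -> overpayments, large contractors
print(round(100 * 185 / 315))  # 59 -> underpayments, large contractors
```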
We collected information on underpayments, but we did not review the resolution of underpayments. We conducted our research of selected overpayments at the Columbus Center and used all available contract records. We also interviewed payment personnel, contractor relations personnel, and supervisors (including division chiefs and directors) at the Center. We also obtained information and contract records from DOD contracting officers and discussed payment issues with them. Data obtained from contractors and contracting officers were compared with the Columbus Center records. In addition, we reviewed laws and regulations pertaining to the administration and management of contracts and contract payments, including those related to collection of contract debts. We also discussed payment errors with Defense Logistics Agency, Defense Contract Management Command, and Defense Finance and Accounting Service (DFAS) officials. We conducted our review between August 1994 and June 1995 in accordance with generally accepted government auditing standards.

The Chairman of the Senate Governmental Affairs Committee has asked the U.S. General Accounting Office to obtain data from contractors on the extent of underpayments and overpayments outstanding on contracts with the Department of Defense (DOD). These data will be used as part of an ongoing assignment (GAO code 705074) to evaluate DOD’s contract payments system. Please follow the attached instructions when accumulating the requested information and return it in the enclosed envelope by September 30, 1994. The mailing address for this request is: U.S. General Accounting Office, Suite 1500, 1445 Ross Avenue, Dallas, TX 75202-2783. Any questions should be directed to: Seth Taylor, Jeff Knott, or Joe Quicksall in our Dallas Regional Office at (214) 777-5600. STOP -- If you answered No to question 1 above, please do not continue but return this request in the enclosed envelope.
Please do not forward to the business unit that does your billings and maintains your accounts receivable. However, please provide the address for the business unit that maintains your accounts receivable so we can verify that the unit has been included in the initial mailing of this data request.

PARENT COMPANY (Name and Address)

The business unit for which the information is being requested is (please make any corrections needed to business unit identification):

Person to be contacted if additional information is needed:

1. Does your business unit prepare contract billings and maintain accounts receivable for contracts with the DOD?
2. Do some of your contracts with DOD provide for progress payments?
3. From your accounts receivable or other appropriate
4. List the current top 3 DOD paying offices, based on dollar amount, that pay contract billings submitted by your business unit (exclude classified paying offices). Provide an estimate of the percentage of your dollar billings paid by each of these paying offices. List top 3 paying offices’ name and address (July as of Date Used:_____________________) 1. 2. 3.
5. What is your business unit’s current policy or practice regarding notifying DOD when your records indicate an error has been made in paying an invoice or progress payment request? (Briefly describe below or attach your response to this form.)
6. List the most recent annual gross dollar amount of contract billings to DOD by your business unit.

Provide a list of the DOD contracts to which the above overpayments and underpayments apply. (Attach the list to this form.) Please attach any additional information or specific comments and issues concerning your DOD contract payment experiences that you believe we should consider. Thank you for your prompt response. Please retain any work sheets or records used to prepare this response.
We reviewed over $84 million in overpayments on eight contracts to determine why the overpayments were made and to evaluate the efforts of the DFAS Center in Columbus, Ohio, to recover the overpayments. Most of the overpayments were outstanding more than 180 days, and one was outstanding about 7 years. Overpayments occurred mostly because prior progress payments were not properly considered when paying invoices. In general, the root of the problems could be traced to errors in the government’s payment records. For each example, we attempted to identify the dates the overpayments occurred, the reason they occurred, the date the Center was notified of them by the contractor, and the date the money was recovered. Where these dates could not be clearly determined, we estimated the dates using available records and/or interviews with the government and contractor personnel involved.

1. Contract F33657-89-C-0082 with Hughes Missile Systems, Tucson, Arizona

The DFAS Center overpaid this contract by about $24.7 million in January 1994 because an invoice was paid without fully liquidating progress payments. The contractor notified the Center of the overpayment in April 1994, about 3 months after the invoice was paid incorrectly. The Center and contractor agreed to eliminate the $24.7 million overpayment by a setoff to other contractual debts rather than by a cash collection. To collect by contract setoff, the Center did not pay $24.7 million of other payment requests. The recovery by setoff was completed in October 1994, about 10 months after the overpayment was made and 6 months after the Center was notified. Shortly after recovering the $24.7 million overpayment, the Center again overpaid this contract by $10.5 million because invoices were paid out of the sequence expected by the contractor. The out-of-sequence payment, in turn, caused an overpayment because the Center did not adequately research the payments before responding to the contractor’s refund request.
The Center recovered this overpayment in about 60 days by another setoff. The overpayments, and the delay in recovering them after they were identified, cost the government about $1.4 million. Our review indicated that reliance on inaccurate payment records without further research was a primary cause of overpayments. The Center is researching the underlying cause of errors in the payment records. A recently completed reconciliation of this contract’s payment records showed 68 errors. The errors included 44 payments from the wrong funds, 6 overpayments, 3 duplicate payments, and 2 underpayments. In addition, the records contained six posting errors and three extension errors. Any of these errors could cause additional payment errors.

2. Contract DAAB07-92-C-G004 with ITT Aerospace, Fort Wayne, Indiana

The DFAS Center overpaid about $20 million on this contract because progress payments were not liquidated at the contract rate. The incorrect liquidation began in September 1993 and continued at least through April 1994. The contractor advised the government contracting officer and Center contract relations personnel of the overpayments in November 1993. At that time, the overpayments totaled about $4.5 million. We found no record of any Center action to recover the overpayments identified in the November 1993 notification. In a January 1994 letter, the contractor requested a meeting with Center officials to resolve the continuing overpayment problem on this contract. By then, the overpayments had increased to $18.9 million, but again no action was taken to recover the overpayments. In March 1994, the Center began a limited scope examination to verify the overpayment amount, which had increased to $19 million according to the contractor. Finally, in May 1994, the contractor asked the Center to issue a demand letter for the overpayment, which the contractor reported as $19.5 million.
The Center issued a demand letter for about $18 million in June 1994 and recovered that amount in July 1994. The contractor returned an additional $2.1 million in October 1994.

3. Contract N00039-90-C-0165 with ITT Aerospace, Fort Wayne, Indiana

The Center overpaid this contract by about $1.7 million because it did not liquidate progress payments at the contract level on invoices between May 1993 and July 1994. The contractor notified the cognizant government contracting officer in August 1994 of the overpayments. The contracting officer issued a demand letter for the $1.7 million overpayment in September 1994, and the contractor returned the overpayment the following month. The Center apparently was not aware of the payment errors until after receipt of the contractor’s check.

4. Contract N00024-88-C-5670 with ITT Gilfillan, Van Nuys, California

This contractor was overpaid about $7.7 million on a December 1992 invoice because progress payments were incorrectly liquidated. The incorrect liquidation resulted from posting errors to the automated payment system. Rather than demanding return of the overpayment, the Center decided to recover the overpayment by contract setoff. When we examined the payment records in March 1995, over 2 years after the overpayment, the contractor still owed the government about $4.5 million. The recovery by setoff was approved by the reconciliation clerk, reconciliation supervisor, division chief, and associate director without research to determine whether the amount could be promptly recovered through setoff. After we questioned the wisdom of this, the Center issued a demand letter for about $4.5 million. The final collection was made in April 1995—over 2 years after the Center made the overpayment.

5. Contract F19628-84-C-0151 with Litton Systems, College Park, Maryland

This contract was overpaid because invoices were paid at estimated prices until the prices were definitized in March 1993, about 8 years after deliveries started.
The contract prices were definitized at less than the estimated prices used by the Center to pay the contract. Both the contracting officer and the contractor should have known that the contract was overpaid as soon as the price was definitized. The contractor began discussing the amount of overpayment with the contracting officer in August 1993. In October 1993, the contractor provided the Center with the results of its reconciliation showing the contract overpayment to be about $5.2 million. From October 1993 until at least June 1994, the contractor and Center personnel exchanged letters and telephone calls concerning the exact amount of the overpayment. When we reviewed the contract records in February 1995, we questioned the wisdom of continuing to leave the reported overpayment of $5.2 million outstanding for over 16 months while efforts were underway to research a disputed difference of about $63,000. In March 1995, the Center issued a demand letter for the $5.2 million the contractor agreed was overpaid. The contractor returned the $5.2 million 30 days after receiving the demand letter. The disputed amount will ultimately be demanded if sufficient records are available to support the claim.

6. Contract DAAJ09-90-C-0352 with McDonnell Douglas Aerospace, Huntington Beach, California

This contractor notified the DFAS Center in November 1992 of overpayments on the contract caused by not properly liquidating progress payments. The Center has no record of efforts to recover the overpayments identified in this notification or subsequent notifications by the contractor of continuing overpayments on this contract. The contractor determined the amount of overpayment was about $5.8 million and refunded that amount to the Center in March 1994. The Center’s initial review of payment records for this contract found no overpayments, and the Center returned the $5.8 million refund to the contractor in August 1994.
The return of this refund was specifically approved by the reconciliation clerk, reconciliation supervisor, and the division chief. The contractor’s disagreement with this action resulted in the government contracting officer requesting the Center to reconcile the contract. The Center’s reconciliation identified an error in posting progress payments that caused the overpayment. The Center issued a demand letter for $5.8 million in November 1994. The Center received the contractor’s refund check in December 1994, over 2 years after the overpayment.

7. Contract DAAE07-84-C-A001 with Textron Lycoming, Stratford, Connecticut

According to this contractor, it notified the Center in November 1993 of a $7.5 million overpayment on a completed contract. The overpayment resulted from both duplicate payments and incorrect liquidation of progress payments. The Center had no record of this notification and no record of efforts to recover the amount identified in this notification. The contractor again notified the Center in June 1994 of the overpayment. A record of this notification was in the Center’s contract file. Shortly after we visited the company in December 1994 to verify reported data, the company returned $7.5 million as a refund of the overpayment on this contract. The contractor had retained most of this amount for about 8 years—since the last shipment on the contract in January 1987. A November 1994 reconciliation disclosed that the payment records on this contract contained 125 errors. Most of the errors, 67 of 125, were contract payments made from the wrong fund control citations. In addition, there were 22 underpayments, 10 invoice payments with incorrectly liquidated progress payments, 13 overpayments, and 3 duplicate payments. The remaining errors were erroneous postings of contract entries, such as cash collections or modifications. Using these payment records without adequate research is the likely cause of the overpayment.

8.
Contract DAAE07-86-C-A050 with Textron Lycoming, Stratford, Connecticut

The contractor reported an overpayment of $667,130 as of July 1994 that was subsequently resolved, according to the contractor, by contract setoff in August 1994. While the contractor believes this contract is settled, the government agencies involved have been unable to reach agreement on the payment status of the contract. In December 1994, the funding station (U.S. Army Tank-Automotive and Armaments Command) believed the contract was overpaid by $10 million and requested the Columbus Center to take immediate action to collect money due from the contractor. However, as of May 1995, the Center’s automated payment records showed the contract to be underpaid by about $2.7 million. While this contract had been in and out of reconciliation by the public accounting firm employed by the Center during the prior 4 years, the reconciliation results had been inconclusive. In late May 1995, the Center and government funding station personnel initiated meetings to resolve differences in their contract records and to reach agreement on the payment status of the contract. According to a September 10, 1994, Audit Report of Errors prepared by the public accounting firm, the firm’s reconciliation disclosed that the payment records on this contract contained 1,123 errors. Most of the errors, 885 of 1,123, were contract payments made from the wrong fund control citations. The remaining errors identified during reconciliation included 197 posting errors.

DOD Infrastructure: DOD’s Planned Finance and Accounting Structure Is Not Well Justified (GAO/NSIAD-95-127, Sept. 18, 1995).
Financial Management: Challenges Confronting DOD’s Reform Initiatives (GAO/T-AIMD-95-146, May 23, 1995).
Financial Management: Challenges Confronting DOD’s Reform Initiatives (GAO/T-AIMD-95-143, May 16, 1995).
Defense Infrastructure: Enhancing Performance Through Better Business Practices (GAO/T-NSIAD/AIMD-95-126, Mar. 23, 1995).
DOD Procurement: Overpayments and Underpayments at Selected Contractors Show Major Problem (GAO/NSIAD-94-245, Aug. 5, 1994).
Defense Business Operations Fund: Improved Pricing Practices and Financial Reports Are Needed to Set Accurate Prices (GAO/AIMD-94-132, June 22, 1994).
Financial Management: DOD’s Efforts to Improve Operations of the Defense Business Operations Fund (GAO/T-AIMD/NSIAD-94-146, Mar. 24, 1994).
DOD Procurement: Millions in Overpayments Returned by DOD Contractors (GAO/NSIAD-94-106, Mar. 14, 1994).
Financial Management: Status of the Defense Business Operations Fund (GAO/AIMD-94-80, Mar. 9, 1994).
Financial Management: Strong Leadership Needed to Improve Army’s Financial Accountability (GAO/AIMD-94-12, Dec. 22, 1993).
Letter to the Deputy Secretary of Defense (GAO/AIMD-94-7R, Oct. 12, 1993).
Financial Management: DOD Has Not Responded Effectively to Serious, Long-standing Problems (GAO/T-AIMD-93-1, July 1, 1993).
Financial Management: Opportunities to Strengthen Management of the Defense Business Operations Fund (GAO/T-AFMD-93-6, June 16, 1993).
Financial Management: Navy Records Contain Billions of Dollars in Unmatched Disbursements (GAO/AFMD-93-21, June 9, 1993).
Military Bases: Analysis of DOD’s Recommendations and Selection Process for Closures and Realignments (GAO/NSIAD-93-173, Apr. 15, 1993).
Financial Audit: Examination of Army’s Financial Statements for Fiscal Year 1991 (GAO/AFMD-92-83, Aug. 7, 1992).
Financial Management: Immediate Actions Needed to Improve Army Financial Operations and Controls (GAO/AFMD-92-82, Aug. 7, 1992).
Financial Management: Defense Business Operations Fund Implementation Status (GAO/T-AFMD-92-8, Apr. 30, 1992).
Financial Audit: Aggressive Actions Needed for Air Force to Meet Objectives of the CFO Act (GAO/AFMD-92-12, Feb. 19, 1992).
Financial Audit: Status of Air Force Actions to Correct Deficiencies in Financial Management Systems (GAO/AFMD-91-55, May 16, 1991).
Defense’s Planned Implementation of the $77 Billion Defense Business Operations Fund (GAO/T-AFMD-91-5, Apr. 30, 1991).
Financial Audit: Financial Reporting and Internal Controls at the Air Logistics Centers (GAO/AFMD-91-34, Apr. 5, 1991).
Financial Audit: Financial Reporting and Internal Controls at the Air Force Systems Command (GAO/AFMD-91-22, Jan. 23, 1991).
Financial Audit: Air Force Does Not Effectively Account for Billions of Dollars of Resources (GAO/AFMD-90-23, Feb. 23, 1990).

The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent.

U.S. General Accounting Office, P.O. Box 6015, Gaithersburg, MD 20884-6015

Room 1100, 700 4th St. NW (corner of 4th and G Sts. NW), U.S. General Accounting Office, Washington, DC

Orders may also be placed by calling (202) 512-6000 or by using fax number (301) 258-4066, or TDD (301) 413-0006. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (301) 258-4097 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.

Pursuant to a congressional request, GAO reviewed outstanding overpayments and underpayments identified in Department of Defense (DOD) contractors' records, focusing on: (1) whether DOD was detecting and recovering contract overpayments promptly; and (2) the actions taken to recover overpayments at the Defense Finance and Accounting Service (DFAS) Center in Columbus, Ohio.
GAO found that: (1) the 374 business units (representing 82 large defense contractors and 57 small contractors) that responded to GAO's request for data as of July 1994 reported about $231.5 million in outstanding overpayments and about $625.9 million in underpayments; (2) the evidence suggests, and contractors reported, that they followed up to collect underpayments and usually notified DOD of overpayments; however, contractors did not always return overpayments unless instructed to do so; (3) the DFAS Columbus Center cannot readily detect payment discrepancies because of significant errors in its automated payment records; (4) despite these errors, Center personnel, in accordance with payment procedures, pay contractor invoices as if the payment data were correct; (5) with significant errors in the automated payment records, incorrect payments are likely to continue; (6) the Center did not properly pursue recovery after overpayments were reported by contractors or identified through reconciliation; (7) on the basis of GAO's research of $84.2 million in overpayments, the Center's delay in collecting overpayments was long and costly; (8) for those overpayments, GAO estimates that recovery delays cost the government about $10.6 million in interest; (9) even after a public accounting firm completed contract reconciliations to identify the amounts owed the government, the Center did not recover overpayments promptly; (10) in response to GAO's August 1994 recommendation that DOD mobilize resources to identify, verify, and correct payment discrepancies, DOD advised GAO in May 1995 that various actions were under way or planned to reduce payment discrepancies and to use contractor records to facilitate reconciliations; and (11) on July 31, 1995, DFAS requested the Columbus Center to undertake a new effort to identify and resolve payment discrepancies.
ESRD is a condition of permanent kidney failure. Treatment options include kidney transplantation and maintenance dialysis. Kidney transplants are not a practical option on a wide scale, as suitable donated organs are scarce. In contrast, dialysis is the treatment used by most beneficiaries with ESRD. Hemodialysis, the most common form of dialysis, is generally administered three times a week at facilities that provide dialysis services. During hemodialysis, a machine pumps blood through an artificial kidney, called a hemodialyzer, and returns the cleansed blood to the body. In order to receive hemodialysis treatment, patients must have a vascular access, which is a site on the body where blood is removed and returned during dialysis. One of the complications of ESRD is anemia, a condition in which an insufficient number of red blood cells is available to carry oxygen throughout the body. A diagnosis of anemia is determined through a measurement of the level of hemoglobin in the blood. To treat anemia, providers may administer ESAs intravenously in conjunction with IV iron. Another complication of ESRD is hyperparathyroidism, which can result from a deficiency of vitamin D. Hyperparathyroidism is typically diagnosed based on the level of parathyroid hormone (PTH) in the blood and can lead to elevated phosphorus levels and low calcium levels in the blood as well as softening of the bones. The treatment of hyperparathyroidism includes the administration of IV vitamin D and oral drugs such as phosphate binders and calcimimetics. There are two types of ESAs—epoetin alfa (brand name Epogen®) and darbepoetin alfa (brand name Aranesp®). In 2007, Epogen accounted for about 92 percent of Medicare expenditures on ESAs. Of the approximately $2.2 billion in Medicare expenditures on injectable ESRD drugs in 2007, about 75 percent was spent on ESAs. Although iron is most commonly administered intravenously, it can also be given orally.
Over the last several years, researchers and clinicians have debated how to best manage anemia in chronic kidney disease patients, including those with ESRD. Some studies have concluded that using ESAs to achieve higher-than-recommended hemoglobin targets does not reduce, and may sometimes result in, adverse cardiovascular events. See Tilman B. Drueke et al., “Normalization of Hemoglobin Level in Patients with Chronic Kidney Disease and Anemia,” The New England Journal of Medicine, vol. 355, no. 20 (2006), and Ajay K. Singh et al., “Correction of Anemia with Epoetin Alfa in Chronic Kidney Disease,” The New England Journal of Medicine, vol. 355, no. 20 (2006). Based on the results of these studies and other safety concerns, the Food and Drug Administration (FDA) issued a “black box” warning and required labeling changes for ESAs in 2007. FDA recently announced that all patients receiving ESAs must be provided a medication guide that explains the potential for adverse events while using these products. In the case of ESAs, the medication guides warn of “potential death or other serious side effects.” Other researchers have suggested that variability in dosing of ESAs across dialysis facilities treating similar patients may be evidence that some utilization of these drugs is not clinically appropriate. See Mae Thamer et al., “Dialysis Facility Ownership and Epoetin Dosing in Patients Receiving Hemodialysis,” The Journal of the American Medical Association, vol. 297, no. 15 (2007). In September 2009, CMS issued its proposed rule for the design of the new bundled payment system for dialysis care, which is required by law for services furnished on or after January 1, 2011. CMS proposed that under the new bundled payment system, Medicare would continue paying dialysis facilities a bundled payment per dialysis treatment for up to three treatments per week as it does under the current system. 
However, unlike the current payment system, the new bundled payment would cover ESRD drugs and other separately billable services (for example, laboratory tests related to ESRD treatment) in addition to dialysis services currently covered under the composite rate. Under CMS’s proposed rule, the ESRD drugs covered under the new bundled payment would include injectable ESRD drugs as well as oral ESRD drugs, such as calcimimetics, that are currently covered under Medicare Part D. Bundled payment systems in Medicare typically include a case-mix adjustment and may also use an outlier policy to account for differences in the cost of beneficiaries’ care. In general, a case-mix adjustment varies payments based on factors associated with beneficiaries’ expected costs of care. As a result, a case-mix adjustment typically increases bundled payments for providers who treat high-cost beneficiaries. In addition, some bundled payment systems under Medicare use an outlier policy to partially offset providers’ financial losses for treating beneficiaries whose costs of care substantially exceed what would be expected. To reduce these financial losses, an outlier policy involves making provider payments in addition to the case-mix adjusted bundled rate for these high-cost beneficiaries. The accuracy with which bundled payments are adjusted to account for differences in beneficiaries’ expected costs of care may affect beneficiaries’ access to and quality of care. In prior work, we and others have stated that if a bundled payment system’s case-mix adjustment is not designed adequately, then payments may be too low for certain groups of beneficiaries. Further, providers could respond to these inadequate payments by choosing not to treat or inappropriately limiting care for these groups, which could adversely affect these beneficiaries’ access to and quality of care. 
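To make the mechanics concrete, the sketch below illustrates, with made-up numbers rather than CMS's actual base rate, weights, or thresholds, how a case-mix weight scales a bundled per-treatment rate and how an outlier policy partially offsets losses on unusually high-cost beneficiaries:

```python
# Illustrative sketch (not CMS's actual formula): how a case-mix adjustment
# and an outlier policy can modify a bundled per-treatment payment. The base
# rate, weights, thresholds, and loss share below are hypothetical.

BASE_RATE = 230.00  # hypothetical bundled payment per dialysis treatment

def bundled_payment(case_mix_weight, actual_cost,
                    outlier_threshold=1.5, loss_share=0.8):
    """Case-mix adjust the base rate; add a partial outlier payment when the
    beneficiary's actual cost far exceeds the adjusted rate."""
    adjusted = BASE_RATE * case_mix_weight
    payment = adjusted
    # Outlier policy: partially offset provider losses above the threshold.
    if actual_cost > adjusted * outlier_threshold:
        payment += loss_share * (actual_cost - adjusted * outlier_threshold)
    return round(payment, 2)

# A typical beneficiary: cost below the threshold, so no outlier add-on.
print(bundled_payment(1.0, 250.00))   # 230.0
# A high-cost beneficiary: adjusted rate plus a partial outlier payment.
print(bundled_payment(1.2, 500.00))   # 276 + 0.8 * (500 - 414) = 344.8
```

The design choice this illustrates is the one discussed in the text: the case-mix weight raises payment in proportion to expected cost, while the outlier add-on only partially (here, 80 percent above the threshold) offsets unusually high actual costs, leaving the provider with some incentive to control cost.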
We and others have noted that underpaying for care, which could result from an inadequate case-mix adjustment, may result in care of poor quality. In particular, poor quality of care could occur under bundled payment systems if, for example, providers furnish inadequate doses of drugs in an effort to minimize cost. Beneficiaries with above-average costs of care may be particularly vulnerable because providers who treat these beneficiaries face the potential of financial losses on these patients if the bundled payments are not adjusted appropriately to take these above-average costs into account. The potential unintended effects of bundled payment systems on beneficiaries have led us and others to note that access to and quality of care under various Medicare bundled payment systems should be monitored. For example, in 1999, we noted that monitoring access to care would be necessary under Medicare's bundled payment system for skilled nursing care to ensure that Medicare beneficiaries continued to have access to medically necessary services. Similarly, in its 2006 report, the HHS Office of Inspector General stressed the importance of monitoring quality under the bundled payment system for home health care. Our work and work by others have also noted the importance of monitoring the effect of Medicare bundled payment systems on various groups of beneficiaries. Specifically, in 2000 we and the Medicare Payment Advisory Commission (MedPAC) reported on the bundled payment system for home health care and recommended that the delivery of these services be monitored across groups of beneficiaries, such as those whose care is more costly than average. Furthermore, a study on the bundled payment system for inpatient rehabilitation services affirmed the importance of monitoring access to care for various groups of beneficiaries.
Monthly Medicare expenditures per beneficiary for injectable ESRD drugs in 2007 were above average for certain demographic groups, and African Americans and persons with Medicaid coverage were among the groups for which this difference was largest. In particular, Medicare expenditures on injectable ESRD drugs in 2007 were $782 per African American beneficiary per month—about 13 percent more than the $693 spent for all Medicare beneficiaries on dialysis (see fig. 1). The above average spending per African American beneficiary was due primarily to higher spending on ESAs and IV vitamin D. Monthly Medicare spending per African American beneficiary on ESAs was about 10 percent higher than the average across all beneficiaries on dialysis, and spending on IV vitamin D was about 38 percent higher than average. Average monthly Medicare expenditures per beneficiary for other racial groups were below the average for all beneficiaries on dialysis in 2007. As a result, average monthly expenditures for African Americans were about 41 to 42 percent higher than spending for beneficiaries who classified themselves as American Indian/Alaskan Native or Asian or Pacific Islander and about 21 percent higher than expenditures for White beneficiaries. Average monthly expenditures per beneficiary for injectable ESRD drugs were also above average for beneficiaries enrolled in both Medicare and Medicaid. Specifically, average monthly expenditures per beneficiary enrolled in Medicare and Medicaid were $735 in 2007, which was about 6 percent higher than the $693 spent across all beneficiaries on dialysis and about 12 percent higher than the $659 for Medicare beneficiaries who were not in Medicaid. This difference was mainly due to above average expenditures on ESAs and IV vitamin D for beneficiaries enrolled in both Medicare and Medicaid. 
For beneficiaries with both Medicare and Medicaid coverage, expenditures on ESAs were about 6 percent higher than the average for all beneficiaries in 2007, while expenditures on IV vitamin D were about 11 percent higher than average. Monthly Medicare expenditures per beneficiary for adults age 20 to 64 were generally higher than the average for all Medicare beneficiaries on dialysis. Most notably, Medicare spending per beneficiary age 20 to 44 was about 9 percent more than the monthly average for all Medicare beneficiaries on dialysis (see fig. 2). Monthly Medicare expenditures per beneficiary age 20 to 44 were also higher when compared to those of other age groups, in particular beneficiaries age 19 and under or age 75 and older. The higher-than-average spending for beneficiaries age 20 to 44 was driven primarily by above average expenditures on ESAs and IV vitamin D. Specifically, Medicare spending on ESAs per beneficiary age 20 to 44 was about 9 percent higher than the average across all beneficiaries on dialysis in 2007. Similarly, Medicare spending on IV vitamin D per beneficiary age 20 to 44 was about 12 percent higher than the average for all beneficiaries. Monthly expenditures per beneficiary in 2007 for females, non-Hispanic beneficiaries, and urban residents also exceeded the average for all beneficiaries on dialysis, but to a lesser extent than for African Americans and beneficiaries in both Medicare and Medicaid. For example, female beneficiaries had average monthly expenditures of $715, which was about 3 percent higher than the monthly average across all Medicare beneficiaries on dialysis and about 6 percent higher than monthly expenditures per male beneficiary. Similarly, the $708 that Medicare spent per month on non-Hispanic beneficiaries was about 2 percent higher than the average across all beneficiaries on dialysis and about 19 percent higher than the average for Hispanic beneficiaries. 
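The percentage figures in the preceding paragraphs follow directly from the reported dollar amounts. A short arithmetic check reproduces several of them; the function name is ours, while the dollar figures come from this report.

```python
# Check of the percentage comparisons cited above, using the reported
# 2007 monthly Medicare expenditures per beneficiary for injectable
# ESRD drugs.

def pct_above(group, reference):
    """Percent by which `group` spending exceeds `reference` spending."""
    return round(100 * (group - reference) / reference)

ALL_BENEFICIARIES = 693  # average across all Medicare beneficiaries on dialysis
print(pct_above(782, ALL_BENEFICIARIES))  # 13: African American beneficiaries
print(pct_above(735, ALL_BENEFICIARIES))  # 6: Medicare and Medicaid enrollees
print(pct_above(735, 659))                # 12: versus Medicare-only enrollees
print(pct_above(715, ALL_BENEFICIARIES))  # 3: female beneficiaries
```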
For more detailed information on Medicare expenditures for injectable ESRD drugs, by demographic characteristics, see appendix III. While we report that certain demographic groups were associated with above average Medicare expenditures for injectable ESRD drugs in 2007, we did not identify the factors that led to these differences in expenditures across groups of beneficiaries. However, we collected information from nephrology clinicians and ESRD researchers on the factors they consider likely to result in above average doses of injectable drugs—ESAs, IV iron, and IV vitamin D. A majority of the 73 clinicians and researchers who completed our Web-based data collection instrument identified clinical factors, rather than demographic characteristics, as likely to result in above average doses of injectable ESRD drugs. Specifically, at least 50 percent of these experts identified 14 such factors, including chronic blood loss, low iron stores, and recent hospitalization, as likely to result in above average doses of ESAs (see table 1). Further, a majority of the clinicians and researchers who completed our data collection instrument indicated that demographic factors were not likely to result in above average doses of ESAs. Specifically, at least 50 percent of these experts identified 16 of the 17 demographic factors, such as age, race, and socioeconomic status, as not likely to result in above average doses of ESAs (see app. IV for detailed results). These results are consistent with information from our structured interviews with nephrology clinicians, who indicated that they consider clinical factors, rather than demographic characteristics, when making dosing decisions for ESAs and other injectable ESRD drugs. The literature we reviewed on the use of ESAs provides some explanation for how clinical factors affect the dose of these drugs. For example, chronic blood loss is a common occurrence among hemodialysis patients. 
Blood loss can increase a person’s ESA requirements by reducing the level of iron in the blood. Sources of blood loss include blood lost during the hemodialysis process, regular blood draws for laboratory testing, and gastrointestinal bleeding. As another example, the clinical literature describes how recent hospitalizations relate to ESA use. Studies demonstrate that hospitalized ESRD patients usually experience a decline in hemoglobin levels, which worsens anemia and increases posthospitalization ESA requirements. The literature offers multiple explanations for this decline in hemoglobin levels. For example, hospitalized ESRD patients commonly experience infection, inflammation, and iron deficiency. All of these conditions can contribute to increased ESA requirements. Additionally, the literature explains the effect of dialysis catheters on the use of ESAs. According to published research, the use of dialysis catheters compared to other forms of vascular access makes ESRD patients more prone to infection and inflammation, which increase ESA requirements. As with ESAs, a majority of clinicians and researchers who completed our data collection instrument identified clinical factors, such as chronic blood loss and low iron stores, as likely to result in above average doses of IV iron (see table 2). These individuals identified six clinical factors as likely to result in above average doses of IV iron. Five of these six clinical factors overlap with the clinical factors identified for ESAs. Moreover, at least 50 percent of clinicians and researchers who completed our data collection instrument identified demographic factors, such as age, race, and residential location, as not likely to result in an above average dose of IV iron (see app. IV for detailed results). Also similar to ESAs, the literature on the use of IV iron provides some context for the clinical factors that are likely to result in above average doses of IV iron. 
For example, chronic blood loss can result in iron deficiency and increase a person’s IV iron requirement. Sources of blood loss leading to increased IV iron requirements include blood retention in the dialyzer tubing, blood testing, and gastrointestinal bleeding. Also, the literature explains that the state of having low iron stores is more common in patients on dialysis for less than 6 months than in those on dialysis for longer amounts of time. As table 3 shows, a majority of the clinicians and researchers who completed our data collection instrument identified two clinical factors—hyperparathyroidism and a lack of predialysis care—and one demographic factor—low socioeconomic status—as likely to result in higher-than-average doses of IV vitamin D (see app. IV for detailed results). Hyperparathyroidism is present in almost all ESRD patients and develops early in the course of chronic kidney disease. In fact, research shows that PTH levels start to increase early in the course of chronic kidney disease and can lead to the development of hyperparathyroidism. In addition, new ESRD patients who have not received predialysis care from a nephrologist may be at greater risk of health complications. According to the clinical literature, new ESRD patients may begin dialysis treatment without receiving predialysis care from a nephrologist because they face barriers to receiving care. One such barrier is low socioeconomic status. Specifically, the literature shows that low socioeconomic status may be associated with limited access to health care services. Issued in September 2009, CMS’s proposed rule for the new bundled payment system for dialysis care identified several clinical and demographic factors that the agency proposed to use in the case-mix adjustment model required by MIPPA. The case-mix adjustment factors that CMS proposed include age, sex, body surface area, body mass index, length of time on dialysis, and comorbid conditions. 
CMS and UM-KECC studied the relationship between these proposed factors and the cost of dialysis care and used the results to determine how to adjust payments under the new bundled payment system. For example, based on CMS’s proposed case-mix adjustment, the bundled payment for a beneficiary who has been on dialysis for fewer than 4 months would be 47 percent higher than the payment for the same beneficiary on dialysis for more than 4 months. CMS used the following criteria to select potential case-mix adjustment factors. Specifically, a factor had to have a statistically significant relationship with beneficiaries’ cost of dialysis care that was large enough to result in a meaningful difference in payments to providers, could not introduce incentives for providers to furnish inappropriate or poor quality care, must be measured based on objective guidelines, and must be based on reliable data. CMS considered some factors as potential case-mix adjusters but did not propose them because they did not meet CMS’s criteria. One example of a factor that CMS considered but did not propose as a potential case-mix adjuster is congestive heart failure. CMS officials stated that they did not propose this factor in part because of the lack of clear and objective guidelines for diagnosing this condition. As another example, a beneficiary’s prior ESA use was not proposed as a case-mix adjuster because, according to CMS officials, this factor would introduce inappropriate incentives for providers. Specifically, they concluded that if the extent of prior ESA use were a case-mix adjustment factor, a provider would have the incentive to increase a beneficiary’s ESA dose to obtain higher Medicare payments under the new bundled payment system. CMS also considered including race and ethnicity in the proposed case-mix adjustment model, but chose not to include these factors. CMS invited public comment on this decision, noting that an adjustment based on race and ethnicity may be warranted. 
One of the reasons CMS cited in its proposed rule for not including race and ethnicity in the proposed model was the lack of objective guidelines for classifying beneficiaries’ race or ethnicity. This absence of objective guidelines implies that there is likely to be an inconsistency across individuals in how they classify themselves into racial or ethnic categories. CMS also noted that its concerns with the quality of data on race and ethnicity made it difficult to propose these variables as case-mix adjusters. One quality issue that CMS cited is the inconsistency over time in how Medicare data on race and ethnicity were collected for one of its two sources of this information—the Renal Management Information System (REMIS) database. Additionally, CMS cited studies indicating that information on race and ethnicity from Medicare’s second source of these data—the Medicare Enrollment Database (EDB)—may be inaccurate. These studies found that the EDB may not accurately identify beneficiaries’ race and ethnicity, particularly for beneficiaries in smaller minority groups, such as Asians and Hispanics. In addition to a case-mix adjustment model, CMS proposed using an outlier policy, as required by MIPPA, to increase payments to providers when they treat beneficiaries whose costs of dialysis care substantially exceed what would be expected. CMS proposed identifying these high-cost beneficiaries based on their cost of outlier services, which CMS defines as ESRD services that are separately billable under the current payment system for dialysis care, such as injectable ESRD drugs. The agency has noted that it is primarily the variation in the cost of outlier services that poses a financial risk to providers and that could therefore adversely affect beneficiaries’ access to and quality of dialysis care. 
Furthermore, according to CMS officials, the agency collects beneficiary-level data on the use of outlier services but not on those covered under the composite rate, such as the dialysis procedure. Such data would be necessary to identify beneficiaries with higher-than-expected costs for dialysis care overall. Based on CMS’s proposed outlier policy, providers could receive outlier payments when they treat beneficiaries whose costs for injectable ESRD drugs and other outlier services exceed a certain threshold. The case-mix adjustment and outlier policy may need to be recalibrated periodically. The specific parameters of these payment mechanisms initially will be based on patterns of utilization, and therefore spending, that existed before the new bundled payment system was implemented. The bundling of payments changes financial incentives for providers and is intended to encourage the efficient provision of care. To the extent that providers change how they practice after the new payment system is implemented, in response to the financial incentives of the new bundled payment system to provide dialysis care more efficiently or other factors, the parameters of the case-mix adjustment and outlier policy could become less accurate over time. As a result, CMS officials stated that they may recalibrate these payment mechanisms using data collected after implementation of the new bundled payment system. However, CMS officials noted that they had not established a time frame for this recalibration. CMS officials told us that their preliminary plans for monitoring the effects of the new bundled payment system on beneficiaries include three current CMS initiatives that focus on monitoring the quality of dialysis care (see table 4). In comments on a draft of this report, CMS reported that it plans to have a comprehensive monitoring strategy in place when the new bundled payment system is implemented on January 1, 2011. 
One of the three key initiatives in CMS’s preliminary monitoring plans is its network of 18 private organizations—called ESRD networks. Each network is charged with monitoring and promoting the quality of dialysis care in a geographic area, which generally covers one or more states. The networks’ monitoring responsibilities include analyzing facility-level data on quality measures to identify facilities that need assistance with quality improvement. The networks are also responsible for evaluating and addressing patient complaints. The second quality monitoring initiative that CMS plans to rely on is the Clinical Performance Measures (CPM) project. Under this project, CMS has monitored quality by collecting and analyzing data on dialysis quality measures for a nationally representative sample of beneficiaries on dialysis. CMS has used these data to report annually on comparisons of the quality of dialysis care across the country and across groups of beneficiaries. The third initiative involves monitoring the quality of individual dialysis facilities by ensuring that they comply with Medicare’s conditions for coverage that a facility must fulfill in order to receive Medicare payment for dialysis care. One of these conditions requires that a dialysis facility develop and implement a program to monitor and improve the quality of services it provides. CMS requires that this plan include the collection and monitoring of data on patient satisfaction with care and the adequacy of dialysis, among other measures. In addition to the monitoring initiatives described above, CMS has or is developing two other quality initiatives focused primarily on promoting the quality of dialysis care rather than monitoring. 
The first of these initiatives that CMS plans to continue under the new bundled payment system is Dialysis Facility Compare, which is a tool on the Medicare program’s Web site that allows users to compare dialysis facilities based on measures of the quality of dialysis care. By making public each facility’s quality information, Dialysis Facility Compare gives facilities the incentive to improve the quality of care they furnish. CMS is developing the second of these initiatives—a quality incentive program (QIP)—which is required by MIPPA to be implemented beginning January 1, 2012. Under the QIP, Medicare is required to reduce payments to dialysis providers by up to 2 percent if the dialysis care they furnish does not meet a total performance score based on quality standards established by CMS. CMS proposed using indicators of dialysis adequacy and anemia management to measure quality under the QIP. By linking a portion of provider payments to measures of dialysis adequacy and anemia management, the QIP would give providers a financial incentive to improve these aspects of dialysis care. However, the QIP would not address other aspects of dialysis care, such as mineral metabolism, which is related to the use of IV vitamin D, unless CMS incorporated additional measures into the program. We and others have noted the importance of monitoring quality of and access to care under bundled payment systems to help ensure that beneficiaries receive appropriate care. Although CMS intends to monitor quality under the new bundled payment system, the extent to which CMS will conduct such monitoring for various groups of beneficiaries is uncertain. CMS officials told us that it was too early in the process of developing a monitoring plan to address how they might monitor various groups of beneficiaries. CMS is developing the capacity to monitor the quality of dialysis care for groups of beneficiaries, such as those with above average costs of care. 
Specifically, CMS is implementing a new database called the Consolidated Renal Operations in a Web-Enabled Network (CROWNWeb), which is designed to collect CPM data as well as other clinical and demographic information for all beneficiaries with ESRD. However, because CMS is still developing its monitoring plans, it is uncertain to what extent CMS will use these data to monitor the quality of dialysis care for various groups of beneficiaries under the new bundled payment system. While CMS has initiatives it plans to use to monitor the quality of dialysis care beneficiaries receive under the new bundled payment system, these initiatives involve systematic monitoring of only one measure of beneficiaries’ access to such care. Specifically, CMS systematically monitors the extent to which beneficiaries are discharged involuntarily from facilities by requiring the networks to track these beneficiaries. To improve the networks’ ability to track these beneficiaries, CMS is developing a database designed to allow the networks to track the number of involuntary discharges based on beneficiary characteristics, such as age, race, and ethnicity. However, according to CMS officials, the agency does not systematically monitor other measures of access to dialysis care, such as the use of dialysis services. Although CMS’s monitoring initiatives do not generally focus on beneficiaries’ access to dialysis care, CMS has the data sources necessary to conduct more comprehensive monitoring of access for various groups of beneficiaries, including those with above average costs of care. In particular, one data source that CMS has available to monitor access to dialysis care is the information it generates on the characteristics of beneficiaries receiving care in dialysis facilities. This facility-level information—the Dialysis Facility Report—is compiled by UM-KECC in part from Medicare claims and the REMIS database. 
CMS could use these data, in addition to information it has on which facilities open or close during a given year, to compare the characteristics of beneficiaries in these facilities. This information could indicate whether facility openings and closures affect the availability of dialysis facilities for certain groups of beneficiaries more than others. CMS also has the data necessary to monitor other measures of access to care, such as changes in the use of dialysis services and shifts in the site of dialysis care. CMS collects data on the use of Medicare-covered services, such as ESRD drugs, through the process of paying claims for these services. In addition, the CROWNWeb database will contain beneficiary-level data on demographic and clinical characteristics. CMS could use these data sources to identify groups of beneficiaries whose service use is higher than average and who therefore may have above average costs of dialysis care. CMS could then use these data to monitor the use of dialysis services for groups of beneficiaries with above average costs of care. Changes in the use of dialysis services could indicate how the new bundled payment system may have affected beneficiaries’ access to these services. For example, if the use of a given dialysis-related drug declined over time for certain groups of beneficiaries but not for others, then this could prompt an assessment of whether this reduction was appropriate and whether the payment system may have caused this difference. CMS could also monitor the extent to which beneficiaries receive emergency dialysis in hospitals rather than outpatient dialysis facilities as an indicator of access to dialysis care. An increase in hospital admissions for emergency dialysis services for certain groups of beneficiaries could indicate that these groups are having difficulty gaining admission to outpatient dialysis facilities. 
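As a rough illustration of the group-level utilization monitoring described above, the following sketch flags beneficiary groups whose use of a dialysis-related drug fell by more than a chosen tolerance between two periods. The group names, utilization figures, and the 5 percent tolerance are all illustrative assumptions, not CMS data or policy.

```python
# Hypothetical sketch of access monitoring via claims-based utilization:
# compare period-over-period use of a dialysis-related drug across
# beneficiary groups and flag groups whose use declined notably.

def flag_declining_groups(use_by_group, threshold=-0.05):
    """use_by_group maps group -> (prior-period use, current use).

    Returns (group, relative change) pairs whose change falls below
    `threshold`; a flagged group could prompt an assessment of whether
    the reduction was appropriate or payment-driven.
    """
    flagged = []
    for group, (before, after) in use_by_group.items():
        change = (after - before) / before
        if change < threshold:
            flagged.append((group, round(change, 3)))
    return flagged

# Illustrative monthly doses per beneficiary before and after the new
# payment system (made-up numbers):
utilization = {
    "all beneficiaries": (10.0, 9.8),
    "dual-eligible":     (10.6, 9.4),
    "age 20 to 44":      (10.9, 10.7),
}
print(flag_declining_groups(utilization))  # [('dual-eligible', -0.113)]
```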
The new bundled payment system for dialysis care—required to be implemented for services furnished on or after January 1, 2011—has the potential to improve the efficiency of care delivery, in part by reducing the financial incentive to use more injectable ESRD drugs than are necessary. However, if this new payment system causes providers to consistently experience financial losses when treating beneficiaries with above average costs, then some beneficiaries could face problems accessing dialysis care or with the quality of that care. Groups of beneficiaries with above average costs of dialysis care, whether related to clinical or demographic factors, may be more vulnerable to these types of problems. Therefore it will be important for CMS to monitor the effect of the new bundled payment system on the access to and quality of dialysis care for these beneficiaries—which is consistent with previous work on the need for such monitoring under other bundled payment systems in Medicare. Furthermore, early identification of any adverse effects of the payment system on beneficiaries will be crucial because their need for life-sustaining dialysis makes them particularly sensitive to disruptions in dialysis care. CMS recognizes the importance of monitoring the effect of its new bundled payment system on beneficiaries and is developing plans for these efforts. In commenting on a draft of this report, CMS stated that it plans to have a comprehensive monitoring strategy in place when the new bundled payment system is implemented on January 1, 2011. However, because CMS’s monitoring plans are preliminary, the extent to which CMS intends to monitor quality for various groups of beneficiaries, such as those with above average costs of care, is unclear. 
Furthermore, while CMS’s preliminary plans for monitoring under the new bundled payment system contain initiatives designed to monitor the quality of dialysis care, these plans involve very limited monitoring of access to these services. CMS has or is developing the tools it could use to monitor access to and quality of dialysis care for various groups of beneficiaries, including those with above average costs of dialysis care. Specifically, CMS currently collects data on the use of injectable ESRD drugs and other Medicare services that could be used to monitor access to these services. CMS is also developing a data system that will contain quality measures for each beneficiary with ESRD. CMS could draw on this capacity as it plans and conducts its monitoring efforts. Moreover, CMS could use information from these efforts to help refine the payment system over time. To help ensure that changes in Medicare payment methods for dialysis care do not adversely affect beneficiaries, we recommend that the Administrator of CMS monitor the access to and quality of dialysis care for groups of beneficiaries, particularly those with above average costs of dialysis care, under the new bundled payment system. Such monitoring should begin as soon as possible once the new bundled payment system is implemented and be used to inform potential refinements to the payment system. We received written comments on a draft version of this report from CMS and oral comments on the draft report from representatives from dialysis facility organizations and from a nephrologist specialty association. In written comments on a draft of this report, CMS agreed with our recommendation and noted that it is planning to actively monitor the effects of the new bundled payment system on all ESRD beneficiaries, including those with above average costs. CMS noted that it plans to have a comprehensive monitoring strategy in place when the payment system is implemented on January 1, 2011. 
In particular, CMS plans to use its existing data sources to examine overall trends in care delivery and quality to help the agency ensure that beneficiaries continue to receive quality care under the new payment system. CMS stated that it would use its existing infrastructure, including the ESRD networks, for quality oversight in the ESRD facilities. Furthermore, CMS indicated that it plans to use information from these monitoring activities for potential refinements to the new bundled payment system and the QIP. CMS noted that our statement that the agency’s preliminary plans involve limited monitoring of access to dialysis care did not reflect the agency’s current planning efforts because our assessment was based on interviews conducted prior to the publication of the ESRD proposed rule, which occurred on September 29, 2009. However, we spoke with CMS officials in December 2009 to review our evidence and findings regarding the agency’s preliminary monitoring plans, and at that time, agency officials told us that our information was accurate. CMS commented that our report suggests that clinical factors, rather than demographic characteristics, are more likely to relate to higher doses of injectable ESRD drugs, resulting in above average expenditures for certain groups of beneficiaries. CMS also noted that the case-mix adjustment model is designed to predict dialysis facility costs and be used in making payments to such facilities based on information they are able to provide on claims. CMS further noted that demographic and other factors had been determined to be statistically significant in predicting facility costs. The results of our study indicate that while Medicare expenditures on injectable ESRD drugs were related to beneficiaries’ demographic characteristics, a majority of clinicians and researchers from whom we obtained input noted that these characteristics by themselves generally were not likely to result in higher doses of injectable ESRD drugs. 
However, we do not draw any conclusions regarding the relative importance of demographic or clinical characteristics in predicting dialysis facility costs for the purposes of a case-mix adjustment model and payment system. Evaluating the appropriateness of CMS’s proposed case-mix adjustment factors was beyond the scope of this study. CMS provided technical comments, which we incorporated as appropriate. We have reprinted CMS’s letter in appendix V. We invited representatives of both large and small dialysis facility organizations and a nephrologist specialty association to review and provide oral comments on the draft report. The groups represented were the Kidney Care Council (KCC), the National Renal Administrators Association (NRAA), and the Renal Physicians Association (RPA). The three groups generally agreed with our message and recommendation to CMS. Their comments focused on three areas: the data and populations analyzed in the report, our findings related to beneficiaries’ demographic characteristics and clinical conditions, and the nature and timeliness of CMS’s monitoring plans. Industry representatives also provided technical comments, which we incorporated as appropriate. First, representatives from each of the organizations commented on the scope of the report by raising potential issues with the data and populations we analyzed. RPA representatives noted that our data on Medicare expenditures for injectable ESRD drugs, which were based on USRDS data for 2007, may not represent current trends in utilization and expenditures. They asserted that prescribing patterns for injectable ESRD drugs may have changed since 2007 and that this may have been due in part to safety concerns associated with ESA use. 
In addition, representatives from both KCC and NRAA stated that the report did not sufficiently examine the socioeconomic status of ESRD beneficiaries, including how beneficiaries with both Medicare and Medicaid coverage would fare under the new bundled payment system. An NRAA representative also noted that our report did not examine data on the poorest ESRD beneficiaries who have Medicaid coverage but do not qualify for Medicare coverage. In addition, KCC representatives noted that the report did not provide enough information on Part D drugs, which CMS proposed to cover under the new bundled payment system. Moreover, RPA representatives noted that there is a great deal of anxiety in the provider community about whether the bundled payment will be sufficient to cover the cost of these drugs. In our report, we analyzed USRDS data on Medicare expenditures for injectable ESRD drugs and demographic characteristics such as age, sex, race, and Medicaid status for 2007 because these were the most recent data available. Moreover, our analysis of data from 2003 through 2006 indicated that the results based on 2007 data were consistent with data from the previous 4 years. We acknowledge, however, that the safety concerns about ESAs could have influenced prescribing practices and that such changes could affect the relationship between expenditures on injectable ESRD drugs and demographic characteristics and have added some detail to the report on these issues. We examined beneficiaries covered by both Medicare and Medicaid because detailed information on beneficiaries’ socioeconomic status is not available. We did not examine data on beneficiaries without Medicare coverage because they are not included in the data CMS used to develop the new bundled payment system. We agree that Part D drugs will be important under the new bundled payment system. 
However, data on the use of these drugs, which according to CMS constituted about 14 percent of Medicare expenditures on all ESRD drugs in 2007, were not available. Second, industry representatives commented on our findings related to beneficiaries’ demographic characteristics and clinical conditions. Representatives from KCC pointed out that our findings on the relationship between Medicare expenditures on injectable ESRD drugs and beneficiaries’ demographic characteristics were consistent with published research on this topic and noted that these relationships are driven by underlying clinical factors. However, RPA representatives noted that the report did not address the reason for these observed relationships. In addition, representatives from KCC and RPA agreed with our finding that clinicians do not take beneficiaries’ demographic characteristics into account when making dosing decisions. However, KCC representatives noted that there was an apparent disconnect between the results of our first and second findings. In order to facilitate interpretation of these results, KCC representatives suggested that we include in the report a copy of the instrument used to collect information from clinicians and researchers on the factors that are likely or not likely to result in above average doses of injectable ESRD drugs. We did not address the extent to which the relationships between Medicare expenditures on injectable ESRD drugs and beneficiaries’ demographic characteristics were driven by underlying clinical factors because doing so was beyond the scope of our study. We did, however, obtain input from clinicians and ESRD researchers to gain insight into the factors that may affect the dose of these drugs for dialysis patients. We agree with KCC’s suggestion and have included the structured data collection instrument in appendix II. 
Finally, representatives from all three organizations agreed that it will be important to monitor the effects of the new bundled payment system on beneficiaries but expressed concern about how CMS would conduct such monitoring. Representatives from NRAA stressed the need to identify vulnerable populations, such as those with high costs of dialysis care, as part of the monitoring process. However, NRAA and RPA representatives questioned how CMS would identify these populations through its monitoring activities. In addition, KCC representatives expressed concern about the timeliness of CMS’s monitoring activities, noting that data from CMS on the provision of dialysis care can have a long lag time, which makes the information less relevant. Representatives from all three organizations expressed concerns related to CROWNWeb implementation. Specifically, both NRAA and RPA representatives noted that they view CROWNWeb as a potentially useful tool for CMS monitoring activities, but are concerned about when it would be fully implemented. NRAA representatives noted that challenges remain to making the database operational. Furthermore, representatives from KCC cautioned that if data in CROWNWeb are not collected in a consistent way across dialysis facilities, the information from this database could be unreliable. Our report recommends that CMS monitor the effect of the new payment system on beneficiaries, such as those who are vulnerable to adverse effects of the payment system because of their above average costs of dialysis care. We also point out in the report that it will be important for CMS to draw on data sources it has or is developing to identify and monitor access to and quality of dialysis care for such groups of beneficiaries. We agree with KCC representatives that CMS’s monitoring activities should be timely so that any problems resulting from the new payment system can be addressed as soon as possible after implementation. 
Our recommendation to CMS emphasizes the need for timely monitoring, particularly given the sensitivity of the dialysis population to potential disruptions in access to and quality of care. We also reported that CROWNWeb is a key element in CMS’s preliminary plans for its monitoring approach, and agree that it is important for CMS to develop reliable data and ensure that such data are available to use as soon as possible after the bundled payment system is implemented. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter. At that time, we will send copies of this report to the appropriate congressional committees and other interested parties. The report will also be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI. Our objectives were to (1) provide information on Medicare expenditures for injectable end-stage renal disease (ESRD) drugs, by beneficiaries’ demographic characteristics; (2) identify the factors that clinicians and researchers indicate are likely to result in a higher-than-average dose of injectable drugs for a dialysis patient; (3) describe the Centers for Medicare & Medicaid Services’ (CMS) approach for addressing differences among beneficiaries in the cost of dialysis care under the new bundled payment system for these services; and (4) examine CMS’s plans for monitoring the effects of the new bundled payment system on beneficiaries. 
To provide information on Medicare expenditures for injectable ESRD drugs, by beneficiaries’ demographic characteristics, we analyzed the most recent available data from a national data system containing information on beneficiaries with ESRD. Specifically, we obtained data from the United States Renal Data System (USRDS) on monthly Medicare expenditures per beneficiary on dialysis in 2007 for injectable ESRD drugs. We focused our analysis on erythropoiesis stimulating agents (ESA), intravenous (IV) iron, and IV vitamin D because these three types of drugs accounted for about 98 percent of the approximately $2.2 billion in Medicare expenditures on injectable ESRD drugs in 2007. We analyzed data for 326,899 Medicare beneficiaries on dialysis in 2007. The data we analyzed did not contain all of the 413,540 beneficiaries on dialysis in 2007 because we excluded beneficiaries (1) who were in Medicare managed care plans, (2) for whom Medicare was not the primary payer, or (3) for whom no claims for Medicare services provided in 2007 were submitted. We analyzed monthly Medicare expenditures per beneficiary in 2007 on ESAs, IV iron, and IV vitamin D across the following demographic characteristics available through the USRDS database: age, sex, race, ethnicity, urban/rural residential location, and whether a beneficiary was enrolled in Medicaid. Additionally, we analyzed USRDS data for 2003 through 2006 to determine whether the results for 2007 were consistent in prior years. We did not address in our expenditure analysis the extent to which the relationships we presented between demographic characteristics and Medicare expenditures reflected underlying clinical or other factors. Data on monthly Medicare expenditures per beneficiary were based on Medicare claims. The expenditure amounts that we presented did not include beneficiary cost sharing. 
Monthly Medicare expenditures per beneficiary were calculated by dividing Medicare expenditures for a given drug by the number of months beneficiaries were on dialysis in 2007. USRDS data on demographic characteristics—with the exception of Medicaid enrollment status—were drawn primarily from CMS’s Renal Management Information System (REMIS) database. Dialysis providers collected these data using a standardized form called the Medical Evidence Form. We used these data to present results on monthly Medicare expenditures on injectable ESRD drugs across the following age categories: 0-19, 20-44, 45-54, 55-64, 65-74, and 75 and older. We selected these age categories to capture the pediatric population (i.e., age 19 and under) and to make the number of beneficiaries within each of the remaining categories similar. The USRDS data we analyzed on race and ethnicity are based on subjective determinations of beneficiaries’ racial and ethnic identity. In addition, these data were collected using different racial and ethnic categories depending on which version of the Medical Evidence Form was used. Figure 3 demonstrates how the racial and ethnic categories on the different versions of the Medical Evidence Form link to the categories we used in this report. A beneficiary’s residence was classified as urban if it was in an area with at least 500 people per square mile, and all other areas were considered rural. Finally, USRDS data on Medicaid enrollment status were drawn from the Medicare Enrollment Database. We used beneficiaries’ Medicaid enrollment status as an indicator of their socioeconomic status because beneficiaries’ income and asset levels determine their eligibility for Medicaid, which provides financial assistance with the cost of medical care. 
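The two rules described above, dividing a drug's expenditures by months on dialysis and classifying residence by population density, can be sketched as follows. This is a minimal illustration with hypothetical figures rather than actual USRDS or Medicare claims data; the function names are ours.

```python
def monthly_expenditure_per_beneficiary(total_drug_expenditure, months_on_dialysis):
    """Monthly Medicare expenditure per beneficiary for a given drug:
    expenditures for the drug divided by the number of months the
    beneficiary was on dialysis during the year."""
    return total_drug_expenditure / months_on_dialysis

def classify_residence(people_per_square_mile):
    """A residence is urban if its area has at least 500 people per
    square mile; all other areas are classified as rural."""
    return "urban" if people_per_square_mile >= 500 else "rural"

# Hypothetical beneficiary: $9,000 in ESA expenditures over 12 months on dialysis.
print(monthly_expenditure_per_beneficiary(9000, 12))  # 750.0
print(classify_residence(520))  # urban
print(classify_residence(480))  # rural
```

In the actual analysis, both inputs came from the claims, REMIS, and enrollment data described above.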
We assessed the reliability of data from the USRDS by interviewing officials responsible for producing these data, reviewing relevant documentation, comparing the results to published sources, and examining the data for obvious errors. Although we report that CMS has concerns about using data on race and ethnicity for the purposes of adjusting bundled payments, we determined that data on these characteristics as well as other USRDS data that we used were sufficiently reliable for the descriptive analytical purposes of our study. To identify the factors that clinicians and researchers indicate are likely to result in a higher-than-average dose of injectable ESRD drugs (specifically, ESAs, IV iron, and IV vitamin D) for a dialysis patient, we developed a structured data collection approach that included interviews with relevant industry groups, clinicians, and researchers with expertise in ESRD as well as the administration of a Web-based data collection instrument to selected nephrology clinicians and ESRD researchers (see app. II for the data collection instrument). To develop this instrument and provide context for our findings, we conducted 20 structured interviews with representatives of large and small dialysis organizations and dialysis-related professional organizations, nephrology clinicians, and researchers with expertise in ESRD and also reviewed the clinical literature related to the use of the three types of drugs. We asked each interviewee about the beneficiary characteristics associated with high or low use of these drugs. We summarized the information obtained from these interviews and used it to develop the lists of clinical factors used for our data collection instrument. The demographic factors listed on the data collection instrument were those we examined in our analysis of Medicare expenditures on injectable ESRD drugs. We pretested the data collection instrument with nephrologists and revised it based on comments we received. 
Through the data collection instrument, clinicians and researchers were asked to identify the clinical and demographic factors that are likely to result in a higher-than-average dose of ESAs, IV iron, or IV vitamin D for a dialysis patient. In addition, individuals who completed the data collection instrument had the option of writing in factors not already listed in the instrument. We analyzed results by calculating, for each factor, the percentage of the 73 clinicians and researchers who completed our data collection instrument and identified that factor as being likely or not likely to result in a higher-than-average dose of each of the three types of ESRD drugs we examined. These results represent the views of the 73 clinicians and researchers and are not generalizable to a broader population. We administered the Web-based data collection instrument to a select number of clinicians and researchers with expertise related to the factors that could impact the dose of injectable ESRD drugs. We selected these individuals in two ways. First, we obtained referrals from national, U.S.-based professional organizations that represent nephrology clinicians (i.e., nephrologists, nephrology nurses, nephrology physician assistants, and advanced practitioners specializing in nephrology) who evaluate and treat dialysis patients. We compiled an initial list of nephrology-related professional societies and associations based on our background research on ESRD. To identify additional organizations, we visited the Web site of each of these organizations and obtained a list of related organizations, if available. We selected eight organizations from these lists that met the above criteria. We asked each organization that we identified for referrals to up to 20 nephrology clinicians who have expertise related to factors that could impact the dose of ESAs, IV iron, or IV vitamin D for dialysis patients. 
We specified in our request that these individuals must (1) be nephrologists, nephrology nurses, physician assistants or advanced practitioners specializing in nephrology, or nephrology technicians/technologists; (2) evaluate and treat dialysis patients; and (3) reside in the United States. We also asked for referrals to major national societies or associations, other than the ones we already planned to contact, that are based in the United States and represent nephrology clinicians. If referrals to additional organizations were provided, we contacted these groups as described above and asked them for referrals to clinicians. We received referrals from seven organizations, including the American Academy of Nephrology Physicians Assistants, the American Nephrology Nurses’ Association, and the American Society of Pediatric Nephrology. The second way we identified clinicians and researchers was through the ESRD literature. Using multiple databases, including BIOSIS Previews®, Elsevier BIOBASE, MEDLINE, SciSearch®, EMBASE®, EMCare, and EMBASE Alert™, we conducted a review of the literature published from 2004 through 2009 related to the use of ESAs, IV iron, and IV vitamin D to treat ESRD patients. We searched these databases for articles related to the dose of these drugs. We administered the Web-based data collection instrument in August and September 2009. We sent the instrument to 131 clinicians and researchers—the 100 referrals we received from professional organizations and an additional 31 primary authors whom we identified through the literature. We received 73 completed instruments. To describe CMS’s approach for addressing differences among beneficiaries in the cost of dialysis care under the new bundled payment system for these services, we reviewed CMS’s proposed rule on the design of the new payment system. 
We also reviewed the Department of Health and Human Services’ report to Congress on the design of the new payment system as well as reports on this topic by the University of Michigan, Kidney Epidemiology and Cost Center (UM-KECC), which has assisted CMS with the payment system’s design. In addition, we interviewed CMS officials and representatives from UM-KECC. We also interviewed representatives of three non-Medicare payers of dialysis care—the Department of Veterans Affairs (VA) and two large health plans—to obtain contextual information about other bundled payment systems. Finally, to examine CMS’s plans for monitoring the effects of the new bundled payment system on beneficiaries’ access to and quality of dialysis care, we interviewed CMS officials and reviewed prior reports as well as CMS’s proposed rule on the design of the new bundled payment system. Table 5 presents detailed information on average monthly Medicare expenditures per beneficiary for injectable ESRD drugs in 2007, by beneficiaries’ demographic characteristics. These results are based on data for 326,899 Medicare beneficiaries on dialysis in 2007 from USRDS. See appendix I for additional detail on the methodology used to generate these results. This appendix contains additional information on the results of the data collection instrument we used to systematically collect information on the factors likely to result in a higher-than-average dose of three types of injectable dialysis-related drugs—ESAs, IV iron, and IV vitamin D (see app. II for the data collection instrument). Table 6 presents data on the factors identified by the 73 clinicians and researchers that are either likely or unlikely to result in higher-than-average doses of ESAs. Tables 7 and 8 present data on IV iron and IV vitamin D, respectively. Following these tables is a brief summary of the open-ended responses to the data collection instrument. 
In addition to selecting from among the list of factors in the data collection instrument, individuals completing the instrument had the option of writing in factors not already listed that they considered as likely to result in above average doses of ESAs, IV iron, or IV vitamin D. Of the 73 clinicians and researchers who completed the data collection instrument, about 23 percent wrote in additional factors for ESAs, 19 percent wrote in such information for IV iron, and about 37 percent did so for IV vitamin D. Examples of additional factors provided by clinicians and researchers for ESAs and IV iron include nonadherence to diet, hyperparathyroidism, lack of predialysis care, and smoking. Examples of factors supplied for IV vitamin D include nonadherence to phosphate binders, nonadherence to diet, and recent hospitalization. In addition to the contact named above, Jessica Farb, Assistant Director; Amyre Barker; William Black; Manuel Buentello; Krister Friday; Rich Lipinski; and Jennifer Whitworth made key contributions to this report.

Medicare covers dialysis for most individuals with end-stage renal disease (ESRD). Beginning in January 2011, the Centers for Medicare & Medicaid Services (CMS) is required to use a single payment to pay for dialysis and related services, which include injectable ESRD drugs. Questions have been raised about this new payment system's effects on the access to and quality of dialysis care for certain groups of beneficiaries, such as those who receive above average doses of injectable ESRD drugs. GAO examined (1) Medicare expenditures for injectable ESRD drugs, by demographic characteristics; (2) factors likely to result in above average doses of these drugs; (3) CMS's approach for addressing beneficiary differences in the cost of dialysis care under the new payment system; and (4) CMS's plans to monitor the new payment system's effects. 
GAO analyzed 2007 data—the most recent available—on Medicare ESRD expenditures and input from 73 nephrology clinicians and researchers collected using a Web-based data collection instrument. GAO also reviewed reports and CMS's proposed rule on the payment system's design and interviewed CMS officials. Certain demographic groups had above average Medicare expenditures for injectable ESRD drugs in 2007. For example, Medicare spent $782 per month on injectable ESRD drugs per African American beneficiary, which was about 13 percent more than the average across all beneficiaries on dialysis and was also higher than for other racial groups. Similarly, monthly Medicare spending per beneficiary with additional coverage through Medicaid was about 6 percent higher than the average across all beneficiaries on dialysis. Although GAO did not identify the factors that led to the differences described above, it did obtain information from 73 nephrology clinicians and researchers, selected through referrals from dialysis-related professional organizations and a literature review, on the factors that they consider likely to result in above average doses of injectable ESRD drugs. A majority of these experts identified primarily clinical factors as likely to result in above average doses of these drugs. For example, at least 50 percent of the 73 clinicians and researchers from whom GAO obtained information identified 14 factors (including chronic blood loss and low iron stores) as likely to result in above average doses of erythropoiesis stimulating agents, which accounted for about 75 percent of expenditures on injectable ESRD drugs in 2007. CMS's proposed design for the new payment system for dialysis care includes, as required by law, two payment mechanisms to address differences across beneficiaries in their costs of dialysis care. 
Under the first payment mechanism—a case-mix adjustment—CMS proposed to adjust payments based on characteristics such as age, sex, and certain clinical conditions that are associated with beneficiaries' costs of dialysis care. The second proposed payment mechanism—an outlier policy—involves making additional payments to providers when they treat patients whose costs of care are substantially higher than would be expected. CMS's preliminary plans for monitoring the effects of the new payment system build on existing initiatives, but it is unclear whether CMS will monitor the effects on the quality of and access to dialysis care for groups of beneficiaries. In prior work, GAO and others have emphasized the importance of monitoring both the quality of and access to care to ensure that Medicare payment system changes do not result in certain groups of beneficiaries experiencing poor care quality or problems accessing services. CMS intends to monitor the quality of dialysis care under the new payment system, but the extent to which CMS will conduct such monitoring for various groups of beneficiaries is currently unclear because CMS's plans are preliminary. Furthermore, CMS's preliminary plans for monitoring access to dialysis care are limited. However, CMS has stated that it will have a comprehensive monitoring strategy in place by January 2011. GAO obtained comments on a draft of this report from CMS and from industry groups representing both large and small dialysis providers and nephrologists. 
The Homeland Security Act of 2002 and subsequently enacted laws—including the Intelligence Reform and Terrorism Prevention Act of 2004 and the 9/11 Commission Act—assigned DHS responsibility for sharing information related to terrorism and homeland security with its state, local, and tribal partners, and authorized additional measures and funding in support of carrying out this mandate. DHS designated its Office of Intelligence and Analysis (I&A) as having responsibility for coordinating efforts to share information that pertains to the safety and security of the U.S. homeland across all levels of government, including federal, state, local, and tribal government agencies. In June 2006, DHS tasked I&A with the responsibility for managing DHS’s support to fusion centers. I&A established a State and Local Fusion Center Joint Program Management Office as the focal point for supporting fusion center operations and to maximize state and local capabilities to detect, prevent, and respond to terrorist and homeland security threats. The office was also established to improve the information flow between DHS and the fusion centers, as well as provide fusion centers with access to the federal intelligence community. Two DHS components—CBP and ICE—have responsibilities for securing the nation’s land borders against terrorism and other threats to homeland security. Specifically, CBP’s Border Patrol agents are responsible for preventing the illegal entry of people and contraband into the United States between ports of entry. This includes preventing terrorists, their weapons, and other related materials from entering the country. Border Patrol’s national strategy calls for it to improve and expand coordination and partnerships with state, local, and tribal law enforcement agencies to gain control of the nation’s borders. ICE is charged with preventing terrorist and criminal activity by targeting the people, money, and materials that support terrorist and criminal organizations. 
According to the agency’s 2008 annual report, ICE recognizes the need for strong partnerships with other law enforcement agencies, including those on the local level, in order to combat criminal and terrorist threats. The FBI serves as the nation’s principal counterterrorism investigative agency, and its mission includes protecting and defending the United States against terrorist threats. The FBI conducts counterterrorism investigations through its field offices and Joint Terrorism Task Forces. In addition, each FBI field office has established a Field Intelligence Group, which consists of intelligence analysts and special agents who gather and analyze information related to identified threats and criminal activity, including terrorism. Each group is to share information with other Field Intelligence Groups across the country, FBI headquarters, and other federal, state, and local law enforcement and intelligence agencies to fill gaps in intelligence. Fusion centers serve as the primary focal points within the state and local environment for the receipt and sharing of information related to terrorist and homeland security threats. In March 2006, DHS released its Support and Implementation Plan for State and Local Fusion Centers. In this plan, DHS describes its responsibility to effectively collaborate with its federal, state, and local partners to share information regarding these threats. To facilitate the effective flow of information among fusion centers, DHS, other federal partners, and the national intelligence community, the plan calls for DHS to assign trained and experienced operational and intelligence personnel to fusion centers and includes the department’s methodology for prioritizing the assignments. The plan also notes that identifying, reviewing, and sharing fusion center best practices and lessons learned is vital to the success of DHS’s overall efforts. 
Accordingly, it recommends that DHS develop rigorous processes to identify, review, and share these best practices and lessons learned. In December 2008, DHS issued a document entitled Interaction with State and Local Fusion Centers Concept of Operations. According to the document, each DHS component field office whose mission aligns with the priorities of the fusion center is to establish a relationship with that center. This relationship should include but not be limited to routine meetings and consistent information sharing among DHS and state and local personnel assigned to each center. The FBI’s role in and support of individual fusion centers varies depending on the level of functionality of the fusion center and the interaction between the particular center and the local FBI field office. FBI efforts to support fusion centers include assigning special agents and intelligence analysts to fusion centers, sharing information, providing space or rent for fusion center facilities in some locations, and ensuring that state and local personnel have appropriate security clearances as well as access to FBI personnel. Since September 11, 2001, several statutes have been enacted into law designed to enhance the sharing of terrorism-related information among federal, state, local, and tribal agencies, and the federal government has developed related strategies, policies, and guidelines to meet its statutory obligations. Regarding border threats, the 9/11 Commission Act contains several provisions that address the federal government’s efforts to share information with state and local fusion centers that serve border communities. 
For example, the act provides for the Secretary of DHS to assign, to the maximum extent practicable, officers and intelligence analysts from DHS components—including CBP and ICE—to state and local fusion centers participating in DHS’s State, Local, and Regional Fusion Center Initiative, with priority given to fusion centers located along borders of the United States. The act provides that federal officers and analysts assigned to fusion centers in general are to assist law enforcement agencies in developing a comprehensive and accurate threat picture, and to create intelligence and other information products for dissemination to law enforcement agencies. In addition, federal officers and analysts assigned to fusion centers along the borders are to have, as a primary responsibility, the creation of border intelligence products that (1) assist state, local, and tribal law enforcement agencies in efficiently helping to detect terrorists and related contraband at U.S. borders; (2) promote consistent and timely sharing of border security-relevant information among jurisdictions along the nation’s borders; and (3) enhance DHS’s situational awareness of terrorist threats in border areas. The act further directed the Secretary of DHS to create a mechanism for state, local, and tribal law enforcement officers to provide voluntary feedback to DHS on the quality and utility of the intelligence products developed under these provisions. Also, in October 2007, the President issued the National Strategy for Information Sharing. According to the strategy, an improved information sharing environment is to be constructed on a foundation of trusted partnerships at all levels of government, based on a shared commitment to detect, prevent, disrupt, preempt, and mitigate the effects of terrorism. 
The strategy identifies the federal government’s information sharing responsibilities to include gathering and documenting the information that state, local, and tribal agencies need to enhance their situational awareness of terrorist threats and calls for authorities at all levels of government to work together to obtain a common understanding of the information needed to prevent, deter, and respond to terrorist attacks. Specifically, the strategy requires that state, local, and tribal law enforcement agencies have access to timely, credible, and actionable information and intelligence about individuals and organizations intending to carry out attacks within the United States; their organizations and their financing; potential targets; activities that could have a nexus to terrorism; and major events or circumstances that might influence state, local, and tribal actions. The strategy also recognizes that fusion centers are vital assets that are critical to sharing information related to terrorism, and will serve as primary focal points within the state and local environment for the receipt and sharing of terrorism-related information. In October 2001, we reported on the importance of sharing information about terrorist threats, vulnerabilities, incidents, and lessons learned. Specifically, we identified best practices in building successful information sharing partnerships that could be applied to entities trying to develop the means of appropriately sharing information. Among the best practices we identified were (1) establishing trusted relationships with a wide variety of federal and nonfederal entities that may be in a position to provide potentially useful information and advice; (2) agreeing to mechanisms for sharing information, such as outreach meetings and task forces; and (3) institutionalizing roles to help ensure continuity and diminish reliance on a single individual. 
Since we designated terrorism-related information sharing a high-risk area in January 2005, we have continued to monitor federal information sharing efforts. Also, as part of this monitoring, in April 2008, we reported on our assessment of the status of fusion centers and how the federal government is supporting them. Our fusion center report and subsequent testimony highlighted continuing challenges—such as the centers’ ability to access information and obtain funding—that DHS and DOJ needed to address to support the fusion centers’ role in facilitating information sharing among federal, state, and local governments. We also recognized the need for the federal government to determine and articulate its long-term fusion center role and whether it expects to provide resources to help ensure their sustainability, and we made a recommendation to that effect to which DHS agreed. At the time of this review, DHS was in the process of implementing the recommendation. In general, local and tribal officials in the border communities we contacted who reported to us that they received information directly from the local office of Border Patrol, ICE, or the FBI said it was useful for enhancing their situational awareness of crimes along the border and potential terrorist threats. Overall, where information sharing among federal, local, and tribal agencies along the borders occurred, local and tribal officials generally said they had discussed their information needs with federal agencies in the vicinity and had established information sharing partnerships with related mechanisms to share information with federal officials—consistent with the National Strategy for Information Sharing—while the agencies that reported not receiving information from federal agencies generally said they had not discussed their needs and had not established partnerships. 
Officials from three-quarters (15 of 20) of the local and tribal law enforcement agencies in the border communities we contacted said they received information directly from the local office of at least one federal agency (Border Patrol, ICE, or the FBI), and 9 of the 20 reported receiving information from the local office of all three of these federal agencies. However, 5 of the 20 reported that they did not receive information from any of these three agencies, in part because information sharing partnerships and related mechanisms to share information did not exist. We discuss information sharing partnerships and other factors that affect information sharing between federal agencies and local and tribal agencies in border communities later in this report. Figure 1 shows the number of local and tribal agencies that reported receiving information directly from the local office of Border Patrol, ICE, and the FBI. Overall, the local and tribal law enforcement agencies we contacted that received information from federal agencies in the vicinity found it useful in enhancing their situational awareness of border crimes and potential terrorist threats. Local and tribal law enforcement officials in 14 of 20 border communities we contacted said they received a range of information directly from local Border Patrol officials, including incident reports and alerts regarding specific individuals with potential links to criminal activity—such as illegal immigration and drug trafficking—as well as border-related threat assessments and reports of suspicious activity. According to the local and tribal officials, they received this information through direct outreach or visits, phone calls, and e-mails, as well as through issued alerts and bulletins. 
Of the 14 local and tribal officials that reported receiving information from Border Patrol officials in the vicinity, 12 said it was useful and enhanced their situational awareness of criminal activities and potential terrorist threats along the border and 2 did not take a position when asked about the information’s usefulness. For example, one tribal police department official reported that Border Patrol provides an area assessment that specifically targets the illicit smuggling of humans and contraband in and around the tribal lands and depicts the threat posed by illegal activity occurring in the area. The official said that this assessment helped the department identify and emphasize those areas on which to focus. Local and tribal officials from the remaining 6 border communities we contacted said they did not receive any information directly from Border Patrol officials in the vicinity, in part because information sharing partnerships and related mechanisms to share information did not exist. Border Patrol officials in the communities we visited said they shared information related to various types of crimes with their local and tribal partners, including information related to illegal immigration and drug trafficking. The officials said this information is shared primarily through established information sharing partnerships and related mechanisms, including joint border operations and task forces, such as Integrated Border Enforcement Teams. The officials noted that they generally did not have specific terrorism-related information to share with local and tribal agencies, but that the information they share is intended to enhance situational awareness of border crimes that terrorists could potentially exploit, such as illegal immigration. 
Local and tribal law enforcement officials in 10 of 20 border communities we contacted said they received information from ICE officials in the vicinity, including specific persons of interest they should be on the lookout for, as well as information on drug smuggling and drug cartel activities, human smuggling, and other crimes. The officials said such information is important because it is pertinent to their immediate area. These agencies reported receiving information by e-mail or in person, as well as through participation in task forces, such as Border Enforcement Security Task Forces. For example, in one southwest border location, law enforcement officials said that the department receives information about potential criminal activities in their jurisdiction from ICE based on joint investigations it has conducted with the agency. Of the 10 local and tribal officials that reported receiving information from local ICE officials, 8 said it was useful and enhanced their situational awareness of criminal activities and potential terrorist threats along the border and 2 did not take a position when asked about the information’s usefulness. Officials from the remaining 10 local and tribal agencies we contacted said they did not receive any information from local ICE officials, in part because information sharing partnerships and related mechanisms to share information did not exist. According to ICE headquarters officials, in addition to sharing information at the local level, ICE has significantly expanded its interaction with state, local, and tribal law enforcement officials through automated systems that allow these officials to access and search certain DHS and ICE law enforcement and investigative information. 
Local and tribal law enforcement officials in 13 of 20 border locations we contacted said they received a range of information directly from local FBI officials, including intelligence assessments and bulletins, threat assessments and terrorism-related alerts, and information on criminal activity. Of the 13 local and tribal officials that reported receiving information from local FBI officials, 12 said it was useful and enhanced their situational awareness of potential terrorist threats along the border and 1 did not take a position when asked about the information’s usefulness. Local and tribal officials in 7 of the 20 border locations we contacted said they did not receive any information directly from local FBI officials, in part because information sharing partnerships and related mechanisms to share information did not exist. FBI officials in the border communities we visited said that they understood the desire of local and tribal law enforcement agencies to receive terrorism-related information that is specific to the border or to their geographic area in particular. However, the officials explained that in many cases, such information is classified, so the FBI can only share it with officials that have a need to know the information and have the requisite security clearances, as well as secure systems, networks, or facilities to safeguard the information. FBI officials also said that information related to ongoing investigations is generally only shared with local officials that participate in an FBI Joint Terrorism Task Force, since sharing the information outside the task force could jeopardize the investigations. Finally, the officials said that at times, terrorism-related information that is specific to the border simply may not exist. 
Local and tribal law enforcement officials we met with recognized that the FBI has limits on what it can share—including information that is classified—and said they had no intention of interfering with ongoing investigations. However, they also thought the FBI could better communicate when these limits were in effect and when the agency simply had no information to share. We discuss the importance of establishing information sharing partnerships to facilitate discussions between the parties and minimize expectation gaps later in this report. According to FBI officials at the locations we contacted, information that is not related to ongoing investigations is shared with local and tribal agencies through a variety of mechanisms, including task forces (e.g., Safe Trails Task Forces) and working groups; periodic outreach meetings the FBI conducts with local and tribal agencies to both share and solicit information; and through ongoing information sharing partnerships. FBI headquarters officials noted that each FBI field office—through its Field Intelligence Group—is to routinely assess the terrorism and criminal threats and risks in its geographic area of responsibility and report the results to FBI headquarters. The officials said that the assessments incorporate border-specific issues when appropriate, such as the illegal entry of possible terrorists, identification of human smuggling organizations, and the smuggling of weapons and other material which could be employed in terrorist attacks. However, the officials said that the results of the assessments are classified and are generally not shared with local and tribal officials, although in some cases selected information is declassified and distributed through alerts and bulletins. 
Further, according to FBI headquarters, much of the FBI’s information sharing with other law enforcement entities occurs at the officer or investigator level, often without the specific knowledge of the state and local personnel we interviewed for this report. The FBI also emphasized that most Indian Reservations and tribal law enforcement agencies are located in remote areas of the United States—100 miles or more away from an FBI office—where information sharing between FBI agents and tribal law enforcement occurs on an ad hoc basis, usually focused on investigations of crimes occurring on Indian reservations. We recognize that information sharing can occur at the officer or investigator level and on an ad hoc basis. However, as discussed later in this report, limiting information sharing to the officer and investigator level will not ensure that information sharing partnerships are established between agencies. Rather, discussions at senior levels—including the county sheriffs, local police chiefs, and tribal police chiefs we met with—could help ensure continuity in information sharing and diminish reliance on any one individual, which is a best practice in building successful information sharing partnerships. FBI headquarters also noted that in addition to sharing information directly with local and tribal officials in border communities, the FBI disseminates information to these officials through information systems—such as the FBI’s Law Enforcement Online and eGuardian system—and the FBI’s participation in state and local fusion centers and other interagency task forces and intelligence centers throughout the country (e.g., High Intensity Drug Trafficking Area Investigative Support Centers). The FBI noted that it is through these venues that the FBI also accomplishes its information sharing responsibilities to other federal, state, and local partners. 
The National Strategy for Information Sharing identifies the federal government’s information sharing responsibilities to include gathering and documenting the information that state, local, and tribal agencies need to enhance their situational awareness of terrorist threats. Figure 2 shows the number of the local and tribal agencies in the border communities we contacted that reported discussing their information needs with federal officials in the vicinity. Overall, where local and tribal law enforcement officials in border communities had discussed their information needs with federal officials in the vicinity, they also reported receiving useful information from the federal agencies that enhanced their situational awareness of border crimes and potential terrorist threats. Specifically: Officials from 7 of the 11 localities that had discussed their information needs with Border Patrol officials in the vicinity also reported receiving useful information from them. Officials from each of the 9 localities that had discussed their information needs with ICE officials in the vicinity reported receiving useful information from them. Officials from each of the 8 localities that had discussed their information needs with FBI officials in the vicinity reported receiving useful information from them. Local and tribal officials in the border communities we contacted said they shared their information needs with federal officials through a variety of methods, including regularly scheduled meetings, periodic outreach performed by federal agencies, ad hoc meetings, and established working relationships. For example, one police chief along the southwest border said that he discussed his need for real-time information about border crimes that could affect his area with local federal agency officials. He noted that after he held these discussions, the federal officials took steps to provide his department with this type of information. 
Nevertheless, as shown in figure 2 above, officials from about one-half of the local and tribal agencies in the border communities we contacted reported that federal officials had not discussed information needs with them, as called for in the National Strategy for Information Sharing. Our discussions with local and tribal officials revealed that where the needs were not discussed, local and tribal agencies also were less likely to have received information from federal agencies than in the localities where needs were discussed. Specifically: Officials from 4 of the 7 localities that had not discussed their information needs with Border Patrol officials in the vicinity also reported not receiving information from them, while the other 3 had received information from Border Patrol. Officials from each of the 9 localities that had not discussed their information needs with ICE officials in the vicinity also reported not receiving information from them. Officials from 7 of the 11 localities that had not discussed their information needs with FBI officials in the vicinity also reported not receiving information from them, while the other 4 reported receiving information from the FBI. While the data above show that federal agencies shared information with local and tribal officials in several cases where information needs had not been discussed, identifying these needs could better support federal agency efforts to provide local and tribal agencies with useful information that is relevant to their jurisdiction. A primary reason why federal agencies had not identified the information needs of local and tribal agencies in many of the border communities we visited was because the methods federal agencies used to solicit the needs, while effective for some localities, were not effective for others. 
Specifically, Border Patrol and ICE officials said that the information needs of these agencies were generally identified through outreach meetings or through working relationships with local and tribal law enforcement officers. Where these interactions did not exist, the federal agencies generally had not identified the information needs of local and tribal agencies. Also, according to a local police chief, while information needs may be discussed between local officers and federal agents on an ad hoc basis, his department cannot rely on these interactions to ensure that federal agencies have identified the overall information needs of the department. According to FBI headquarters officials, in developing field office area assessments, Field Intelligence Group personnel are required to gather information on terrorism and criminal threats and risks from local and tribal law enforcement agency officials, wherein the information needs of these agencies would be identified. FBI headquarters also noted that through outreach meetings and participation in task forces and working groups, FBI field offices continually evaluate the information needs of their local and tribal partners, as well as their own, and take actions to identify and fill any information gaps. Despite these efforts, less than one-half of the local and tribal agencies we contacted reported discussing their information needs with FBI officials in the vicinity. By more consistently and more fully identifying the information needs of local and tribal agencies in border communities, as called for in the National Strategy for Information Sharing, federal agencies could be better positioned to provide these local and tribal agencies with useful information that enhances their situational awareness of border crimes and potential terrorist threats. 
The National Strategy for Information Sharing recognizes that effective information sharing comes through strong partnerships among federal, local, and tribal partners. In addition, the current strategic plans of DHS and the FBI both acknowledge the need to establish information sharing partnerships with state, local, and tribal law enforcement agencies to help the agencies fulfill their missions, roles, and responsibilities. Figure 3 shows the number of local and tribal agency officials in the border communities we contacted that reported having established or were developing an information sharing partnership with Border Patrol, ICE, and the FBI officials in the vicinity. Overall, where local and tribal law enforcement officials in border communities had established or were developing information sharing partnerships with federal officials in the vicinity, they also reported receiving information from the federal agencies that enhanced their situational awareness of border crimes and potential terrorist threats. Specifically: Officials from 13 of the 14 localities that had or were developing an information sharing partnership with Border Patrol officials in the vicinity also reported receiving information from them. Officials from 10 of 13 localities that had an information sharing partnership with ICE officials in the vicinity also reported receiving information from them, while the other 3 were not receiving information from ICE. Officials from each of the 11 localities that had an information sharing partnership with FBI officials in the vicinity also reported receiving information from them. 
The local and tribal agencies that had developed partnerships with federal agencies in the vicinity had established a variety of mechanisms to share information, including regularly scheduled meetings, periodic outreach performed by federal agencies, ad hoc meetings, task forces and working groups, established working relationships, phone calls, e-mails, and issued alerts and bulletins. In some locations, Border Patrol and local law enforcement officials worked together in operational efforts that provided opportunities for federal and local officials to develop information sharing partnerships. For example, Operation Border Star in Texas—a state-led, multiagency effort focused on reducing crime, such as illegal immigration and drug trafficking, in targeted regions along the Texas–Mexico border— draws resources from local law enforcement agencies, the Texas Department of Public Safety, and others to support Border Patrol. Also, in upstate New York, a county sheriff’s department conducted joint patrols with Border Patrol, which extended into Canada. The patrols are designed to prevent the illegal entry of individuals into the United States and the smuggling of contraband. These operations provide an opportunity for officers from all of the agencies to work together and facilitate information sharing. Most of the local and tribal officials that had developed information sharing partnerships with ICE officials reported establishing them through personal contacts, made either while working on various task forces alongside ICE personnel or through working relationships between agents and officers in both agencies. For example, one tribal police chief said that his department has a memorandum of understanding with ICE, which allows the tribal police to perform certain ICE duties in the enforcement of customs laws and facilitates information sharing between the agencies. 
Nevertheless, as shown in figure 3 above, officials from several local and tribal agencies in the border communities we contacted reported that they had not established information sharing partnerships with Border Patrol, ICE, or FBI officials in the vicinity. Where partnerships were not established, local and tribal agencies also were less likely to have received information from federal agencies than in the localities where partnerships were established. Specifically: Officials from each of the 5 localities that did not have an information sharing partnership with local Border Patrol officials in the vicinity also reported they had not received information from them. Officials from each of the 7 localities that did not have an information sharing partnership with ICE officials in the vicinity also reported they had not received information from them. Officials from 7 of the 9 localities that said they did not have an information sharing partnership with FBI officials also reported they had not received information from the FBI. The local and tribal officials that did not have a partnership with federal officials and were not receiving information said that effective mechanisms for sharing information—a best practice in building successful information sharing partnerships—had not been established. One reason the officials said established mechanisms were not effective was that they did not have enough resources or funding to participate in the regular meetings or forums that Border Patrol, ICE, and FBI officials in the vicinity used to share information, establish face-to-face contact, and build trusting relationships. For example, an official from one local police department said he was aware of Border Patrol’s efforts to share information through such meetings, but the department did not have the resources needed to participate, since doing so would leave the office short one out of eight patrol officers. 
Officials at another location said they no longer received invitations to Border Patrol meetings. Similarly, local and tribal officials in other localities said they did not have enough resources to send individuals to participate in outreach meetings that FBI officials said were used to share information, because in some cases the meetings were held more than 100 miles away. A local county sheriff also said that the FBI’s meetings were initially productive but interest faded because of the lack of useful information that was shared during the meetings. An FBI official from another locality noted that FBI officials are sometimes limited in what they can discuss during these meetings if the local and tribal representatives in attendance do not have the appropriate security clearances or do not have a need to know about the information. These examples illustrate the importance of establishing partnerships to facilitate discussions between the parties and minimize expectation gaps regarding the availability of and limits in sharing information. Border Patrol, ICE, and FBI officials also said that information is shared with local and tribal agencies through multiagency task forces, such as ICE Border Enforcement Security Task Forces and FBI Joint Terrorism Task Forces. However, local and tribal officials—especially those in small departments in rural border communities—said these mechanisms to share information were not effective for them, because they did not have enough resources to dedicate personnel to the task forces. Police chiefs and other senior local and tribal officials recognized that ad hoc discussions between officers and investigators are also mechanisms federal agencies in the vicinity use to share information with local and tribal agencies. 
The officials noted, however, that limiting information sharing to the officer and investigator level is not sufficient to ensure that senior-level department officials are aware of the information, which in turn could be disseminated to other personnel within the department. For example, a police chief in a local community along the southwest border said that he does not need the FBI to brief his entire department, but the FBI should at least brief the police chief. Best practices in building information sharing partnerships call for institutionalizing information sharing through discussions at senior levels to ensure continuity in sharing and diminish reliance on single individuals. We recognize that developing and maintaining information sharing partnerships with the numerous local and tribal law enforcement agencies along the borders is a significant challenge, and that Border Patrol, ICE, and the FBI have made progress in this area. However, additional efforts by these federal agencies to periodically assess the extent to which partnerships and related mechanisms to share information exist, fill gaps, and address barriers to establishing such partnerships and mechanisms could help ensure that local and tribal law enforcement agencies receive information that enhances their situational awareness of border crimes and potential terrorist threats. Federal agencies at two of the five fusion centers we visited were supporting fusion center efforts to develop border intelligence products that enhanced local and tribal agencies’ situational awareness of border crimes and potential terrorist threats. DHS recognizes that it needs to add personnel to fusion centers in border states to support the creation of such products, and is developing related plans, but cited funding issues and competing priorities as barriers to deploying such personnel. 
Further, additional DHS and FBI actions to (1) identify and market promising practices from fusion centers that develop border intelligence products and (2) obtain feedback from local and tribal officials on the utility and quality of the products and use the feedback to improve those products would strengthen future fusion center efforts to develop such products. Federal personnel at two of the five fusion centers we visited—the Arizona Counterterrorism Information Center and the New York State Intelligence Center—were routinely contributing to border intelligence products that were designed to enhance local and tribal law enforcement agencies’ situational awareness of border crimes and potential terrorist threats. Fusion center officials in these states emphasized that the physical presence of federal personnel at the fusion center—including intelligence analysts from I&A, Border Patrol, ICE, and the FBI—was critical to developing the border products, in part because their presence facilitated regular meetings with center personnel and access to federal information systems. According to local and tribal officials in the border communities we contacted in Arizona and New York, the border intelligence products they received generally enhanced their situational awareness of border-related crimes that could have a nexus to terrorism, such as drug trafficking and illegal immigration. However, the border products usually did not contain terrorism-related information that was specific to the border because such information did not exist or a link between a border crime and terrorism had not been established, according to fusion center officials. The two fusion centers also routinely generated terrorism information products that were provided to local and tribal agencies throughout the state to enhance their situational awareness of terrorist threats. 
Officials from the two fusion centers said that any terrorism-related information that is specific to the border would be included in both the border product and terrorism product. Below is additional information about the border intelligence products developed by the two fusion centers:

Arizona Counterterrorism Information Center: The center issues a border-specific product (the “Situational Awareness Bulletin”) twice a week with input from the state’s Department of Public Safety and numerous federal agencies, including DHS’s I&A, Border Patrol, and ICE, and the FBI. The center initiated the bulletin in 2008 to enhance the situational awareness of local law enforcement officials along the Arizona border as drug-related violence on the Mexican side of the border increased. The bulletin now provides information about all types of crimes occurring in the vicinity of the border, as well as incidents from around the country and around the world. Topics have included immigration issues, burglaries at public safety offices, suspicious activities around critical infrastructure, stolen military uniforms, and stolen blank vehicle certificates of title.

New York State Intelligence Center: The center’s Border Intelligence Unit issues a border-specific report quarterly with input from the New York State Police and numerous federal agencies, including DHS’s I&A, Border Patrol, and ICE, and the FBI. The report is intended to compile information on all types of crimes along the entire border between New York and Canada into one product for the convenience of local and tribal law enforcement agencies. This report covers crimes—such as illegal immigration and drug trafficking—and includes the results of joint federal and state operations conducted along the border. The report also contains news and updates on policies related to border security. 
According to center officials, the report grew out of recognition that various federal component agencies have offices that cover the border territory and could, therefore, collectively provide consistent intelligence information that would be helpful in enhancing the situational awareness of law enforcement agencies in border communities throughout the state. The Border Intelligence Unit also issues bulletins with actionable information on border-related crimes on an as-needed basis. In addition to the benefits that officials from the two fusion centers cited from having on-site input and collaboration from representatives of three DHS components, the FBI, and other agencies, the majority of local and tribal agencies in the border communities we contacted found the border intelligence products to be useful. Specifically, six of the seven local and tribal law enforcement agencies we contacted in Arizona and New York were receiving border intelligence products from the fusion center in their state and all six found that the products were useful or met their information needs. For example, one local law enforcement official said that his agency receives the quarterly border report developed by the New York fusion center and that he finds it useful as it sometimes contains issues directly related to his jurisdiction. The remaining locality did not comment on why it did not receive the products. According to officials from the other three fusion centers we visited, the presence of additional federal personnel would support their efforts to develop border intelligence products that help to provide local and tribal law enforcement agencies along the borders with situational awareness of potential terrorist threats. For example:

Washington Fusion Center: The Washington state fusion center is colocated with the local Joint Terrorism Task Force, which facilitates access to FBI information, and has representatives from DHS’s I&A and ICE. According to Border Patrol headquarters officials, as of August 2009, the agency was in the process of assigning a full-time representative to the fusion center. The fusion center director noted that this official, once integrated into the center’s report development process, would contribute greatly toward producing a border intelligence product. The fusion center director added that the border intelligence product would focus on all border crime issues, including any suspected terrorist activity.

Montana All Threat Intelligence Center: The Montana All Threat Intelligence Center is colocated with the local Joint Terrorism Task Force, which facilitates access to FBI information. According to the fusion center director, a CBP analyst has supported the center part-time, though most of the time that person is working at the CBP office located 90 miles away. In August 2009, Border Patrol headquarters officials said that a full-time representative had been assigned to the fusion center. The fusion center director said that he expected an analyst from I&A to be assigned to the fusion center, but was unsure when that would happen. According to the director, additional federal personnel and their ability to analyze border-related information would enhance the fusion center’s efforts to routinely produce a border intelligence product.

Texas Intelligence Center: The Texas Intelligence Center is located within the Texas Department of Public Safety, and currently has representatives from I&A, ICE, and the FBI. Although the center prepares and disseminates a number of products, including a daily brief covering, among other issues, significant arrests, seizures, and homeland security issues, it does not prepare an intelligence product that focuses on border issues. Officials at the center said that the state’s Border Security Operations Teams located along the border distribute information on border security issues to local and tribal agencies. 
According to the officials, the center will consider developing a border intelligence product once personnel from other appropriate agencies, such as Border Patrol, are in place at the fusion center. The director of I&A’s State and Local Fusion Center Program Management Office—the office responsible for managing the relationship between I&A and fusion centers—acknowledged the value of having personnel from DHS components physically present at fusion centers, not only for state and local law enforcement but for federal agencies as well. The director noted that deploying DHS analysts to fusion centers is critical to developing trusted partnerships, which in turn will facilitate collaboration and information sharing among federal, state, local, and tribal officials. However, the director explained that, to date, the office has not received the funding needed to deploy the personnel to other centers and has other competing priorities. DHS has had a plan for deploying personnel from its component agencies to fusion centers since June 2006, when the DHS Secretary signed the Support Implementation Plan for State and Local Fusion Centers. The plan calls for embedding DHS personnel with access to information, technology, and training in fusion centers to form the basis of a nationwide homeland security information network for collaboration and information sharing. According to the director of I&A’s State and Local Fusion Center Program Management Office, in part because of limited resources, the department is taking a risk-based approach to determining where to deploy officers and analysts. As such, the department considers several factors in addition to available funding, including population density, the number of critical infrastructure facilities, and the results of fusion center assessments the office conducts to determine the readiness of the center to use the department’s resources. 
Senior I&A officials noted that the department places some priority on deploying DHS personnel to state and local fusion centers located in border states, but that other factors also have to be considered under the department’s risk-based approach. According to DHS, as of September 2009, I&A had deployed 41 intelligence analysts to state, local, and regional fusion centers. DHS plans to have a total of about 70 I&A analysts at fusion centers by the end of fiscal year 2010 and an equal number of officers and analysts from DHS component agencies (e.g., Border Patrol and ICE). Figure 4 shows DHS personnel that were assigned to fusion centers in the 14 land border states as of August 2009. According to CBP headquarters officials, the agency has only a limited number of Border Patrol intelligence analysts, and is currently working with I&A to identify priority fusion centers. Officials from ICE’s Office of Intelligence also said that the agency is working with I&A to develop a strategy to enhance ICE participation at state and local fusion centers. Further, although the 9/11 Commission Act included an authorization for $10 million for each of the fiscal years 2008 through 2012 for DHS to carry out the State, Local, and Regional Fusion Center Initiative—including the assignment of CBP, ICE, and other DHS stakeholder personnel to fusion centers—DHS did not specifically request funding for the initiative and no funds were appropriated for fiscal years 2008 or 2009 for this specific purpose. Rather, for fiscal years 2008 and 2009, DHS reprogrammed funds from other activities to support the fusion center initiative. According to the director of I&A’s State and Local Fusion Center Program Management Office, DHS requested funding for the initiative in its fiscal year 2010 budget. 
Although the 9/11 Commission Act did not address FBI participation at fusion centers, FBI intelligence analysts and special agents were dedicated to fusion centers in 8 of the 14 land border states as of September 2009, in addition to FBI personnel at Joint Terrorism Task Forces or Field Intelligence Groups that were colocated with these fusion centers. The FBI noted that it has committed millions of dollars over the years to ensure that its classified computer system and other databases and equipment were deployed to support FBI personnel assigned on a full- or part-time basis to fusion centers. According to the FBI, the bureau has worked with DHS to develop uniform construction standards and security protocols specifically designed to facilitate the introduction of federal classified computer systems in fusion centers. Further, the FBI noted that it has deployed the eGuardian system—an unclassified counterterrorism tool—to fusion centers and other entities. Border intelligence products—such as those developed by the Arizona and New York fusion centers—represent potential approaches that other border state fusion centers could use to target products for local and tribal law enforcement agencies in border communities. I&A has a framework in place to identify and collect promising practices at fusion centers nationwide, as called for in the department’s March 2006 Support Implementation Plan for State and Local Fusion Centers and the December 2008 Interaction with State and Local Fusion Center Concept of Operations. Specifically, the implementation plan for fusion centers recommended that rigorous processes be used to identify, review, and share information regarding promising practices and lessons learned. 
Consistent with that recommendation, the concept of operations identifies leveraging promising practices for information sharing and revising existing processes when necessary and advisable as one of the guiding principles of interaction with fusion centers. However, as of July 2009, I&A had not yet identified or explored promising practices related to fusion center efforts to develop border intelligence products. According to the director of I&A’s Border Security Division, such analysis has potential value but has not yet occurred because the division has been focusing on developing its own products and providing other support to fusion centers. While it is understandable that I&A would focus on its own activities, DHS could benefit from identifying promising practices related to fusion center border intelligence products because of the importance the federal government places on fusion centers to facilitate the sharing of information. By identifying such practices, DHS would be better positioned to leverage existing resources and help ensure that local and tribal agencies in border communities receive information that enhances their situational awareness of potential terrorist threats. Also, DHS had not obtained feedback on the utility and quality of the border intelligence products that its analysts in fusion centers have helped to develop. The 9/11 Commission Act requires DHS to (1) create a voluntary feedback mechanism for state, local, and tribal law enforcement officers and other consumers of the intelligence and information products developed by DHS personnel assigned to fusion centers under the act and (2) provide annual reports to committees of Congress describing the consumer feedback obtained and, if applicable, how the department has adjusted its own production of intelligence products in response to that consumer feedback. 
However, DHS’s December 2008 and August 2009 reports to Congress did not describe the feedback obtained on the intelligence products that its analysts in fusion centers helped to produce—including border intelligence products—or adjustments made in response to the feedback. DHS recognizes that it needs to take additional actions to obtain feedback from local and tribal law enforcement officers who are consumers of the intelligence products that I&A produces. For example, in mid-2009, I&A hired a contractor to initiate feedback pilot projects, including one currently underway to evaluate and implement processes for gathering and evaluating feedback responses. However, these projects are designed to solicit feedback on products developed by I&A and do not specifically include products that DHS personnel in fusion centers help to develop, including border intelligence products. Therefore, these projects may not support I&A efforts to obtain feedback under the 9/11 Commission Act on products that DHS personnel in fusion centers help to develop. DHS’s August 2009 report to Congress generally illustrates the value in obtaining feedback on intelligence products. For example, in one instance, the report notes that a state fusion center expressed concerns that the perspectives of three southwest border state fusion centers were not included in an assessment that I&A headquarters produced on border violence. The feedback resulted in teleconferences and other I&A actions to ensure that state and local perspectives are included in future assessments of border violence. Similarly, obtaining feedback on the border intelligence products that DHS analysts in fusion centers help to produce would support other fusion center efforts to develop such products and the department’s efforts to adjust its own production of intelligence products in response to that consumer feedback. 
The two fusion centers we contacted that were creating border intelligence products with the support of DHS personnel (Arizona and New York) had established their own mechanisms for obtaining feedback from local and tribal consumers of the products. Specifically, the fusion centers attached feedback forms to the border products, but have received low response rates, according to center officials. As a result, the fusion centers took other actions to solicit feedback on the border products, such as through direct outreach with local and tribal consumers of the information. Officials from both fusion centers said that the feedback has generally been positive and that the border products have been modified in response to this feedback. According to the officials, since these products are developed by the fusion centers, the centers do not routinely provide related feedback to DHS on the value of the contributions of its staff and intelligence input. However, the fusion centers’ efforts to obtain feedback on the border intelligence products—in addition to using feedback forms—demonstrate the feasibility of DHS taking additional actions to collect feedback on the products and report its findings to congressional committees under the 9/11 Commission Act. DHS agrees that it could take additional actions to collect this feedback, which could be done as part of the department’s ongoing feedback pilot projects. By working with fusion centers to obtain feedback on the border intelligence products developed, DHS could better support fusion center efforts to maintain and improve the utility and quality of information provided to local and tribal law enforcement agencies along the borders. 
This information could also be useful to I&A in modifying its own border intelligence products to better meet the needs of fusion centers, assist the department in making decisions on how to best utilize its limited resources at fusion centers, and be responsive to its statutory reporting requirements. Detecting the warning signs of potential terrorist activities and sharing the information with the proper agencies provides an opportunity to prevent a terrorist attack. However, most of the local and tribal officials in the border communities we contacted did not clearly know what suspicious activities federal agencies and fusion centers wanted them to report, how to report them, or to whom. The federal government is working with state and local entities to develop a standardized suspicious activity reporting process that, when implemented, could help address these issues. In the meantime, providing local and tribal officials with suspicious activity indicators that are associated with criminal activity along the borders could assist the officials in identifying potential terrorist threats. According to an October 2008 intergovernmental report on suspicious activities, fundamental to local and tribal law enforcement agencies’ efforts to detect and mitigate potential terrorist threats is ensuring that front-line personnel recognize and have the ability to document behaviors and incidents indicative of criminal activity associated with international terrorism. Unlike behaviors, activities, or situations that are clearly criminal in nature—such as car thefts, burglaries, or assaults—suspicious activity reporting involves suspicious behaviors that have been associated with terrorist activities in the past and may be predictive of future threats to public safety. Examples include surveillance, photographing of facilities, site breaches or physical intrusion, cyber attacks, and the probing of security. 
To varying degrees, federal agencies and fusion centers provided local and tribal agencies in the border communities we contacted with alerts, warnings, and other information that enhanced the local and tribal agencies’ situational awareness of potential terrorist threats. As an additional tool, the FBI and fusion centers in two of the five states we contacted had developed lists of suspicious activities—in the form of reference cards or brochures—to help local and tribal agencies determine what behaviors, activities, or situations are indicators of potential terrorist activities and should be reported for further analysis. However, officials from 13 of the 20 local and tribal agencies we contacted said they did not recall being provided with a list of the suspicious activities or indicators that rise to the level of potential terrorist threats and should be reported, while officials from 7 of the 20 agencies said they had received such indicators from either the FBI, the state fusion center, or another entity. According to the October 2008 intergovernmental report on suspicious activities, local law enforcement agencies are critical to efforts to protect local communities from another terrorist attack. The report also notes that to effectively conduct these duties, it is critical that the federal government ensure that local law enforcement personnel can recognize and have the ability to document behaviors and incidents indicative of criminal activity associated with domestic and international terrorism. While federal agencies and fusion centers had taken steps to disseminate or discuss terrorism-related indicators with local and tribal officials—such as through mass mailings and during outreach meetings and law enforcement conferences—these actions did not ensure that local and tribal agencies were aware of them, in part because the mechanisms used to share information were not always effective, as discussed earlier in this report. 
As a result of not being aware of the suspicious activity indicators, local officials in three border communities we contacted said they did not clearly know what information federal agencies and fusion centers wanted them to collect and report. Increased awareness of these indicators would better position local and tribal agencies along the border to identify and report behaviors and incidents indicative of criminal activity associated with terrorism. Also, in about half of the border communities we contacted, local and tribal agency officials were not aware of the specific processes they were to use to report terrorism-related suspicious activities or to whom this information should be reported because federal agencies had not yet defined such processes. Absent defined processes, the local and tribal officials had independently developed policies and procedures for gathering and reporting suspicious activities and they provided varying responses regarding how and to whom they would submit suspicious activities that may have a nexus to terrorism. Responses included reporting suspicious activities to a fusion center, the FBI, or another federal agency. Several local and tribal officials we contacted said they would report this information to the local federal official—e.g., Border Patrol, ICE, or the FBI—with whom they had developed a relationship. By defining reporting processes, federal, local, and tribal agencies would be in a better position to conduct more efficient collection and analysis of suspicious activities and share the results on a regional or national basis. 
Also, internal control standards call for management to ensure that there are adequate processes for communicating with and obtaining information from external stakeholders that may have a significant effect on the agency achieving its goals. The standards also state that information should be recorded and communicated to the entities that need it in a form and within a time frame that enables them to carry out their responsibilities. At the national level, the federal government is working with state and local law enforcement entities on the National Suspicious Activity Reporting Initiative to standardize the reporting of suspicious activities that may be related to terrorism. The long-term goal of the initiative is to develop and implement consistent national policies, processes, and best practices by employing a standardized, integrated approach to gathering, documenting, processing, analyzing, and sharing information about suspicious activity that is potentially related to terrorism. One of the immediate goals of the initiative is to help ensure that suspicious activity reports with a potential connection to terrorism are expeditiously provided by local and tribal law enforcement agencies to the FBI. As of September 2009, related pilot projects were ongoing at fusion centers in 12 major cities. According to the DOJ official who is overseeing the initiative, an evaluation of the pilots will be completed by late 2009, but fully implementing the initiative across the country could take up to 2 years. Until the National Suspicious Activity Reporting Initiative is fully implemented, additional federal agency efforts to establish defined processes for local and tribal officials in border communities to report suspicious activities could help ensure that information is collected and shared with the most appropriate entity. 
According to the director of I&A’s Border Security Division, senior intelligence officials at fusion centers in two of the five border states we contacted, and other subject matter experts—including federal and state officials who were involved in developing suspicious activity indicators for local and tribal agencies in border communities—the suspicious activity indicators could be more useful if they also contained terrorism-related behaviors, activities, or situations that were more applicable to the border or border crimes and were periodically updated to reflect current threats. Officials from three of the local law enforcement agencies we contacted also suggested that border-specific indicators would help them link potential terrorism-related activities to crimes they are more likely to encounter along the border, such as illegal immigration and currency smuggling. However, our review found that the suspicious activity indicators being utilized by the National Suspicious Activity Reporting Initiative and those developed by the FBI and fusion centers generally did not include indicators that were specific to the border. According to the DOJ official who was overseeing the implementation of the national initiative, the primary suspicious activity indicators that were validated by the law enforcement and intelligence community for use in the major city pilot projects were designed to be general and applicable to local and tribal officials located anywhere in the country. The official noted that the automated system that is being used by law enforcement agencies to record the suspicious activities during the pilot projects was designed to accommodate “sub-lists” that contain indicators that are applicable to specific sectors, such as the critical infrastructure sector. The official said that there was not a sub-list for border-specific indicators, but that he saw the potential for developing such a list. 
The official said that I&A would be the entity with the requisite expertise for developing such a list. In April 2009, I&A deployed an intelligence analyst from its Border Security Division to DHS’s Homeland Security Intelligence Support Team to develop terrorism indicators that are specific to the southwest border. According to the director of the Border Security Division, the analyst is looking for trends and patterns in terrorism-related incident reports that are generated by local and tribal law enforcement officials along the southwest border. The director said that I&A has not yet determined a final date for developing the suspicious activity indicators since there is a lot of information that has to be analyzed. The official noted that I&A is considering deploying another intelligence analyst to the northern border to perform similar analyses. According to the director of I&A’s Border Security Division, in his former position as a border analyst in the intelligence community, he worked with CBP and ICE to develop border-related indicators that were potential precursors to terrorist activities. The official noted the importance of periodically updating and consistently disseminating these indicators of terrorism-related behaviors, activities, or situations that reflect current border threats. According to Border Patrol and ICE headquarters and field personnel, neither agency had developed suspicious activity indicators that were specific to the borders. Additional DHS and FBI actions to develop, periodically update, and consistently disseminate indicators of terrorism-related activities that focus on border threats could help to maximize the utility of suspicious activity indicators as a counterterrorism tool in border communities. As discussed in the National Strategy for Information Sharing, state, local, and tribal government officials are critical to our nation’s efforts to prevent future terrorist attacks. 
Because these officials are often in the best position to identify potential threats that exist within their jurisdictions, they must be partners in information sharing that enhances situational awareness of border crimes and potential terrorist threats. In border communities, this partnership is particularly important because of the vulnerability to a range of criminal activity that exists along our nation’s borders. Therefore, a more robust effort by federal agencies to identify the information needs of local and tribal law enforcement agencies along the borders and periodically assess the extent to which partnerships exist and related mechanisms to share information are working—and fill gaps and address barriers where needed—could better enable federal agencies to provide useful information to their local and tribal partners that enhances situational awareness. The work of state-run fusion centers is also critical to the nation’s efforts to prevent terrorist attacks. Fusion centers in the border states we visited demonstrated a range of practices related to developing border intelligence products that could serve as a model for other fusion centers. By identifying and sharing these promising practices, DHS and the FBI could help strengthen the work of fusion centers nationally in addition to enhancing situational awareness of local and tribal law enforcement. Also, by working with the centers to obtain feedback on border intelligence products, DHS and the FBI could enhance the utility of those products that fusion centers share with local and tribal law enforcement agencies. 
Finally, until a national suspicious activity reporting process is in place, more consistently providing local and tribal officials in border communities with information on the suspicious terrorism-related activities they should report—including those related to border threats—and establishing processes for reporting this information could help ensure that critical information is reported and reaches the most appropriate agency to take action. To help ensure that local and tribal law enforcement agencies in border communities receive information from local federal agencies that enhances their situational awareness of border crimes and potential terrorist threats, we recommend that the Secretary of Homeland Security and the Director of the FBI, as applicable, require Border Patrol, ICE, and FBI offices in border communities to take the following two actions: (1) more consistently and fully identify the local and tribal agencies’ information needs and (2) periodically assess the extent to which partnerships and related mechanisms to share information exist, fill gaps as appropriate, and address barriers to establishing such partnerships and mechanisms. To promote future efforts to develop border intelligence products within fusion centers, we recommend that the Secretary of Homeland Security and the Director of the FBI collaborate with fusion centers to take the following two actions: (1) identify and market promising practices used to prepare these products and (2) take additional actions to solicit feedback from local and tribal officials in border communities on the utility and quality of the products generated. 
To maximize the utility of suspicious activity indicators as a counterterrorism tool, we recommend that the Secretary of Homeland Security and the Director of the FBI collaborate with fusion centers to take the following two actions: (1) take steps to ensure that local and tribal law enforcement agencies in border communities are aware of the specific types of suspicious activities related to terrorism that they are to report and the process through which they should report this information and (2) consider developing, periodically updating, and consistently disseminating indicators of terrorism-related activities that focus on border threats. On November 10, 2009, we provided a draft of this report to DHS and DOJ for comment. In its written response, DHS noted that CBP, ICE, and I&A are continuing and expanding efforts to share information. DHS agreed with all of our recommendations in this report. Specifically, DHS agreed with our recommendation related to the need for Border Patrol and ICE to (1) more fully identify the information needs of local and tribal agencies along the borders and (2) periodically assess the extent to which partnerships and related mechanisms to share information exist. For example, CBP agreed that a systematic and standardized process to disseminate information and receive feedback is vital to situational awareness for local and tribal law enforcement partners who are within the immediate areas adjacent to the border. CBP noted that Border Patrol plans to develop a list of individuals who will serve as liaisons to local and tribal agencies and also develop a list of local and tribal contacts. According to CBP, the Border Patrol liaisons will then make initial efforts to assess the information needs of the law enforcement partners and take other actions to determine and publish guidance on information sharing. To ensure that information shared is useful, Border Patrol plans to conduct annual surveys of its partners. 
Border Patrol envisions that this standardized process will be in place by the end of fiscal year 2010. When implemented, the Border Patrol’s actions should meet the intent of our recommendation. ICE also agreed with the recommendation and plans to work with CBP and the FBI to enhance local and tribal law enforcement agencies’ situational awareness, but ICE did not provide details on the specific actions it will take. I&A provided, or otherwise highlighted, additional information on the current status of information sharing among federal, state, and local agencies as it pertains to border security. DHS also agreed with our recommendation related to the need for DHS and the FBI to collaborate to (1) identify and market promising practices used to prepare border intelligence products within fusion centers and (2) take additional actions to solicit feedback from local and tribal officials on the utility and quality of the products generated. According to I&A—the DHS component that has the lead in addressing this recommendation—the department has initiated the creation of a broad Joint Fusion Center Program Management Office, which represents a departmentwide effort that seeks to more closely coordinate support to fusion centers with department component agencies, including CBP and ICE. I&A also noted that its intelligence specialists in fusion centers act as conveyors of information about promising practices for developing border intelligence products. Finally, I&A noted that the department hosts the Lessons Learned and Best Practices Web site that can be utilized to promote future efforts to develop border intelligence products within fusion centers. While these actions could potentially support DHS efforts to identify and market promising practices used to prepare border intelligence products within fusion centers, I&A did not provide any specific information on the extent to which such practices have been identified and marketed. 
I&A’s comments also did not address what actions, if any, are ongoing or planned to solicit feedback from local and tribal officials on the utility and quality of the products generated. ICE also agreed with the recommendation and noted that it will work with the FBI to implement it, but ICE did not provide details on the specific actions it will take. Finally, DHS agreed with our recommendation related to the need for DHS and the FBI to collaborate to (1) ensure that local and tribal law enforcement agencies in border communities are aware of the suspicious activities related to terrorism they are to report and the process for reporting this information and (2) consider developing and disseminating indicators of terrorism-related activities that focus on border threats. ICE agreed with the recommendation but deferred to I&A on the implementation specifics. I&A provided additional information on the status of the National Suspicious Activity Reporting Initiative and efforts to test and evaluate related policies, procedures, and technology. According to I&A, the evaluation phase of the initiative at participating sites concluded at the end of September 2009 and a final report will be issued that will document lessons learned and best practices. I&A noted that the initiative will then be transitioned from a preoperational environment to a broader nationwide implementation. However, as discussed in our report, the DOJ official who is overseeing the initiative noted that the nationwide implementation could take up to 2 years. Therefore, our recommendation is intended for DHS and the FBI to take interim actions until the national initiative is fully implemented, such as more consistently providing local and tribal officials in border communities with information on the suspicious terrorism-related activities they should report—including those related to border threats—and establishing processes for reporting this information. 
The full text of DHS's written comments is reprinted in appendix II. DHS also provided technical comments, which we incorporated in this report where appropriate. On December 8, 2009, DOJ’s Audit Liaison Office, within the Justice Management Division, stated by e-mail that the department will not be submitting technical or formal comments on the draft report. As agreed with your office, we plan no further distribution of this report until 30 days from its date, unless you publicly announce its contents earlier. At that time, we will send copies to the Secretary of Homeland Security, the Attorney General, and other interested parties. In addition, this report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8777 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff that made major contributions to this report are listed in appendix III. The objectives of our review were to determine the extent to which (1) local and tribal law enforcement agencies in border communities are receiving information from their federal partners that enhances the agencies’ situational awareness of border crimes and potential terrorist threats; (2) federal agencies are assisting fusion centers’ efforts to develop border intelligence products that enhance local and tribal agencies’ situational awareness of border crimes and potential terrorist threats; and (3) local and tribal law enforcement agencies in border communities are aware of the specific types of suspicious activities related to terrorism they are to report and to whom, and the process through which they should report this information. 
To identify criteria for answering these questions, we analyzed relevant laws, directives, policies, and procedures related to information sharing, such as the October 2007 National Strategy for Information Sharing and the Implementing Recommendations of the 9/11 Commission Act of 2007 (9/11 Commission Act). The 9/11 Commission Act provides for the establishment of a State, Local, and Regional Fusion Center Initiative at the Department of Homeland Security (DHS) and contains numerous provisions that address the federal government’s information sharing responsibilities to state and local fusion centers, including those that serve border communities. To examine the information sharing that occurs between local and tribal law enforcement agencies in border communities and federal agencies that have a local presence in these communities—U.S. Customs and Border Protection’s Border Patrol and U.S. Immigration and Customs Enforcement (ICE), and the Federal Bureau of Investigation (FBI)—we conducted site visits to five states that are geographically dispersed along the northern and southwest borders (Arizona, Montana, New York, Texas, and Washington). 
Within these states, we selected a nonprobability sample of 23 local and tribal law enforcement agencies to visit based on one or more of the following characteristics: locations known to be or suspected of being particularly vulnerable to illegal entry or criminal activity; land ports of entry with heavy inbound passenger traffic; locations in proximity to areas at the border where there is little or no continuous federal border enforcement presence; locations that include Native American tribal communities with lands that abut the border; locations where federal, state, and local communities have in the past worked, or are currently working, with federal agencies to support border security either informally or through pilot programs for sharing information; locations in proximity to federal agencies at the border; and geographically dispersed locations along the northern and southwest land borders. We met with county sheriffs, local police chiefs, and tribal police chiefs from the 23 law enforcement agencies and asked them about the information they received from federal agencies in their localities. We also asked whether federal officials had discussed local and tribal officials’ information needs and had established information sharing partnerships and related mechanisms to share information with them—consistent with the National Strategy for Information Sharing and best practices described in GAO reports. After our visits, we sent follow-up questions to all 23 local and tribal agencies we visited in order to obtain consistency in how we requested and obtained information for reporting purposes. Three agencies did not respond to our follow-up efforts and were excluded from our analysis. Thus, our analysis and reporting are based on our visits and subsequent activities with the 20 local and tribal agencies that responded to our follow-up questions. 
We also met with local representatives of Border Patrol, ICE, and the FBI to discuss their perspectives on the information sharing that occurred, and compared this information to that provided by local and tribal agencies in order to identify barriers to sharing and related causes. Because we selected a nonprobability sample of agencies in border communities to contact, the information we obtained at these locations may not be generalized across the wider population of law enforcement agencies in border communities. However, because we selected these border communities based on the variety of their geographic location, proximity to federal agencies, and other factors, the information we gathered from these locations provided us with a general understanding of information sharing between federal agencies and state, local, and tribal law enforcement agencies along the border. To assess the extent to which federal agencies assisted fusion centers in developing border intelligence products, as discussed in the 9/11 Commission Act, we reviewed products developed by fusion centers to determine the extent to which they provided border security–relevant information. We also met with and conducted subsequent follow-up conversations with fusion center directors and other senior fusion center officials in the five states we visited (Arizona, Montana, New York, Texas, and Washington) and obtained their views on the importance of developing such products and on the level of support federal agencies were providing in developing these products. We asked each of the 20 local and tribal law enforcement agencies we contacted whether they received border intelligence products from their state’s primary fusion center and, if so, we discussed their views on the usefulness of such products. 
We also interviewed senior officials from DHS’s Office of Intelligence and Analysis—the office responsible for coordinating the federal government’s support to fusion centers—and headquarters and field components of Border Patrol, ICE, and the FBI to discuss their efforts to support fusion centers’ development of border intelligence products, identify promising practices for developing such products, and obtain feedback from local and tribal officials on the usefulness of the products. We also reviewed applicable documents that address fusion centers, including the 9/11 Commission Act, the National Strategy for Information Sharing, fusion center guidelines, and DHS planning documents and reports. Finally, to determine the extent to which local and tribal agencies in border communities were aware of the suspicious activities they are to report, we asked officials from the 20 agencies what, if any, information federal agencies or fusion centers had provided them on the kinds of suspicious activities that could be indicators or precursors to terrorism and what processes they had in place for reporting information on these activities. In general, suspicious activity is defined as observed behavior or incidents that may be indicative of intelligence gathering or preoperational planning related to terrorist, criminal, espionage, or other illicit intentions. We also reviewed the Findings and Recommendations of the Suspicious Activity Report (SAR) Support and Implementation Project to determine the extent to which the federal government recognizes the role of suspicious activity reporting for detecting and mitigating potential terrorist threats. We compared the processes for reporting suspicious activities with GAO’s Standards for Internal Control in the Federal Government. We also examined indicators of various suspicious activities the FBI and fusion centers developed to determine if they contained border-specific content. 
We interviewed Department of Justice officials who were leading the national initiative to standardize suspicious activity reporting—as well as those from headquarters components of DHS and the FBI—to discuss the status of the national initiative and whether border-specific indicators were needed and are being considered as part of this initiative. We conducted this performance audit from October 2007 through December 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the person named above, Eric Erdman, Assistant Director; Frances Cook; Cindy Gilbert; Kristen Hughes; Christopher Jones; Thomas Lombardi; Ronald Salo; Edith Sohna; Adam Vogt; and Maria Wallace made key contributions to this report.

Information is a crucial tool in securing the nation's borders against crimes and potential terrorist threats, with the Department of Homeland Security's (DHS) Border Patrol and Immigration and Customs Enforcement (ICE), and the FBI, having key information sharing roles. GAO was asked to assess the extent to which (1) local and tribal officials in border communities received useful information from their federal partners, (2) federal agencies supported state fusion centers'--where states collaborate with federal agencies to improve information sharing--efforts to develop border intelligence products, and (3) local and tribal agencies were aware of the suspicious activities they are to report. 
To conduct this work, GAO analyzed relevant laws, directives, policies, and procedures; contacted a nongeneralizable sample of 20 agencies in border communities and five fusion centers (based on geographic location and size); and interviewed DHS and FBI officials. Officials from 15 of the 20 local and tribal law enforcement agencies in the border communities GAO contacted said they received information directly from at least one federal agency in the vicinity (Border Patrol, ICE, or the FBI) that was useful in enhancing their situational awareness of border crimes and potential terrorist threats. Nine of the 20 agencies reported receiving information from all three federal agencies. Overall, where federal officials had discussed local and tribal officials' information needs and had established information sharing partnerships and related mechanisms to share information with them--consistent with the National Strategy for Information Sharing and best practices--the majority of the local and tribal officials reported receiving useful information. However, most local and tribal officials who reported that federal agencies had not discussed information needs or established partnerships with them also said they had not received useful information. By more fully identifying the information needs of local and tribal agencies along the borders and establishing information sharing partnerships, federal agencies could be better positioned to provide local and tribal agencies with information that enhances their situational awareness of border crimes and potential terrorist threats. Federal officials at two of the five state fusion centers we visited were supporting fusion center efforts to develop border intelligence products or reports that contained information on border crimes and potential terrorist threats, as discussed in the Implementing Recommendations of the 9/11 Commission Act of 2007. 
DHS recognizes that it needs to add personnel to other fusion centers in border states to, among other things, support the creation of such products, and is developing plans to do so, but cited funding issues and competing priorities as barriers. Border intelligence products--such as those developed by two of the fusion centers we visited--represent potential approaches that DHS and the FBI could use to identify promising practices that other fusion centers could adopt. Identifying such practices is important because of the central role the federal government places on fusion centers to facilitate the sharing of information. Also, DHS had not obtained feedback from local and tribal officials on the utility and quality of the border intelligence products that its analysts in fusion centers have helped to develop. Additional efforts to obtain such feedback would support DHS and FBI efforts to improve the utility and quality of future products. Officials from 13 of the 20 local and tribal agencies in the border communities we contacted said that federal agencies had not defined what suspicious activities or indicators rise to the level of potential terrorist threats and should be reported to federal agencies or fusion centers. Recognizing this problem, federal agencies are participating in national efforts to standardize suspicious activity reporting. Until such efforts are implemented, defining suspicious activity indicators and current reporting processes would help better position local and tribal officials along the borders to identify and report incidents indicative of criminal activity associated with terrorist threats.
DOD established the FIAR Plan as its strategic plan and management tool for guiding, monitoring, and reporting on the department’s ongoing financial management improvement efforts and for communicating the department’s approach to addressing its financial management weaknesses and achieving financial statement audit readiness. To implement the FIAR Plan, the DOD Comptroller issued the FIAR Guidance, which defines DOD’s strategy, goals, roles, and responsibilities and the procedures that the components need to perform to improve financial management and achieve audit readiness. DOD components are expected to prepare a FIP in accordance with the FIAR Guidance for each of their assessable units. The FIPs are intended to both guide and document financial improvement efforts. While the name FIP indicates that it is a plan, as a component implements that plan, it must document the steps performed and the results of those steps, and retain that documentation within the FIP. When a component determines that it has completed sufficient financial improvement efforts for an assessable unit to undergo an audit, it asserts audit readiness for the related assessable unit and submits the FIP documentation to the FIAR Directorate to support the conclusion of audit readiness. The FIAR Directorate is responsible for reviewing and validating the supporting documentation within the FIP to determine whether the component is audit ready. DOD’s service providers are responsible for a variety of accounting, personnel, logistics, and system development or operations services to support DOD components. Recognizing that the effectiveness of the service providers’ controls affects the auditability of the amounts reported on the components’ financial statements, DOD’s FIAR Guidance outlines the steps service providers are to perform to achieve audit readiness. 
Specifically, the FIAR Guidance requires service providers to work with the components to execute audit readiness activities on their systems, data, processes, internal controls, and supporting documentation that have a direct effect on the components’ audit readiness state. To support the component audit readiness efforts, a service provider is required to take either of the following steps: Develop and implement a FIP to improve its processes, systems, and controls so that it can successfully undergo a Statement on Standards for Attestation Engagements (SSAE) No. 16 examination. Specifically, the FIAR Guidance requires the service provider to implement a FIP if three or more components will rely on its processes and systems for their audit readiness assertions, and if the service provider will be able to assert audit readiness prior to the components’ targeted dates for asserting audit readiness. Directly participate in and support the component’s financial statement audit where the service provider’s processes, systems, internal controls, and supporting documentation are audited as part of the components’ financial statement audits. The FIAR Guidance service provider methodology requires the FIP to include the following five phases: Discovery, Corrective Action, Assertion/Evaluation, Validation, and SSAE No. 16 Examination. Table 1 provides a list of steps for each of the phases and the required deliverables. As presented in table 1, the service provider documents, evaluates, and tests its processes, systems, and controls during the Discovery Phase of its FIP, and designs and implements the necessary corrective action plans as part of the Corrective Action Phase. The deliverables from the service provider are then reviewed by the FIAR Directorate during the Assertion/Evaluation Phase. 
Based on its review of the deliverables, the FIAR Directorate determines whether the service provider is audit ready and, if so, authorizes the service provider to engage an auditor to perform an SSAE No. 16 examination. If the FIAR Directorate determines that the service provider is not audit ready, the FIAR Directorate provides feedback, which the service provider has to address before resubmitting the required deliverables for review. After the auditor completes the SSAE No. 16 examination and issues the report, the service provider submits a copy of the SSAE No. 16 examination report to the FIAR Directorate and evidence that it has implemented corrective actions to remediate the deficiencies identified by the auditor, if any. As part of the Validation Phase, the FIAR Directorate reviews the SSAE No. 16 report and supporting documentation of the implemented additional corrective actions to determine if the service provider is ready for a second SSAE No. 16 examination and, if so, authorizes the service provider to engage an auditor to perform a second SSAE No. 16 examination. If the service provider receives an unqualified opinion on the first SSAE No. 16 examination, the FIAR Directorate will not require the service provider to undergo a second audit as part of the SSAE No. 16 Examination Phase. Figure 1 illustrates a summary of the process in the FIAR Guidance related to the submission, review, and approval of the service providers’ documentation for audit readiness. DFAS is the service provider responsible for processing, accounting, and reporting contract pay for DOD components. Figure 2 illustrates the relevant systems and end-to-end process, which includes contract input, invoice entitlements, pre-validation, disbursing, Treasury reporting, accounting and reconciliation, and contract closeout and reconciliation processes. 1. 
Contract input: The components electronically transmit contract award data and related document images through their contract writing systems into the Mechanization of Contract Administration Services (MOCAS) system. DFAS reported that some contract awards are issued with manually produced documents, which the components mail or fax to DFAS for input into MOCAS. DFAS’s Contract Input Branch personnel validate the contract data before inputting them into MOCAS. 2. Invoice entitlements: Contractors electronically transmit invoices to DFAS for payment processing in MOCAS; however, if these invoices do not pass a series of automatic validation edits in MOCAS, they are rejected by the system. DFAS’s Entitlement Branch personnel process these transactions utilizing the Entitlement Automation System (EAS). MOCAS performs edits to validate the invoices in MOCAS or EAS and compares the contract obligations, invoices, and receiving reports. DFAS’s Entitlement Branch personnel also utilize the Business Activity Monitoring (BAM) tool during the entitlement process to monitor and validate the contractors’ invoices. The BAM tool is a monitoring capability that DFAS uses to identify potential erroneous or improper payments. 3. Pre-validation: The Elimination of Unmatched Disbursements (EUD) system transmits invoice data to the components’ accounting systems. The components review the invoice data transmitted by EUD and approve the invoices for payment. 4. Disbursing: Once the components approve the invoices for payment, the components notify DFAS Disbursing Operations personnel, who input the approval status into MOCAS. MOCAS processes the approved invoices to be paid either by check or electronic funds transfer (EFT). MOCAS generates a disbursement file that identifies all invoices to be paid. A certifying official reviews the disbursement file for accuracy prior to payment being made. 
After approval by the certifying official, DFAS’s Disbursing Operations personnel either mail the checks to contractors or transmit the EFT file to the Federal Reserve Bank to make the payment. 5. Treasury reporting: Once the disbursements are processed, MOCAS interfaces with the Defense Cash Accountability System (DCAS), which is the system used by DFAS to generate and submit monthly reports on contract pay disbursements to the Department of the Treasury (Treasury). 6. Accounting and reconciliation: DFAS’s Contract Branch personnel generate a disbursement file from MOCAS that is provided to the components to record the contract disbursements into their general ledgers. DFAS is also responsible for the reconciliation of the disbursement transactions in MOCAS to the components’ general ledgers; however, DFAS has yet to implement this process. 7. Contract closeout and reconciliation: DFAS’s Contract Branch personnel assist the components during the contract closeout and reconciliation processes, for example, with paying final vouchers and, when needed, resolving unreconciled balances on a contract. DFAS officials explained that they utilize the Standard Contract Reconciliation Tool (SCRT) to investigate differences in contract payment data between MOCAS and the components’ general ledgers upon request from the components and to process the necessary adjustments. Most of these requests are submitted to DFAS from the components during the contract closeout procedures. DFAS recognized the importance of implementing a FIP to improve its contract pay processes, systems, and controls, and performed steps required by the FIAR Guidance, such as performing internal control, information technology (IT), and substantive testing. However, we found that DFAS did not fully comply with the requirements in the FIAR Guidance to improve its contract pay processes, systems, and controls. 
For example, our review found that DFAS did not perform adequate planning and testing activities for the Discovery Phase of its FIP. In addition, DFAS did not provide adequate documentation for several corrective action plans to support that it has remediated identified control deficiencies. DFAS asserted in October 2013 that its contract pay controls were suitably designed and operating effectively to undergo an audit, and awarded a contract to an independent public accounting firm prior to fully remediating the deficiencies it identified during the implementation of its contract pay FIP. Without fully implementing the financial improvement steps required in the FIAR Guidance, DFAS does not have assurance that its processes, systems, and controls can produce and maintain accurate, complete, and timely financial management information for contract pay. Further, the deficiencies noted will affect the components’ ability to rely on DFAS’s controls over contract pay, ultimately increasing the risk that DOD’s goal for an auditable SBR will not be achieved in its planned time frame. Figure 3 provides a summary of the results of our review of DFAS’s contract pay FIP. DFAS developed flowcharts and narratives and performed internal control, substantive, and IT testing. Based on the testing performed during the Discovery Phase, DFAS identified a total of 399 deficiencies. Specifically, DFAS identified 20 internal control deficiencies and 379 IT control deficiencies—20 related to general controls and 359 related to application controls. However, we found that DFAS did not (1) adequately perform the required planning activities for its contract pay FIP, such as assessing the materiality of its processes and systems; (2) adequately perform the required testing; and (3) properly classify the identified deficiencies. As a result, additional deficiencies may exist that could negatively affect DFAS processes, systems, and controls that are relied upon by DOD components. 
DFAS developed a high-level end-to-end flowchart for contract pay that identified seven key processes and prepared detailed flowcharts and narratives for four of these seven key processes. However, DFAS did not perform all activities required by the FIAR Guidance. Specifically, based on our review of the contract pay FIP, DFAS did not: prepare a memorandum of understanding for each of the DOD components that documented roles and responsibilities for transactions, supporting documentation retention, and audit readiness activities; prepare detailed flowcharts and narratives for three of the seven key processes: (1) reporting of disbursements to Treasury, (2) accounting and reconciliation of contract pay disbursements to the components’ general ledgers, and (3) contract closeout; and assess the materiality of its processes and systems based on dollar activity and risk factors. DFAS officials stated that they coordinated with the DOD components to develop the contract pay FIP; however, DFAS did not maintain meeting minutes and was unable to provide documentation to support the components’ input or concurrence with the decisions made. DFAS is developing a Concept of Operations (CONOPS), to supplement the existing mission work agreements it has established with each component, in order to comply with the FIAR Guidance requirement that service providers develop a memorandum of understanding. However, DFAS has not established a time frame for when the CONOPS will be completed. In addition, our review of the draft CONOPS and existing mission work agreements showed that they do not address all the requirements reflected in the FIAR Guidance. 
For example, these documents do not: identify the roles and responsibilities for authorizing, initiating, processing, recording, and reporting of transactions; identify the roles and responsibilities for the creation, completion, and retention of supporting documentation; and identify the supporting documentation that should be retained for each business process and transaction type. DFAS officials stated that they did not assess materiality and risk level for determining what processes, systems, and controls needed to be included in DFAS’s contract pay FIP because their approach consisted of including in the FIP the processes and systems that were common to three or more components. By applying this approach, they determined that the three processes that were excluded were used by two or fewer components. For example, each client has a different general ledger system; therefore, DFAS did not consider the general ledger reconciliation process to be a common service. However, this approach did not comply with the requirements in the FIAR Guidance, which requires service providers to determine the processes to be covered in the FIP based on whether the process is critical to the audit readiness efforts as defined by both materiality and risk. As a result, and as shown in figure 4, DFAS excluded from the FIP three of its key contract pay processes: (1) reporting of disbursements to Treasury, (2) accounting and reconciliation of contract pay disbursements to the components’ general ledgers, and (3) contract closeout. These processes excluded by DFAS from its FIP are intended to help ensure that the contract disbursements processed by DFAS are accurately recorded and maintained in the components’ general ledgers and that the status of DOD’s contract obligations is accurate and up-to-date. At the time of the implementation of its contract pay FIP, DFAS had not established a general ledger reconciliation process. 
DOD’s Financial Management Regulation (FMR) requires DFAS to reconcile disbursement transactions to the components’ general ledger, and the FIAR Guidance notes that the DOD components will not be able to successfully pass an audit without transaction-level reconciliation to the general ledger. Standards for Internal Control in the Federal Government states that control activities such as reconciliations are an integral part of an entity’s planning, implementing, reviewing, and accountability for stewardship of government resources and achieving effective results. DFAS officials explained that DFAS is evaluating the three processes excluded from its contract pay FIP for each of the components to support their audit readiness efforts and that they will provide the results of these efforts to the affected components before the components assert audit readiness for contract pay. Specifically, these officials indicated that they have established a general ledger reconciliation process and plan to evaluate it and the other two processes (i.e., the reporting of disbursements to Treasury and contract closeout processes) in support of the Departments of the Navy, Air Force, and Army with a completion date of June 2014. However, DFAS did not provide sufficient documentation for us to assess the scope and methodology of these efforts or to confirm the completion status. Without an adequately scoped and planned FIP, DFAS will not be able to ensure that it is covering all key processes that will materially affect the timeliness, accuracy, and reliability of its contract pay transaction data. As a result, even though DFAS has already asserted audit readiness, DFAS does not have assurance that its FIP will satisfy the needs of the components or provide the expected benefits to the department-wide efforts to assert audit readiness for contract pay as a key element of the SBR. 
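The transaction-level reconciliation that the FMR calls for can be illustrated with a short sketch. The record layout and field names below are hypothetical, not the actual data model of MOCAS or any component's general ledger system; the point is only the matching logic: every disbursement should appear in the ledger at the same amount, and every ledger entry should trace back to a disbursement.

```python
def reconcile(disbursements, general_ledger):
    """Match disbursement transactions to general ledger entries by
    document number and flag differences for investigation.
    Record layout is a hypothetical simplification."""
    gl_by_doc = {entry["doc_no"]: entry for entry in general_ledger}
    unmatched, amount_diffs = [], []
    for txn in disbursements:
        gl_entry = gl_by_doc.pop(txn["doc_no"], None)
        if gl_entry is None:
            unmatched.append(txn["doc_no"])     # disbursed but never recorded
        elif gl_entry["amount"] != txn["amount"]:
            amount_diffs.append(txn["doc_no"])  # recorded at a different amount
    # Anything left in gl_by_doc was recorded but has no matching disbursement.
    return unmatched, amount_diffs, sorted(gl_by_doc)

disb = [{"doc_no": "A1", "amount": 500}, {"doc_no": "A2", "amount": 75}]
gl = [{"doc_no": "A1", "amount": 500}, {"doc_no": "A3", "amount": 20}]
print(reconcile(disb, gl))  # (['A2'], [], ['A3'])
```

Each category of difference would then be researched against supporting documentation, which is why comparing a system of record to a copy of itself (as with MOCAS and SDW) provides no assurance.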
DFAS performed both internal control and substantive testing; however, DFAS did not validate the populations of transactions used to perform the testing. Therefore, DFAS’s test results cannot be generalized to support the assertion that its controls, and its transaction activities and balances, are audit ready. The FIAR Guidance requires service providers to validate the population of transactions to be tested prior to performing internal control and substantive testing by reconciling the population to the general ledger and assessing it for invalid transactions, abnormal balances, and missing data fields. As noted earlier, at the time of the implementation of its contract pay FIP, DFAS had not established a general ledger reconciliation process. In response to our inquiries, DFAS officials stated that they had validated the populations and provided to us a copy of a data reliability assessment. According to the FIAR Guidance, a data reliability assessment is intended to document a comparison of the transaction data to the components’ general ledgers and data mining performed to identify any outliers. However, the data reliability assessment provided by DFAS did not contain such a comparison or address data mining activities. Instead, the data reliability assessment provided background information on the Shared Data Warehouse (SDW), which is the database used by DFAS to generate the samples of transactions tested. SDW was developed by DFAS as a tool to generate reports for the disbursements recorded in MOCAS because MOCAS has limited query capabilities. As a result, SDW is used by DFAS to store contract administration and payment data collected from MOCAS, conduct queries, and produce reports. Because SDW is a database that stores data from MOCAS, this comparison is not an adequate reconciliation and, in essence, represents a comparison of the transactions recorded in MOCAS to MOCAS itself. 
An effective reconciliation process would involve comparing transactions to supporting documentation, systems of record, or both to ensure the completeness, validity, and accuracy of financial information. Even if DFAS had performed an adequate reconciliation process, according to the data reliability assessment that DFAS provided, the population of transactions validated by DFAS only covered the disbursements for 1 day, not the population of data for the entire fiscal year that was used by DFAS to select the samples that were tested. DFAS did not identify any deficiencies related to its substantive testing of the contract disbursements recorded in MOCAS and identified 20 deficiencies related to its internal control testing. However, because DFAS did not validate the population used to perform internal control and substantive testing, additional deficiencies may exist in DFAS’s contract pay controls and errors may exist in the recorded transactions activity and balances. We found that DFAS did not perform sufficient general and application controls testing. Further, DFAS did not develop an audit plan or strategy for its application-level testing. As a result, DFAS did not have support for the scope of its application-level testing, such as its rationale for excluding a significant number of the controls from the testing of several of the systems DFAS classified as key for contract pay, even though the FIAR Guidance requires consideration of such controls. For the controls it did test, DFAS found numerous deficiencies that needed to be addressed. Specifically, DFAS found issues with 20 entity-level general controls and 359 application-level controls. General controls: DFAS tested 122 of the 261 entity-level general controls identified in the FIAR Guidance; however, it did not determine whether the remaining 139 controls were relevant and should have been tested. 
DFAS officials told us that they decided to focus the entity-level testing on the 122 controls identified by the FIAR Guidance as having the highest relevance for a financial statement audit because of resource constraints. Based on the entity-level controls that were tested, DFAS identified 20 general control deficiencies at the entity level that were related to either the design or operation of controls, such as inappropriate segregation of duties and inadequate monitoring of system access privileges. However, because of the limited testing performed, additional deficiencies may exist that were not identified. DFAS officials acknowledged that they needed to assess the other 139 entity-level controls and planned to perform such an assessment during fiscal year 2014. However, as stated previously, DFAS asserted in October 2013 that its contract pay process was audit ready and did so without having assessed these 139 entity-level controls. Without effective entity-level general controls, application-level controls may be rendered ineffective by circumvention or modification. As a result, these deficiencies can materially affect the effectiveness of DFAS application-level controls. For example, edits designed to preclude users from entering unreasonably large dollar amounts in a payment processing system can be an effective application control. However, this control cannot be relied on if the general controls permit unauthorized program modifications that might allow some payments to be exempt from the edit. Application-level controls: DFAS performed application-level testing for the six system applications it determined to be key to its contract pay systems. However, DFAS did not develop audit plans or strategies to guide its application-level control testing for all six systems and did not perform sufficient testing for three of its systems—BAM, SCRT, and EUD-Accounting Pre-validation Module (APVM).
The FIAR Guidance requires service providers to follow the Federal Information System Controls Audit Manual (FISCAM) to test the IT controls of the systems and applications that are necessary to achieve audit readiness. FISCAM requires a written audit program or strategy that describes the objective, scope, and methodology for the testing of IT controls. Entities are required to use the information documented in the audit plan or strategy to determine the nature, timing, and extent of the IT test procedures. DFAS officials explained that they did not document a plan or strategy for application-level controls because they were performing self-assessments and not audits. They also stated that some of their staff members did not know how to perform a FISCAM audit and that this was a learning experience. However, the FIAR Guidance requires DOD components to follow a process similar to an audit to obtain sufficient evidence that the organization is audit ready. DFAS officials stated that they recognized that the assessments could be improved, but noted that the FIAR Directorate had validated the results of its application-level testing. In addition, DFAS did not perform sufficient application-level testing for BAM, SCRT, and EUD-APVM. Out of the 163 controls required by the FIAR Guidance to be considered for each system, DFAS tested 40 controls for EUD-APVM, 32 for BAM, and 9 for SCRT. DFAS provided us a document to support how it selected the key controls that were tested for these systems and its reasoning for excluding from the testing most of the controls that are required by the FIAR Guidance. However, this document did not adequately support DFAS’s scope and methodology for testing these systems.
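The coverage shortfall described above can be made concrete with a small calculation. This sketch uses only the figures cited in this report (163 required controls per system; 40, 32, and 9 tested); the percentage computation itself is illustrative.

```python
# Hedged sketch of the application-level test coverage figures cited in
# the report: of 163 FIAR-required controls per system, DFAS tested 40
# (EUD-APVM), 32 (BAM), and 9 (SCRT). System names come from the report;
# the arithmetic below is only an illustration of the gap.
REQUIRED = 163
tested = {"EUD-APVM": 40, "BAM": 32, "SCRT": 9}

# Share of required controls actually tested, in percent
coverage = {sys: round(n / REQUIRED * 100, 1) for sys, n in tested.items()}
# Count of required controls excluded from testing
untested = {sys: REQUIRED - n for sys, n in tested.items()}
print(coverage)
print(untested)
```

Even for the best-covered system, roughly three-quarters of the required controls were excluded from testing.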
For example, the document stated that either limited or no testing was performed of certain control areas, such as the application-level general controls for Security Management and Contingency Planning, because those controls were tested at the entity or system level; however, DFAS’s review of entity-level controls did not cover any application-related controls. Further, as stated earlier, DFAS did not perform sufficient testing of its entity-level controls. Although the Defense Information Systems Agency (DISA)—which is responsible for the mainframe platforms where DFAS’s contract pay systems are executed and maintained—received an unqualified opinion on its SSAE No. 16 examination, this examination did not cover DFAS’s application-level controls. DISA’s SSAE No. 16 report also recognized the need for its user entities to implement complementary controls in different areas, including backup and recovery management. As a result, the application-level testing performed by DFAS for BAM, SCRT, and EUD-APVM was not sufficient and did not comply with the FIAR Guidance. Based on its limited testing of application-level controls, DFAS identified a total of 359 deficiencies. For example, DFAS found deficiencies in its access controls, such as a lack of processes to ensure that users’ system access is authorized and limited to job responsibilities. DFAS also found a lack of adequate policies and procedures to ensure proper segregation of duties and related monitoring processes. Because DFAS did not use a documented plan or strategy, and did not have adequate evidence on whether its application-level control testing was adequately designed, it did not obtain the necessary assurance that its contract pay data are valid, complete, and accurate.
This increases the risk that additional deficiencies exist that were not identified during the application-level testing, which in turn hinders DFAS’s ability to remediate existing deficiencies, thus adversely affecting audit readiness. DFAS did not coordinate and work with the components to assess the impact of the identified deficiencies on the components’ audit readiness efforts and classify the deficiencies as control deficiencies, significant deficiencies, or material weaknesses as required by the FIAR Guidance. DFAS officials explained that they classified the identified deficiencies into high-, medium-, or low-risk categories based on their assessment of the risk of DFAS not being able to achieve its control objectives. These officials indicated that they did not follow the FIAR Guidance for risk classification because SSAE No. 16 states that the service provider will not be able to determine the impact of the identified deficiencies on the components’ financial statements. DFAS officials also stated that in order for them to classify the deficiencies as control deficiencies, significant deficiencies, or material weaknesses as required by the FIAR Guidance, they would need to obtain information from the components regarding their processes and controls affected by the identified deficiencies. The FIAR Guidance recognizes that this coordination is needed to determine the effect of the identified deficiencies on the components’ financial statements, which is the intent of DOD’s overall FIAR effort. Further, the FIAR Guidance states that because of the complexities inherent in DOD component and service provider relationships and associated audit readiness interdependencies, it is essential that such coordination be documented in a memorandum of understanding. While an SSAE No.
16 examination is intended to provide assurance regarding the control environment of the service providers, the FIAR effort is intended, among other things, to provide assurance that the components are ready for a financial statement audit. To do this, the components must be aware of the impact of the deficiencies in the service provider’s control environment so that they can assess their risks and identify and implement compensating controls if needed. Because DFAS did not adequately classify the identified deficiencies and assess their related impact to the components, DOD components will not be able to obtain a complete understanding of the impact of the deficiencies identified by DFAS on their own control environments and design and implement compensating controls to mitigate the effect of DFAS’s control deficiencies on their financial operations. DFAS notified the FIAR Directorate that it had implemented the necessary corrective action plans and developed an audit readiness strategy; however, we found that DFAS did not (1) take the necessary corrective actions or maintain sufficient documentation for 18 of 25 deficiencies DFAS reported as remediated that we reviewed and (2) properly update the Corrective Action Phase section of its FIP status report. DFAS’s audit strategy consisted of its contract pay FIP undergoing an SSAE No. 16 examination and, as stated earlier, DFAS evaluating the three processes excluded from its contract pay FIP for each of the components to support their audit readiness efforts. However, DFAS did not provide documentation (an updated CONOPS or memorandum of understanding) to show that it had coordinated with the components to determine how it would support their audit readiness efforts for those processes excluded from the FIP as required by the FIAR Guidance. 
Further, additional deficiencies may exist in DFAS’s contract pay processes and systems that were not considered during the Corrective Action Phase because, as discussed previously, DFAS did not (1) validate the population used to perform internal control and substantive testing and (2) perform sufficient general control and application-level testing. As a result of these deficiencies, DFAS’s contract pay FIP did not provide sufficient assurance that all the deficiencies that may materially affect the accuracy and reliability of its contract pay transaction data had been fully remediated. The FIAR Directorate reviewed DFAS’s supporting documentation for its contract pay FIP and authorized DFAS to undergo an SSAE No. 16 examination. DFAS reported that it had developed and implemented corrective actions to remediate 393 of the 399 deficiencies it identified as part of the Discovery Phase. DFAS officials stated that for the 6 deficiencies that were not remediated as part of the contract pay FIP, DFAS will either address the deficiencies subsequent to its audit readiness assertion or rely on other components to address these deficiencies. The FIAR Guidance requires service providers to remediate each identified deficiency before asserting that they are audit ready. In addition, 2 of these 6 deficiencies were determined by the FIAR Directorate to be material. However, DFAS did not provide evidence that these deficiencies were remediated before asserting audit readiness for contract pay. We selected a nongeneralizable sample of 25 control deficiencies DFAS reported as remediated to determine whether DFAS had adequately implemented corrective actions to remediate the identified deficiencies. Of these 25 deficiencies, we found that DFAS had adequately developed and implemented the necessary corrective action plans for 7. We found the following for the remaining 18 deficiencies: For 3 deficiencies, DFAS did not develop corrective action plans.
For example, DFAS reported 1 of these deficiencies as closed because it planned to rely on the Defense Contract Management Agency (DCMA) to remediate the identified weaknesses. Although DFAS provided documentation of DCMA’s agreement to address this deficiency, DFAS did not provide documentation to support that this deficiency had been remediated by DCMA. In addition, DFAS reported as closed 2 deficiencies related to the reconciliation of its contract pay activity with the components’ general ledger because, as stated earlier, it decided not to address this reconciliation as part of its contract pay FIP. DOD’s FMR and the FIAR Guidance require DFAS to reconcile disbursement transactions to the components’ general ledgers, and the FIAR Guidance notes that DOD components will not be able to successfully pass an audit without transaction-level reconciliation to their general ledgers. Standards for Internal Control in the Federal Government states that control activities such as reconciliations are an integral part of an entity’s planning, implementing, reviewing, and accountability for stewardship of government resources and achieving effective results. For 8 deficiencies, the corrective action plans developed by DFAS were not adequate. Corrective action plans should include, among other things, the responsible point of contact, the root causes of the deficiency, and resource needs. However, these corrective action plans did not adequately describe the root causes of the identified deficiencies that needed to be corrected. For example, half of these corrective action plans only described the control requirements from FISCAM but did not describe the underlying root cause of the deficiencies identified by DFAS. As a result, these corrective action plans do not provide sufficient information to perform an independent review to determine whether an implemented corrective action remediated the identified deficiency.
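A corrective action plan that omits its root cause cannot support independent review. As a rough illustration, a completeness check over the minimum elements named above (responsible point of contact, root cause, and resource needs) might look like the following; the plan records and field names are hypothetical, not DFAS data.

```python
# Hedged sketch: flag corrective action plans that are missing any of the
# minimum elements the report names (point of contact, root cause,
# resource needs). The plan records below are invented examples.
REQUIRED_ELEMENTS = ("point_of_contact", "root_cause", "resource_needs")

def incomplete_plans(plans):
    """Return IDs of plans missing or leaving blank any required element."""
    return [pid for pid, plan in plans.items()
            if any(not plan.get(e) for e in REQUIRED_ELEMENTS)]

plans = {
    "CAP-01": {"point_of_contact": "J. Doe",
               "root_cause": "no segregation-of-duties policy",
               "resource_needs": "2 FTEs"},
    "CAP-02": {"point_of_contact": "A. Roe",
               "root_cause": "",          # restates FISCAM text, no root cause
               "resource_needs": "TBD"},
}
print(incomplete_plans(plans))  # -> ['CAP-02']
```

A plan like "CAP-02" above mirrors the finding that half of the inadequate plans described only FISCAM control requirements rather than the underlying root cause.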
For the remaining 7 deficiencies, DFAS did not provide adequate documentation to support that the corrective action plans were adequately implemented. For example, DFAS provided us a copy of a documented procedure as support for the implementation of one of its corrective action plans; however, the documented procedure provided by DFAS was not relevant to the identified deficiency. In addition, DFAS did not provide support that a corrective action had been tested and had successfully remediated the deficiency, and for another deficiency the test results showed that it had not been successfully remediated by the implemented corrective action. Further, the corrective action plan for another deficiency noted that it would not be fully remediated until February 2014, which was 4 months after DFAS asserted audit readiness. DFAS stated that the actions taken to address these 18 deficiencies were appropriate. However, we found that in 3 of the 18 instances, corrective actions had not been taken as required by the FIAR Guidance and that the documentation provided by DFAS for the other 15 deficiencies was insufficient. Without implementing adequate corrective action plans, DFAS lacks sufficient assurance that these identified control deficiencies were remediated, which will negatively affect the accuracy and reliability of its contract pay transaction data. DFAS submitted its monthly FIP status report for the department to monitor its progress in meeting interim and long-term goals. However, we found that DFAS’s status reports were not accurate and complete. For example, although DFAS has reported since November 2012 on its FIP status report that its Corrective Action Phase was completed in August 2012, DFAS did not assert its Corrective Action Phase as complete until October 2013. 
Further, DFAS did not include in the status report the information required by the FIAR Guidance for the Corrective Action Phase, such as the identified weaknesses by classification (e.g., material weaknesses), and respective corrective actions with targeted completion dates. DFAS officials explained that they did not update the contract pay FIP status report to include the information required by the FIAR Guidance for the Corrective Action Phase because of limitations in the software used to maintain the FIP. They explained that the software does not allow them to make significant updates to the FIP and they would have to develop a work-around to update the FIP, such as creating a new project in the software with the required updates. However, this information is key for DOD’s oversight of the components’ audit readiness efforts, as it is used by DOD’s key stakeholders and governing bodies for financial improvement and audit readiness to oversee the FIAR effort and is reported publicly on a biannual basis. Further, because the status information reported by DFAS is inaccurate and incomplete, it could misinform stakeholders as to the status of DFAS’s audit readiness efforts and negatively affect the adequacy and effectiveness of the components’ audit readiness plans for contract pay. DFAS notified the FIAR Directorate that it had implemented the necessary corrective action plans and developed an audit readiness strategy. The FIAR Directorate reviewed DFAS’s supporting documentation for its contract pay FIP and authorized DFAS to undergo an SSAE No. 16 examination. DFAS’s audit strategy consisted of undergoing an SSAE No. 16 examination for its contract pay FIP and, as stated earlier, evaluating the three processes excluded from its contract pay FIP for each of the components to support their audit readiness efforts.
However, DFAS did not provide documentation (an updated CONOPS or memorandum of understanding) to show that it had coordinated with the components to determine how it would support their audit readiness efforts for those processes excluded from the FIP as required by the FIAR Guidance. For example, because DFAS has not implemented a memorandum of understanding with the components, it is unclear whether the Army implemented the necessary compensating controls in the absence of assurance from DFAS that its contract pay processes, systems, and controls were designed and operating as intended. As stated earlier, DFAS has not completed its evaluation of the three processes that were excluded from its contract pay FIP for the components, including the Department of the Army; however, the Army asserted in June 2013 that its processes, systems, and controls for contract pay were audit ready. In addition, DFAS did not assert audit readiness of the processes, systems, and controls included in its contract pay FIP until October 2013. Thus, the usefulness of DFAS’s efforts in support of the Army’s and other components’ audit readiness efforts remains questionable. DFAS recognized the importance of implementing a FIP to improve its contract pay processes, systems, and controls and performed steps required by the FIAR Guidance, such as performing internal control, IT, and substantive testing. However, DFAS did not fully comply with the requirements in the FIAR Guidance for the Discovery and Corrective Action Phases; therefore, the FIP did not support DFAS’s October 2013 assertion that its contract pay controls were suitably designed and operating effectively. As a result, DFAS did not have assurance that its processes, systems, and controls can produce and maintain accurate, complete, and timely financial management information for the approximately $200 billion of contract pay disbursements it annually processes on behalf of DOD components. 
For example, DFAS did not perform adequate planning and testing activities for the Discovery Phase of its FIP. In addition, DFAS did not provide adequate documentation demonstrating that it had remediated certain identified deficiencies. Although DFAS asserted audit readiness, correcting the weaknesses identified in this report can help ensure that it effectively carries out its contract pay mission and implements, maintains, and sustains the necessary financial improvements to its contract pay processes, systems, and controls. Until DFAS does so, its ability to properly process, record, and maintain accurate and reliable contract pay transaction data is questionable. To ensure that DFAS is able to obtain the necessary assurance that its contract pay end-to-end process can produce, maintain, and sustain accurate, complete, and timely information in support of the components’ and DOD-wide financial improvement and audit readiness efforts, we recommend that the Under Secretary of Defense (Comptroller)/Chief Financial Officer direct the Director of the Defense Finance and Accounting Service to take the following nine actions:

Address deficiencies in its Discovery Phase planning activities for contract pay by performing the following:
- Document its contract pay end-to-end process by developing the necessary flowcharts and narratives for those processes excluded from the FIP.
- Assess the materiality (i.e., dollar activity and risk factors) of its processes, systems, and controls.
- Complete a memorandum of understanding with each of the components.

Address deficiencies in its Discovery Phase testing activities by performing the following:
- Validate the completeness and accuracy of the populations of transactions used to perform testing.
- Consider and assess the design and operational effectiveness of the entity-level general controls that were not tested by DFAS, as appropriate.
- Document and execute an audit strategy or plan for application-level testing of system controls.
- Coordinate with the components to classify all identified deficiencies as control deficiencies, significant deficiencies, and material weaknesses.

Address deficiencies in its Corrective Action Phase activities by performing the following:
- Assess the population of implemented corrective action plans to determine whether the deficiencies we found in our nongeneralizable sample of DFAS’s corrective action plans are more widespread in the population.
- Revise its FIAR status reports to accurately reflect the current status of its audit readiness efforts.

We provided a draft of this report to DOD for comment. In its written comments, reprinted in appendix II, DOD concurred with our recommendations. DOD also described planned and ongoing actions that DFAS and the FIAR Directorate are taking to address the recommendations, including developing procedures for the processes excluded from DFAS’s contract pay FIP; performing a materiality assessment of processes, systems, and controls; completing a memorandum of understanding to document roles and responsibilities for each component; validating the completeness and accuracy of populations of transactions used to perform testing; and reviewing and certifying corrective actions. DOD also stated that significant progress had been made but much work remained to be accomplished, including applying lessons learned in implementing the FIAR Guidance during audit preparations, as our recommendations indicated. Further, DOD commented that there had been positive results and it was expecting a favorable opinion from the ongoing independent public accountant examination being conducted under SSAE No. 16. However, as discussed in our report, the scope of DFAS’s SSAE No. 16 examination was limited and did not cover all key processes that will materially affect the timeliness, accuracy, and reliability of its contract pay transaction data.
Therefore, until DFAS completes its other efforts, such as establishing a general ledger reconciliation process, it does not have reasonable assurance that its SSAE No. 16 examination will satisfy the needs of the components or provide the expected benefits to the department-wide effort to assert audit readiness for contract pay as a key element of the SBR. We are sending copies of this report to the Secretary of Defense, the Under Secretary of Defense (Comptroller)/Chief Financial Officer, the Director of the Defense Finance and Accounting Service, the Director of DFAS-Columbus, the Director of the Office of Management and Budget, and appropriate congressional committees. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9869 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made major contributions to this report are listed in appendix III. To determine the extent to which the Defense Finance and Accounting Service (DFAS) implemented its contract pay Financial Improvement Plan (FIP) in accordance with the Financial Improvement and Audit Readiness (FIAR) Guidance, we compared DFAS’s contract pay FIP with the FIAR Guidance to determine if the FIP contained all steps and supporting documentation that the FIAR Guidance requires the components to complete. Using the FIAR Guidance, we analyzed DFAS’s FIP supporting documentation, such as process narratives and flowcharts, and test plans and test results. We also analyzed DFAS’s efforts to address deficiencies identified during testing. Specifically, we selected a nongeneralizable sample of 25 deficiencies that were reported on the FIAR Directorate’s Tracking Sheet as of September 23, 2013. 
To ensure the reliability of the data reported on the Tracking Sheet, we (1) interviewed FIAR Directorate officials to obtain an understanding of the process they followed to monitor and validate DFAS’s efforts to remediate identified deficiencies and (2) reviewed the actions taken to ensure that all deficiencies identified during the testing were included in the Tracking Sheet. We also reviewed the data on the Tracking Sheet for outliers, such as the deficiencies reported on the Tracking Sheet as not being fully remediated or controls tested for which DFAS did not identify any deficiencies. As a result, we excluded 174 items from the total of 542 items on the Tracking Sheet for a population of 368 deficiencies. From this population, we selected a random sample of 20 deficiencies with noted corrective action plans that were designated as remediated by DFAS as of September 23, 2013. We also selected from the population of 368 deficiencies an additional 5 deficiencies: (1) 2 to include deficiencies associated with DFAS’s testing of general controls that were not included in the initial random sample and (2) 3 deficiencies identified by DFAS as remediated with a corrective action plan where the FIAR Directorate noted that the controls tested did not apply to DFAS’s contract pay FIP. We also interviewed officials from DFAS’s Office of Audit Readiness, DFAS’s Internal Review, and the FIAR Directorate to obtain explanations and clarifications on the results of our evaluation of the FIP. We conducted this performance audit from May 2012 to April 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
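The sample-selection arithmetic described in this appendix—542 Tracking Sheet items, 174 exclusions, a random sample of 20 remediated deficiencies, and 5 additional purposively selected deficiencies—can be sketched as follows. The deficiency identifiers, the exclusion rule, and the purposive-selection rule are placeholders; only the counts come from the report.

```python
# Hedged sketch of the sample selection described in this appendix.
# Synthetic stand-ins are used for the Tracking Sheet items; only the
# counts (542, 174, 368, 20, 5) are taken from the report.
import random

population = [f"deficiency-{i}" for i in range(542)]
# Stand-in for the 174 excluded items (e.g., not fully remediated, or
# controls tested with no deficiency identified)
excluded = set(population[:174])
eligible = [d for d in population if d not in excluded]

rng = random.Random(0)               # fixed seed for a repeatable selection
random_sample = rng.sample(eligible, 20)
purposive = eligible[:5]             # stand-in for the 5 judgmentally chosen items

print(len(eligible), len(random_sample), len(purposive))
```

In the actual review, the 5 purposive items were chosen to cover general-control deficiencies absent from the initial random sample and items the FIAR Directorate flagged as not applicable to the contract pay FIP; the slice above is only a stand-in for that judgment.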
In addition to the contact named above, Arkelga Braxton (Assistant Director), Greg Marchand (Assistant General Counsel), Jason Kirwan, Omar V. Torres (Auditor-in-Charge), Jason Kelly, Sabrina Rivera, and Heather Rasmussen made key contributions to this report.

The National Defense Authorization Act for Fiscal Year 2013 mandated that DOD's FIAR Plan include the goal of validating that DOD's Statement of Budgetary Resources (SBR) is audit ready by no later than September 30, 2014. DOD identified contract pay as one of the key elements of its SBR. DFAS, the service provider responsible for the department's contract pay, asserted that its processes, systems, and controls over contract pay were suitably designed and operating effectively to undergo an audit. DOD's FIAR Guidance provides a methodology DOD components are required to follow to develop and implement FIPs to improve financial management and assert audit readiness. The FIP is a framework for planning, executing, and tracking the steps and supporting documentation necessary to achieve auditability. GAO is mandated to audit the U.S. government's consolidated financial statements, including activities of executive branch agencies such as DOD. This report discusses the extent to which DFAS implemented its contract pay FIP in accordance with the FIAR Guidance. GAO reviewed the FIP and related work products, such as process flowcharts, test plans, and test results, and interviewed DFAS and DOD officials. The Defense Finance and Accounting Service (DFAS) is responsible for processing and disbursing nearly $200 billion annually in contract payments (contract pay) for the Department of Defense (DOD). DFAS recognized the importance of implementing a Financial Improvement Plan (FIP) to improve its contract pay processes, systems, and controls, and performed steps required by DOD's Financial Improvement and Audit Readiness (FIAR) Guidance, such as performing testing of internal controls and substantive processes.
However, GAO found that DFAS did not fully implement the steps required by the FIAR Guidance. GAO found numerous deficiencies in the implementation of DFAS's contract pay FIP, including the following: DFAS did not adequately perform certain planning activities for its contract pay FIP as required by the FIAR Guidance. For example, DFAS did not assess the dollar activity and risk factors of its processes, systems, and controls, which resulted in the exclusion of three key processes from the FIP, including the reconciliation of its contract pay data to the components' general ledgers. Standards for Internal Control in the Federal Government states that control activities such as reconciliations are an integral part of an entity's planning, implementing, reviewing, and accountability for stewardship of government resources and achieving effective results. As a result, DFAS did not obtain sufficient assurance that the contract disbursements are accurately recorded and maintained in the components' general ledgers, and that the status of DOD's contract obligations is accurate and up-to-date. DFAS did not adequately perform required testing of its contract pay controls, processes, and balances. For example, DFAS did not adequately validate the populations used to perform substantive and internal control testing as required by the FIAR Guidance. DFAS officials stated that they had validated the population that was tested; however, GAO found that the process followed by DFAS for validating the population did not include a reconciliation of the population to the components' general ledgers. As a result, additional deficiencies may exist in DFAS's contract pay controls and additional errors may exist in the recorded transactions activity and balances, which affects the components' ability to rely on DFAS's controls over contract pay.
DFAS did not provide adequate documentation to support that it had remediated all of the identified control deficiencies that DFAS stated had been corrected. GAO's review of a nongeneralizable sample of 25 of these deficiencies found that in 3 instances, corrective actions had not been taken as required, and in 15 other instances, the documentation provided by DFAS did not sufficiently support that the identified deficiencies were remediated. DFAS had adequately developed and implemented the necessary corrective action plans for 7 of the deficiencies GAO reviewed. Although DFAS has asserted audit readiness, until it corrects the deficiencies and fully implements its FIP in accordance with the FIAR Guidance, its ability to process, record, and maintain accurate and reliable contract pay transaction data is questionable. Therefore, DFAS does not have assurance that its FIP will satisfy the needs of the components or provide the expected benefits to department-wide audit readiness efforts. GAO is making nine recommendations for DFAS to fully implement the requirements in the FIAR Guidance in the areas of planning, testing, and corrective actions. DOD concurred with the recommendations and described its actions to address them. |
“Plug-ins” refer to vehicles that can be plugged into an electrical outlet to charge the car’s battery. The option to plug in and charge is also the basic difference between a plug-in and a “conventional hybrid,” which uses both gasoline and stored energy in a battery to power the vehicle. Battery technology plays an important role in the development of plug-ins. Nickel metal hydride batteries—such as those currently used in existing conventional hybrid vehicles—can only store enough energy for limited all-electric driving without the batteries being made so large as to affect the vehicle’s fuel economy. As a result, many manufacturers are developing lithium-ion batteries because they have the potential to store more energy and are typically smaller and lighter than batteries currently in use. Plug-ins are expected to come equipped with a 110-volt plug that can be used with any standard electrical outlet. Some manufacturers also plan to make 220-volt charging an option, which requires the same type of outlet as used for household appliances like clothing dryers. With a 110-volt plug, manufacturers estimate that most plug-ins will reach a full charge if the vehicle were plugged in overnight (estimates are 8 hours depending on the size of the battery). A 220-volt plug can reduce that time by at least half. Technologies to further shorten the length of time needed to charge a plug-in are being explored. See figure 1 for descriptions of several types of plug-ins. [Figure 1 describes several types of plug-ins: the plug-in hybrid electric vehicle (PHEV), such as the plug-in version of the Saturn Vue Green Line, powered either by an electric motor and internal combustion engine together or by an electric motor alone (with the engine charging the battery but not turning the wheels); and the neighborhood electric vehicle (NEV), powered by an electric motor with a maximum speed of 25 mph.] These plug-ins are powered differently: Plug-in hybrid electric vehicles (referred to as “plug-in hybrids” in this report) have both an internal combustion engine and a battery pack that can power the vehicle.
Unlike conventional hybrid vehicles, plug-in hybrids offer drivers an “all-electric range” of driving powered by the battery, with an internal combustion engine that extends the overall range of the vehicle. Plug-in hybrids can be designed to use the two power sources in different ways. For example, as shown in figure 1, the plug-in version of the Saturn Vue Green Line can use its electric motor or gasoline-powered engine either separately or simultaneously to drive the vehicle’s wheels. The Chevrolet Volt only uses power from the electric motor to drive the wheels. The gasoline engine in the Volt is used to generate additional power for the electric motor, but it does not use gasoline to power the wheels. State efforts include California’s Zero-Emission Vehicle program—which has a goal of increasing the number of low-emission vehicles in California and was recently modified to include plug-in hybrids, conventional hybrids, and all-electric vehicles. The federal government is also trying to reduce petroleum consumption in federal fleet vehicles by requiring agencies to take several actions and by setting a number of goals and requirements for federal agencies, as follows: Begin acquiring plug-in hybrid electric vehicles: Executive Order 13423 sets a goal for federal agencies operating fleets of 20 or more vehicles to begin using plug-in hybrids when these vehicles become commercially available and can be purchased at a cost reasonably comparable to conventional vehicles based on life-cycle costs. Acquire low greenhouse gas emitting vehicles: The Energy Independence and Security Act of 2007 (EISA) prohibits agencies from acquiring any light-duty motor vehicle or medium-duty passenger vehicle that is not a “low greenhouse gas emitting vehicle.” Decrease petroleum consumption: EISA sets a goal of decreasing annual vehicle petroleum consumption at least 20 percent relative to a baseline established by the Energy Secretary for fiscal year 2005.
Acquire alternative fuel vehicles (AFV): The Energy Policy Act of 1992 (EPAct 1992) requires that 75 percent of all vehicles acquired by the federal fleet in fiscal year 1999 and beyond be AFVs. Eligible vehicles include any vehicle designed to operate on at least one alternative fuel, including electric vehicles and plug-in hybrids. GSA considers neighborhood electric vehicles to be equipment, rather than vehicles; acquiring them does not help agencies meet the AFV acquisition requirement. Use alternative fuel with AFVs: The Energy Policy Act of 2005 (EPAct 2005) requires that all AFVs be fueled with alternative fuel. However, DOE guidance grants an agency a waiver from meeting the requirement if it can prove that alternative fuel is not available within 5 miles of or 15 minutes from a vehicle’s address, or if the cost of alternative fuel exceeds that of conventional fuel. Increase consumption of alternative fuels: EISA requires that no later than October 2015 and each year thereafter, agencies must achieve a 10 percent increase in vehicle alternative fuel consumption relative to a baseline established by the Energy Secretary for fiscal year 2005. The American Recovery and Reinvestment Act of 2009 (Recovery Act) appropriated funding to help agencies meet some of these goals and requirements. For example, it provided $300 million for GSA to purchase vehicles with higher fuel economy. Several federal agencies and offices play key roles in ensuring agency compliance with fleet-related requirements. The Council on Environmental Quality is responsible for issuing instructions regarding implementation of Executive Order 13423. DOE is responsible for issuing guidance to agencies relative to EPAct 1992 and 2005, and EISA; compiles an annual report on agencies’ progress in meeting facility and fleet energy requirements that it submits to Congress; and promotes the development of plug-in technology.
For example, DOE’s Vehicle Technologies Program is actively evaluating plug-in hybrid technology and researching the most critical technical barriers to commercialization. Moreover, DOE performs battery testing and evaluation, vehicle simulation, and plug-in hybrid system testing through its work at Argonne and Idaho National Laboratories. DOE also provides financial support to promote the development of plug-in hybrid technology. For example, the department will contribute up to $30 million over 3 years for three cost-shared plug-in hybrid demonstration and development projects. These projects are expected to accelerate the development of plug-in hybrids capable of traveling up to 40 miles on electricity only without recharge. The Office of Management and Budget (OMB) oversees agencies’ implementation of fleet goals. Specifically, it provides recommendations to help agencies overcome barriers in meeting these goals and requirements through transportation management scorecards it issues semiannually. These scorecards track agencies’ performance on a number of indicators. GSA is responsible for acquiring vehicles for agencies to use in the federal fleet. Federal agencies may choose to purchase or lease vehicles for their motor vehicle fleets. With the exception of USPS, which can acquire its own vehicles or use GSA, agencies that choose to purchase vehicles are required by federal regulation to obtain them through GSA, which is able to acquire vehicles at significant discounts. Although federal agencies may lease vehicles from whatever source they choose, including commercial lessors, most agencies lease from GSA because of the significant discounts it is able to offer. In addition to motor vehicles, GSA also lists specialized vehicles, such as neighborhood electric vehicles, on its supply schedules. Lastly, GSA also provides fleet management consulting services and guidance for federal agencies. 
Three additional organizations of federal fleet managers exist to help agencies manage their fleets and facilitate information sharing. The Interagency Committee for Alternative Fuels and Low-Emission Vehicles (INTERFUEL) offers a forum for fleet managers to understand statutory requirements and rule-making processes, discuss policy implications and barriers, and develop comments on legislation, executive orders, and new regulations related to the use of alternative fuels and reductions in petroleum consumption among the federal fleet. The Federal Fleet Policy Council (FEDFLEET) consists of representatives from agencies operating a federal motor vehicle fleet and provides a focal point to federal agencies for the coordination of vehicle management problems, plans, and programs common to all federal fleets. Finally, the Motor Vehicle Executive Council establishes a long-term strategic vision for the management of governmentwide motor vehicles and develops interagency planning in conjunction with FEDFLEET. The federal fleet currently numbers about 645,000 vehicles, according to fiscal year 2008 data—the most recent data available—and includes a wide range of vehicles from large trucks to small sedans, many of which are alternative fuel vehicles such as flex-fuel vehicles, which can be fueled with gasoline or ethanol (E85). The fleet may be roughly divided into three sectors: DOD as a whole operates 30 percent of the fleet, USPS operates 34 percent of the fleet, and all other civilian agencies operate the remaining 36 percent. From fiscal years 2004 through 2008, the overall size of the fleet increased about 4 percent. Most vehicles in the federal fleet are owned by the agencies that operate them—for example, in fiscal year 2008 about 69 percent of vehicles were owned. The remaining 31 percent were leased almost entirely from GSA rather than commercial lessors.
The number of leased vehicles as a proportion of the overall fleet remained essentially unchanged from fiscal years 2004 through 2008, showing a slight overall increase of 1 percent. In addition, federal agencies placed orders for 70,865 vehicles through GSA in fiscal year 2008, or approximately 11 percent of the overall fleet. This figure includes vehicles purchased by GSA for lease to agencies, as well as those purchased by USPS. The majority of vehicles in the federal fleet are light duty trucks—44 percent—with passenger vehicles making up 36 percent of the fleet, and medium and heavy duty trucks, buses, and ambulances making up the remaining 20 percent. The adoption of plug-ins could result in several benefits by reducing petroleum consumption, such as reduced emissions of greenhouse gases and air pollutants. However, the environmental benefits depend on whether the electricity used to power plug-ins emits fewer greenhouse gases and pollutants than the fuel it replaces, as well as on consumers adopting plug-ins, who may be deterred if plug-ins are not cost-effective. The cost-effectiveness of plug-ins will be determined by the cost of batteries and trends in the price of gasoline relative to the price of electricity to charge the vehicles. Through their potential to make substantial reductions in oil consumption, plug-ins could produce environmental benefits such as reducing greenhouse gas emissions. All-electric vehicles will consume no gasoline, and the fuel economy of plug-in hybrids is expected to be high, which means these vehicles will consume limited amounts of gasoline. For example, in tests that mimic the driving patterns of a typical driver, a test fleet of hybrids converted to plug-in hybrids operated by Google’s RechargeIT program averaged 93.5 mpg. Plug-in hybrids also have the potential to operate without consuming any gasoline. 
Specifically, planned plug-in hybrids will be able to operate on electric power for 10 miles to about 40 miles, depending on the specific design of the vehicle. The vehicle would consume no petroleum at all if drivers could limit their driving between charges to the vehicle’s all-electric range. Burning fossil fuels, including gasoline, accounts for most of the world’s manmade greenhouse gas emissions, primarily carbon dioxide (CO2), which have been linked to global climate change. According to the Environmental Protection Agency (EPA), the transportation sector accounted for about 28 percent of the total U.S. greenhouse gas emissions produced in 2006. That number rises to 36 percent if nonroad mobile sources such as construction, farm, lawn, and garden equipment and upstream transportation fuel-related emissions such as extraction, shipping, refining, and distribution are included. Within transportation, passenger cars and light duty trucks, which include sport utility vehicles (SUV), minivans, and other vehicles commonly used for personal transportation, produced 62 percent of greenhouse gas emissions. Recent research suggests that plug-ins could produce substantial reductions in CO2 emissions—depending on the size of the vehicle and the energy source used to generate electricity—when plug-in hybrids driven within their all-electric range (in this case either 20 or 60 miles) were compared with gasoline-powered vehicles (see table 1). As the table indicates, reductions in CO2 emissions were greatest when the electricity came from low-carbon sources, such as nuclear power, renewable energy, or fossil fuel plants equipped with carbon sequestration technology, which captures CO2 before it is emitted into the atmosphere. However, shifting to these sources will require new power plants that can be expensive to build, as well as investments to develop, test, and equip coal and other fossil fuel plants with carbon sequestration technology. In addition, the construction of new nuclear plants can be controversial because of public concern about safety. Similarly, construction of some renewable energy sources, such as wind turbines, can be controversial.
In addition, in regions of the country that are heavily reliant on coal for power generation, conventional hybrids might offer greater CO2 emissions reductions—that is, a plug-in could produce more CO2 emissions than a conventional hybrid. Thus, in the immediate future, plug-ins could be used to reduce greenhouse gas emissions—relative to conventional hybrids—in regions of the country where electricity is already generated from low-carbon energy sources. For example, a plug-in vehicle charging in a coal-reliant state may not reduce greenhouse gas emissions relative to a conventional hybrid. But a plug-in charging in a state that relies heavily on hydropower would substantially reduce greenhouse gas emissions. However, developing policy or incentives to encourage consumers to buy plug-ins only in regions with low-carbon energy sources could be difficult and may not correspond with manufacturers’ business plans. Plug-ins could also reduce emissions that affect air quality. About 50 percent of Americans live in areas where levels of one or more air pollutants are high enough to affect public health. Research we reviewed indicated that plug-ins could shift air pollutant emissions away from population centers even if there was no change in the fuel used to generate electricity (e.g., if low-emitting renewable sources were not substituted for higher-emitting sources). For example, a study from the University of Texas modeled the potential impact plug-in hybrids could have on the formation of smog in a region of the country that relies heavily on coal for power generation. Specifically, the study estimated that using plug-in hybrids substantially reduced smog in major cities if they were charged at night. These benefits remained even if nighttime power generation had to be increased to full capacity to meet additional demand. One potential downside the study identified was that rural areas near power plants could experience an increase in the overall amount of airborne emissions.
However, since power generation would be increased at night, pollutants would not be exposed to sunlight, which would limit the production of smog. This benefit would depend on consumers adopting a substantial number of plug-ins. Finally, plug-in vehicles, which are expected to use lithium-ion batteries, could also provide environmental benefits by reducing toxic waste that would otherwise be generated from car batteries. Compared with lead acid batteries in gasoline vehicles and nickel metal hydride batteries used in conventional hybrid vehicles, lithium-ion batteries produce insignificant levels of toxic waste, which means they are less likely to pose environmental challenges in disposal. However, extracting lithium from locations where it is abundant, such as in South America, could pose environmental challenges that would damage the ecosystems in these areas. Furthermore, lithium-ion batteries can pose challenges and potential costs and risks related to safety and transport. For example, lithium-ion batteries have previously posed a risk of “thermal runaway,” in which the batteries overheat and catch fire. Mitigating this safety issue is a priority of battery manufacturers, and one battery manufacturer we visited showed us several innovations to ensure that this would not be a risk while operating the vehicle. In addition, because of the current risks, there are restrictions on the transportation of lithium-ion batteries, which could pose challenges for consumers—including the federal government—in maintaining these vehicles. Besides offering environmental benefits, reduced oil consumption from plug-ins could help to limit U.S. vulnerability to supply reductions and subsequent oil price shocks. 
A study by the EPRI estimated that if plug-in hybrid vehicles grew to compose about 62 percent of the cars on the road, they could help save about 3.7 million barrels of oil per day by 2050 (about 9.3 million barrels of oil were consumed per day by automobiles in the United States in 2007). Research from the National Renewable Energy Laboratory found that a plug-in hybrid with a 60-mile all-electric range could reduce gasoline consumption by 53 percent to 64 percent over a gasoline vehicle. By comparison, a conventional hybrid compared with the same gasoline vehicle would reduce consumption by 21 percent to percent. Since 1973, supply constraints have contributed to several energy price shocks. The most recent price spike not only increased basic costs for consumers but also increased operating costs for organizations like USPS, which operates a large fleet of vehicles. Although gas prices declined steeply in late 2008 (see fig. 2), worldwide demand for oil is expected to grow, and gas prices are expected to rebound as economic conditions improve. The administration, in an effort to strengthen national security, has set as one of its objectives decreasing U.S. reliance on foreign sources of energy. According to the Energy Information Administration, in 2007 about 58 percent of the oil consumed in the United States was imported. Through their potential to reduce oil consumption overall, plug-ins could help to reduce consumption of oil coming from foreign sources, but they could also create a reliance on another foreign resource. Specifically, most of the world’s reserves of lithium, which is needed to manufacture batteries for plug-ins, are located abroad, predominantly in South America and China (see table 2).
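As a rough check on the EPRI figures cited above, the share of 2007 U.S. automobile oil consumption that the projected daily savings represents is straightforward to compute:

```python
# Figures from the text: EPRI-projected savings and 2007 U.S. automobile use
epri_savings_bpd = 3.7e6   # barrels of oil saved per day by 2050
us_auto_use_bpd = 9.3e6    # barrels consumed per day by U.S. automobiles, 2007

share = epri_savings_bpd / us_auto_use_bpd
print(f"{share:.0%}")  # about 40% of 2007 automobile consumption
```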
The United States has supplies of lithium, but if demand for lithium exceeded domestic supplies, or if lithium from overseas is less expensive, the United States could substitute reliance on one foreign resource (oil) for another (lithium). The consequences of relying on foreign sources of lithium could vary. On one hand, to the extent that this product is less expensive and readily available, as has often been the case for foreign sources of oil, manufacturers would be able to produce batteries at lower cost. On the other hand, if lithium supplies prove unstable—for example, due to political unrest in the countries in which they are located—or follow a similar pattern of price shocks as has oil, cost and risk for battery and plug-in manufacturers would increase. Furthermore, manufacturing batteries to mass produce plug-ins could be limited by the amount of lithium that can be extracted and produced. According to EPA officials, there is considerable disagreement on the ultimate worldwide supply of lithium, making it difficult to determine how many (or how few) batteries for plug-in vehicles could be manufactured in the long term. In addition, while current levels of global production (mining and refining) of lithium are measurable, other uncertainties—such as how much lithium will be needed in each battery—make it difficult to determine whether current levels of lithium production will need to be increased to meet demand. Despite these issues, reliance on foreign sources of lithium may not pose the same dependence issues as oil. For example, industry officials told us that lithium, including that from spent car batteries, is highly recyclable, so some future demand could be met by ensuring that sufficient recycling processes are in place. Industry officials also noted that the current recycling process used for car batteries—which has a high rate of participation by consumers, auto dealerships, and parts suppliers—could be adapted to lithium-ion batteries.
In addition, technology such as ultracapacitors, which are energy storage devices that are an alternative to batteries and that do not need lithium, or batteries that use materials besides lithium, which are being researched by at least one auto manufacturer, could be used in plug-ins. If these options prove viable, it would help avoid reliance on a single commodity for the production of plug-ins. Environmental and other benefits will depend on consumers adopting plug-ins, and consumers may be deterred if plug-ins are not cost-effective. The cost of lithium-based batteries will make plug-ins more expensive than other vehicles, including conventional hybrids. According to industry participants we interviewed and recent research, the current cost of lithium batteries is about $1,000 to $1,300 per kilowatt hour. Depending on the size of the battery pack, which is a key factor in the all-electric range of plug-in hybrids and all-electric vehicles, the additional cost per vehicle can be substantial at this price. Ultimately, however, these batteries may become more affordable. A study by Carnegie Mellon University researchers found that if the cost of lithium batteries could be reduced to $250 per kilowatt hour, plug-in hybrids could become cost competitive with both conventional hybrids and gasoline vehicles. Industry observers from one organization we interviewed thought that $250 is an aggressive target, while a report from the Massachusetts Institute of Technology indicated that this price could be attainable in 20 to 30 years as manufacturers achieve economies of scale. However, if this price could be achieved, it would substantially reduce the cost battery packs add to the price of a plug-in vehicle. Table 3 illustrates how the total cost of a battery pack can change depending on its size and the per kilowatt hour cost.
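The interaction that table 3 illustrates is simple multiplication of pack size by per-kilowatt-hour price. The sketch below uses the prices from the text; the two pack sizes are hypothetical examples, not figures from the report.

```python
def pack_cost(size_kwh, cost_per_kwh):
    """Total battery pack cost in dollars: capacity times unit price."""
    return size_kwh * cost_per_kwh

# Hypothetical pack sizes (kWh) at current prices vs. the $250 target
for size in (6, 16):
    for price in (1300, 1000, 250):
        print(f"{size} kWh at ${price}/kWh -> ${pack_cost(size, price):,.0f}")
```

At current prices a large pack adds well over $15,000 to a vehicle; at the $250 target the same pack adds a few thousand dollars, which is why the per-kilowatt-hour price dominates the cost-effectiveness discussion.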
Until the cost of batteries comes down, the Carnegie Mellon study concluded, the weight and size of the battery is a key consideration in the extent to which plug-in hybrids are cost-effective methods of reducing greenhouse gas emissions. For example, this study concluded that plug-in hybrids with smaller batteries that are charged frequently—every 10 miles or fewer—are less expensive and release fewer greenhouse gases than conventional hybrids, but plug-in hybrids with larger batteries and all-electric ranges may not offer the same advantages. General Motors has contested the per kilowatt hour cost of batteries used in the Carnegie Mellon study, stating that the cost of the Volt’s battery pack is hundreds less than $1,000 per kilowatt hour—the baseline case used in the study to evaluate cost-effectiveness. General Motors further noted that its battery research team has already started work on new concepts that will further decrease the cost of the Volt battery pack substantially in a second-generation Volt pack. Gasoline and electricity costs will also determine whether plug-ins are cost-effective. Specifically, even if plug-ins have higher upfront costs, lower overall fueling costs relative to a gasoline-powered vehicle could offset the purchase price over time. For this to occur, the price of gasoline must be high relative to the cost of electricity to charge the vehicles. However, gasoline prices have varied greatly in the last few years, and if consumers do not believe that prices will return to previous highs, they may be unwilling to purchase a plug-in. Also, if power companies construct new power plants, including plants that use low-carbon power sources, these investments may increase the cost of electricity, which could offset the savings from reduced gasoline consumption, making plug-ins less appealing to consumers. Manufacturers plan to introduce several types of plug-in vehicles over the next 6 years.
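Whether lower fueling costs offset a higher purchase price can be framed as a simple payback calculation. Every input below is a hypothetical assumption for illustration, not a figure from the report.

```python
def annual_fuel_savings(miles, mpg_gas, gas_price, kwh_per_mile, elec_price):
    """Yearly savings from driving on electricity instead of gasoline."""
    gas_cost = miles / mpg_gas * gas_price       # gasoline cost per year
    elec_cost = miles * kwh_per_mile * elec_price  # electricity cost per year
    return gas_cost - elec_cost

def payback_years(price_premium, yearly_savings):
    """Years for fuel savings to recoup the plug-in's extra purchase price."""
    return price_premium / yearly_savings

# Hypothetical: 12,000 mi/yr, 30 mpg baseline, $3.00/gal, 0.3 kWh/mi, $0.10/kWh
savings = annual_fuel_savings(12_000, 30, 3.00, 0.3, 0.10)
years = payback_years(5_000, savings)  # assumed $5,000 price premium
```

Under these assumptions the premium pays back in about 6 years; cheaper gasoline or pricier electricity stretches that out, which is the sensitivity described above.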
However, certain factors, such as the limitations of current battery technology, could delay availability of plug-ins, and the current financial situation could prevent consumers from purchasing plug-ins. The federal government has taken steps to encourage the development and manufacturing of plug-ins and has additional options for furthering this goal. Plug-in vehicles are not widely available. Currently available plug-ins include neighborhood electric vehicles, which have limited uses, and all- electric vehicles being made in limited numbers by small auto manufacturers. In addition, kits are currently available that allow consumers to convert conventional hybrids into plug-in hybrid vehicles, although there are several problems with more widespread adoption of conversions. First and foremost, a conversion typically voids the warranty on the vehicle. Second, not all of the conversion kits available have been crash tested to ensure they will meet safety requirements set by the National Highway Traffic Safety Administration for operating a vehicle on public roads. Third, EPA officials noted that conversions constitute tampering with emissions control systems, which creates an uncertified vehicle, can lead to increased emissions, and may cause warning lights to fail even if there is a serious problem with the engine or emissions system. Although officials stated that companies can certify a converted vehicle and obtain a certificate of conformity for their product, which would enable them to legally sell their plug-in hybrids, none of the companies offering conversions have done so. Finally, conversion kits cost at least $10,000, in addition to the cost of the vehicle. These factors could create a deterrent for consumers who might otherwise consider converting their vehicles and, according to GSA and DOE officials, have prevented the federal fleet from using this option to save fuel. 
However, both domestic and foreign auto manufacturers have announced plans to develop plug-in hybrids and mass produce additional all-electric vehicles. In the near term—2009 through 2012—plug-ins are expected to include sports cars, compact sedans, SUVs, at least one all-electric pickup truck, and a commercial all-electric van. In 2013 and 2014, the number of models of cars and SUVs—both plug-in hybrids and all-electric vehicles— will expand, and a minivan may be introduced (see table 4). Information from the Association of International Automobile Manufacturers suggests that Asian manufacturers will focus on producing all-electric and conventional hybrid vehicles and that only one plug-in hybrid is currently being planned. Domestic auto manufacturers are planning more plug-in hybrids, in addition to all-electric vehicles, and plan to expand conventional hybrid technology to existing gasoline-fueled models. However, the bankruptcy and restructuring of Chrysler and General Motors could affect these plans. As explained in the note in table 4, we received information on these plans directly from Chrysler, General Motors, and other auto manufacturers. The planned vehicles will have a range of capacities. The expected all-electric driving range of plug-in hybrids varies from a low of 10 miles per charge for the planned plug-in version of the Saturn VUE to a 50-mile all-electric range per charge for the Fisker Automotive Karma. Many of the planned all-electric vehicles are expected to have a driving range of about 100 miles on a single charge, although Tesla Motors plans to introduce an all-electric sedan with a range of 300 miles. As discussed earlier, the larger batteries necessary for plug-in vehicles will result in these initial vehicles being considerably more expensive than comparable vehicles. 
For example, Phoenix Motorcars’ all-electric pickup truck is expected to retail for $47,500, which is about 81 percent higher than the $26,175 suggested retail price of the comparably sized Ford F-150 pickup truck. Similarly, the Chevrolet Volt is expected to retail for about $40,000 when it is first marketed, and it will be sized somewhere between a Chevrolet Cobalt and Pontiac G6. The Volt’s retail price is about $25,000 higher than the Chevrolet Cobalt and about $20,000 more than the Pontiac G6. Achieving economies of scale to help lower the cost of plug-in batteries will be difficult. For example, industry experts told us that manufacturing high-quality batteries requires considerable skill and sophisticated, precision-oriented manufacturing processes. Inadequate manufacturing processes will likely result in batteries that are more likely to fail. In addition, industry officials told us that most battery component manufacturing and assembly of battery packs is done abroad, and there is limited manufacturing capacity worldwide. While some manufacturers have announced plans to establish battery plants domestically, the capital investments will be significant. Congress established a program to assist companies interested in developing these plants in the Recovery Act. In addition, some industry participants told us that the purchasing power of the federal fleet could help manufacturers achieve economies of scale in battery manufacturing. However, with a total purchase of about 70,000 vehicles in 2008, and with only about 20,000 passenger sedans being purchased annually, the purchasing power of the federal government is small relative to the overall auto market. For example, about 13 million vehicles were sold in the United States in 2008 and about 16 million in 2007. In addition, questions about the potential longevity of lithium batteries remain and have caused at least one prominent manufacturer to be conservative in its plans to develop plug-ins.
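The price premiums quoted above are simple percentage comparisons, for example:

```python
def premium_pct(plug_in_price, baseline_price):
    """Price premium of a plug-in over a comparable vehicle, in percent."""
    return (plug_in_price - baseline_price) / baseline_price * 100

# Phoenix Motorcars pickup vs. comparably sized Ford F-150 (figures from the text)
print(round(premium_pct(47_500, 26_175)))  # 81
```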
In early tests, and under testing conditions, lithium-ion batteries have been shown to last for a sufficient number of charging cycles to enable plug-ins to have a comparable lifetime to conventional automobiles. However, if the batteries prove unreliable in real world conditions, manufacturers could be exposed to significant costs associated with warranties. In addition, if consumers believe they may have to replace the battery after the warranty expires, the cost of doing so may discourage them from buying plug-ins or could drive down vehicle resale prices. As plug-ins reach a significant level of market penetration, additional infrastructure to charge them will likely be needed. One study estimated that about 40 percent of consumers do not have access to an outlet near their vehicle at home. Consumers without ready access to an outlet, such as those who only have street parking, would need public charging infrastructure, which manufacturers and others told us could be installed at the relatively low cost of perhaps a few thousand dollars for a new charging box. By comparison, ethanol (E85), another alternative to petroleum, has struggled to make inroads as an alternative transportation fuel, in part because it can cost up to $62,400 to install a new E85 fuel pump. However, public charging infrastructure would require establishment of a new system for building outlets and billing for the power dispensed, whereas fueling stations for gasoline vehicles are already widely available. In addition, plug-ins could increase demand for electrical power and, over time, power companies may have to generate more electricity to meet this demand, depending on when and how often vehicles were charged. Results from a Duke University study suggested that if plug-in hybrids reached 56 percent of the cars on the road by 2030, they would require an increase in electricity production, much of which would likely come from additional coal plants. 
Although an increase in coal consumption would produce additional carbon dioxide emissions, the study noted that if this increased consumption came during off-peak hours, power companies would likely build additional capacity that produces electricity more efficiently and—excluding upfront capital costs—at lower cost on a daily operational basis. In the near term, a study by the World Wildlife Fund using 2005 levels of power generation estimated that 1 million plug-in hybrids would demand 0.04 percent of the nation’s power. In addition, a 2006 analysis by the Pacific Northwest National Laboratory estimated that, if plug-ins were charged during off-peak hours, about 84 percent of cars, SUVs, and pickup trucks on the road in 2001 could be supported without building new electricity-generation capacity. The variations in these studies are a consequence of different assumptions, and ultimately only real-world experience will show the actual demand for power. Thus, a large number of plug-ins could be put into use with available power, if consumers charge their plug-ins during off-peak hours. To encourage consumers to do so, cheaper rates for electricity could be charged after a certain hour at night. However, power companies would need to be able to apply different rates during off-peak hours and would need to make this cost advantage evident to consumers on their bills or through some other means, such as new technology. Such technology, or “smart charging infrastructure,” would likely need to include features that allow consumers to indicate by what time the car needs to be charged and a way to meter and bill consumers different prices for on- and off-peak consumption. Power companies, start-ups, and others have been working on smart charging infrastructure, but it is still under development. The economic recession has put the auto industry under significant financial stress, which could affect plans to introduce and mass-produce plug-ins over the next few years.
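The wide range across the power-demand studies cited earlier comes from their input assumptions. The sketch below shows the underlying arithmetic with hypothetical inputs (5 kWh drawn per vehicle per night, roughly 4,055 TWh of 2005 U.S. net generation); these are assumed values for illustration, not figures from any of the studies.

```python
vehicles = 1_000_000        # size of the hypothetical plug-in fleet
kwh_per_night = 5           # assumed average overnight charge per vehicle
us_generation_twh = 4_055   # approximate 2005 U.S. net generation, TWh

fleet_twh = vehicles * kwh_per_night * 365 / 1e9  # annual draw, kWh -> TWh
share = fleet_twh / us_generation_twh
print(f"{share:.2%}")  # on the order of a few hundredths of a percent
```

Small changes to the per-vehicle draw or the generation baseline shift the result noticeably, which is why the studies' estimates differ.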
In addition, if the following conditions are still present when manufacturers introduce plug-ins, consumers may also be discouraged from purchasing these vehicles. Declining sales: Auto sales declined in 2008 and early 2009, and while most auto manufacturers have been affected, declines have been more substantial for the “Detroit 3”—Chrysler, Ford, and General Motors. For example, Detroit 3 sales in the United States dropped by nearly 50 percent from February 2008 through February 2009, whereas U.S. sales for Honda, Nissan, and Toyota dropped 39 percent during this period. To stabilize their operations, Chrysler and General Motors will receive a total of about $13 billion and $50 billion in assistance, respectively, pending approval of the bankruptcy court and finalization of related transactions. To the extent that auto manufacturers have limited cash to continue developing plug-ins, as well as the capital to build or retrofit manufacturing plants to produce them, the development and availability of plug-ins could be hindered. Reduced consumer confidence: Deteriorating financial, real estate, and labor markets have reduced consumer confidence, which could make it difficult for manufacturers to market plug-in vehicles because of their significant price premium compared with less expensive gasoline-powered vehicles in the same class. Tight credit markets: Tightening credit markets have also limited the availability of loans for consumers to finance car purchases, even from the financial arms of the car companies. Should this continue, consumers may have difficulty financing the purchase of a plug-in. In addition to these issues, the recent spike and decline in gasoline prices may make it more difficult to market plug-ins in that consumers may be doubtful that they will recoup the high upfront costs of plug-ins through fuel savings over the life of the vehicle. 
However, industry stakeholders and researchers have pointed out that, in addition to fuel savings, buyers also consider performance, styling, and other intangibles—such as whether the vehicle makes a statement about its owner being “green”— when choosing between vehicles. The federal government has historically played a role in the research and development of plug-in vehicle technology and has recently provided grant funding for plug-in hybrid test fleets: Funding for basic research to develop technology: DOE funds basic research to develop battery technology for vehicles as well as other components necessary for electric-powered vehicles. DOE’s annual budget for such research was about $101 million in fiscal year 2009. In addition, the national laboratories have ongoing work related to plug-ins. Argonne National Laboratory has been designated by DOE as the lead laboratory and is testing and evaluating plug-in vehicle technology, including batteries, components, and vehicles, to shed light on the reliability of the technology over its expected life. Cost sharing for test fleets: DOE also supports the introduction of plug-in hybrid test fleets. For example, the Idaho National Laboratory is coordinating the collection and analysis of data from more than 150 converted plug-in hybrids deployed across the United States to understand the effects of real-world use on the technology. To initiate this test fleet, DOE established partnerships with organizations such as power companies, local government agencies, and others across the United States and Canada. DOE covered half the cost of converting a conventional hybrid to a plug-in hybrid, as well as the cost of the devices to collect and transmit data on fuel economy, charging patterns, and driver behavior back to the lab. In addition, DOE is administering a $30 million grant program to facilitate the deployment of demonstration vehicles to accelerate improvements to plug-in vehicle technology. 
The program offers funding to a team of businesses, including an auto manufacturer and a battery development company, that is willing to cover half of the cost of the demonstration fleet and data collection. In addition to research and development, the federal government has also taken steps to encourage the development and manufacture of plug-ins through a variety of programs, several of which were initiated by the Recovery Act. While these programs are designed to either directly or indirectly support the development and manufacture of plug-ins, they are still being implemented. Loans for modernizing manufacturing plants: The government has sought to help manufacturers manage the capital costs associated with producing advanced technology vehicles. In 2007, Congress established the Advanced Technology Vehicle Manufacturing (ATVM) loan program, which offers low-cost loans to auto manufacturers and component parts suppliers to retool aging plants or build new plants that will lead to the production of advanced vehicles that are at least 25 percent more fuel efficient than current vehicles for sale, or of advanced technology components for these new vehicles. Officials from the ATVM program noted that applications cover a wide range of technologies, from improvements to components for gasoline vehicles to major breakthroughs in advanced vehicle technology. This program received an appropriation in the fall of 2008 of $7.5 billion, and DOE, which is tasked with administering the program, plans to offer the first round of loans in June 2009. In addition, Title XVII of the Energy Policy Act of 2005 established a loan guarantee program for innovative energy technologies. Congress has authorized this program to provide up to a total of $22.5 billion of loan guarantees for a category of renewable or energy-efficient systems and manufacturing projects that could include production facilities for alternative fuel vehicles. 
Under the program, borrowers must pay the subsidy costs of the loan guarantees unless Congress appropriates funds to cover the costs, and it has not done so for alternative fuel vehicle production facilities. Battery manufacturing: To encourage the development of domestic manufacturing of advanced technology batteries, the Recovery Act appropriated $2 billion in grants for manufacturing batteries and related components. Battery technology to be targeted includes, but is not limited to, lithium-ion batteries, hybrid electrical systems, and related software. DOE will administer the program and released the solicitation on March 19, 2009. Direct funding to purchase fuel-efficient vehicles for the federal fleet: The Recovery Act appropriated $300 million to GSA for capital expenses associated with acquiring vehicles with high fuel economy, including conventional hybrids, plug-in hybrids, and all-electric vehicles. These funds must be used by September 30, 2011. GSA’s April plan to Congress states that GSA intends to spend this funding by September 30, 2009, to help stimulate the economy and purchase more fuel-efficient vehicles. As of June 1, 2009, GSA officials told us that they had obligated $287.5 million, ordering 3,100 vehicles in April and 14,105 on June 1. Because GSA will spend most of the funding before many plug-ins are commercially available, it does not plan to purchase this technology, save for a few hundred neighborhood electric vehicles. Tax credits for consumers purchasing plug-ins: The Recovery Act established a tax credit to consumers for the purchase of a plug-in vehicle. The credit increases with the size of the battery up to $7,500 but is not applicable for vehicles over 14,000 pounds. 
In addition, the Recovery Act established a credit of up to $2,500 for two-wheeled, three-wheeled, and low-speed four-wheeled plug-in vehicles, such as neighborhood electric vehicles, and established a credit of 10 percent of the cost of converting a vehicle—up to $4,000—for the conversion of existing vehicles to run on battery power. One study has indicated that smaller batteries that are more frequently charged may be more cost-effective solutions for reducing greenhouse gas emissions, but this tax credit program benefits plug-ins with larger batteries. In addition, tax incentives aimed at consumers with the oldest and least fuel-efficient vehicles can encourage them to retire these vehicles and replace them with plug-ins, thus resulting in a greater public benefit than replacing vehicles with average or higher fuel economy. However, the existing tax credit program is not designed with the replacement vehicle in mind but rather focuses on encouraging the adoption of plug-ins regardless of the vehicles they would replace. Transportation electrification: DOE is utilizing $400 million of funding from the Recovery Act to support the integration of electric-drive vehicles and technologies into the United States’ transportation sector. The Funding Opportunity Announcement that was released by DOE on March 19, 2009, includes a request for proposals to establish wide-scale demonstrations of electric-drive vehicles, including plug-in hybrid electric and battery electric vehicles. Several additional steps the federal government could take to encourage the development, manufacture, and commercialization of plug-ins emerged consistently during our discussions with experts and reviews of recent literature. Most of these options would impose costs on the federal government or society at large and therefore would require additional analysis to determine whether the potential benefit would be worth the cost. 
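The conversion credit described above is simple enough to state as arithmetic; a minimal sketch follows (the function name is ours, and amounts are in dollars):

```python
def conversion_credit(conversion_cost):
    """Recovery Act credit for converting an existing vehicle to run on
    battery power: 10 percent of the conversion cost, capped at $4,000."""
    return min(conversion_cost * 0.10, 4_000.0)

print(round(conversion_credit(25_000)))  # 2500
print(round(conversion_credit(60_000)))  # 4000 (the cap applies above $40,000)
```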
To reduce cost and risk of investing in battery technology and manufacturing for auto manufacturers, the government could share the cost of honoring warranties for plug-in batteries. However, if batteries prove to be unreliable, the government would be exposed to additional costs. To mitigate consumer reluctance to buy vehicles from a financially distressed company, Treasury provided $280 million to Chrysler and $360 million to General Motors to back warranties of these companies. As of June 2009, Treasury officials noted that Chrysler and General Motors continue to support their warranties and Treasury believes that the money provided to them will be returned to Treasury. We were not able to find estimates of the cost of this approach if it were to be applied to plug-in vehicles. Furthermore, if such funding were directed to troubled manufacturers, these costs would be in addition to the $17.4 billion already provided by the government to Chrysler and General Motors through the Troubled Asset Relief Program. Such a program could also be used to assist start-up companies specializing in all-electric vehicles, but we were not able to estimate the potential risk to the government. To reduce the cost of batteries by broadening the market for lithium batteries, the federal government could encourage the development of secondary uses for battery packs. Industry officials told us that lithium-ion batteries can be used to store energy—for example, from renewable sources like wind—which could then be used during a period of peak demand. These officials noted that both new batteries, and batteries that no longer had a useful life for a plug-in vehicle but that nonetheless could still retain a charge, could be used for this purpose. 
However, power companies also stand to benefit from developing this technology, and officials from some of the companies with whom we spoke indicated they were exploring this idea, which suggests that if government refrains from sponsoring such development, the private sector may do so. To encourage the continued development of low-carbon electricity, the government could institute a carbon pricing program, such as a carbon cap-and-trade program or carbon tax. If a cap-and-trade program, or carbon tax, were applied to transportation fuels, it could make the life-cycle costs of plug-ins more competitive with other vehicles, depending on its effect in changing the price differential of gasoline relative to electricity. An energy bill that includes a carbon cap-and-trade program was introduced in the 111th Congress, and the administration has indicated an interest in supporting such a program. Some economists advocate using revenue from a cap-and-trade program to lower income taxes, which could offset some of the increased cost consumers would experience from higher fuel prices. To enhance consumer acceptance of the technology and once reasonably accurate information on the performance of plug-ins is available, the government could play a role in providing consumer education. At the most basic level, the government could provide information to help consumers make the decision to invest in plug-ins by, for example, showing the extent to which fuel savings may offset the initial higher cost of plug-ins. In addition, it could inform consumers of potential electrical updates that may be needed in a home, such as a dedicated circuit for charging a plug-in, to prevent consumers from becoming frustrated once they bring their vehicles home. Finally, the government could provide information to help consumers use the technology more wisely. 
For example, it could explain the effects of driving style on plug-in hybrid fuel economy and the potential cost savings of charging during off-peak hours. The government already provides similar types of information on vehicles through sources such as its fuel economy Web site. The government may also need to provide, and standardize the presentation of, some information on the performance of vehicles. For example, car companies are currently required to post EPA-validated fuel economy labels on new cars, but consumers may need other kinds of metrics about plug-ins, such as the length of time it takes to charge one with a 110- or 220-volt plug, and how far the different vehicles can go before they require charging or will begin to rely on gasoline for additional power. Such options could increase the regulatory role played by the federal government. However, EPA already plays a role in providing information on vehicle fuel economy and may be able to adapt current processes to include information on plug-ins. In the longer term, government could help facilitate smart charging by helping to develop the necessary infrastructure, which includes meters and standardized communications between power companies and consumers. This would help ensure the electrical grid could accommodate widespread use of plug-ins. Federal rules and regulations may be needed to support these standards. Once plug-ins become commercially available, agencies will face challenges related to cost, availability, planning, and federal requirements. Agencies may have difficulty making the decision to invest in these vehicles instead of less expensive gasoline vehicles, given that they have limited information to help them take the longer-term costs into account using life-cycle analysis. Agencies also have not formulated plans for incorporating plug-ins into their fleets, largely because information they would need is not yet available. 
Finally, agencies may have difficulty meeting the federal goal of acquiring plug-in hybrids, as it conflicts with some federal requirements and agencies lack guidance on how to negotiate this situation. Just as the high initial cost of plug-ins may hinder consumer adoption of these vehicles, it will also limit agencies’ ability to acquire them. Plug-ins are likely to cost significantly more than comparably sized gasoline-powered vehicles, and because the upfront cost of a vehicle is a key factor when agencies select a vehicle, federal customers will likely not be able to purchase or lease many of these vehicles without additional funding to help cover costs. Thus, as a practical matter, agencies’ budgets will determine the extent to which they can integrate plug-in hybrids and all-electric vehicles into their fleets. GSA typically negotiates with auto manufacturers for significantly discounted prices for the vehicles it purchases and leases for federal agencies—typically more than 40 percent below the manufacturer’s suggested retail price. (See app. II for more information on GSA procurement processes.) For example, GSA offers agencies a Ford F-150 pickup truck for $15,111 (about an $11,000 discount to the suggested retail price), a Chevrolet Cobalt for $12,600 (about a $2,400 discount), and a 4-cylinder Pontiac G6 for $14,000 (about a $6,000 discount). GSA officials did not think they would be able to obtain the usual discount for early plug-ins, since auto manufacturers are often reluctant to offer the same discounts for new model lines because they can better recover their start-up costs in the retail market. Therefore, since discounted plug-in hybrids will not likely be offered to the government, the cost differential between plug-ins and comparable vehicles—including other alternative fuel vehicles such as flex-fuel vehicles—could be even greater for the government than it would be for an individual consumer. 
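The discount figures above can be checked with simple arithmetic. In this sketch the MSRP is derived by adding the stated discount back onto the GSA price; it is not quoted directly in the text.

```python
def discount_percent(msrp, gsa_price):
    """Percentage discount off the manufacturer's suggested retail price."""
    return (msrp - gsa_price) / msrp * 100

# Ford F-150: a $15,111 GSA price plus the roughly $11,000 discount
# implies an MSRP near $26,111 (derived, not stated in the text).
print(round(discount_percent(26_111, 15_111), 1))  # about 42.1, i.e., "more than 40 percent"
```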
The additional expense of plug-in hybrids and all-electric vehicles could also make it more difficult to incorporate leased plug-ins into the fleet. GSA officials said that their authorization limits the agency’s ability to replace existing vehicles with plug-ins in its leasing program, at least initially. Because the high cost of plug-ins will strain the ability of GSA’s revolving fund to absorb costs over the life of the lease, GSA would need additional funding upfront to cover the higher costs of plug-ins. It could subsequently recover some of these costs by setting the lease rates for agencies at a level that would replenish these funds. However, this additional cost would make lease rates for plug-ins uncompetitive with lease rates for similarly sized vehicles. In addition, GSA determines its lease rates for vehicles based not just on the initial price but also on the price it can get for the vehicle in the used-car market. However, uncertainties regarding the resale value of plug-ins will make it difficult for GSA to lower the lease rate based on the amount of money it could recoup through resale. Executive Order 13423 directs agencies to begin purchasing plug-in hybrids once they are reasonably comparable on a life-cycle cost basis with conventional vehicles. A life-cycle cost analysis includes factors such as the expected total fuel and maintenance costs of a vehicle over the years that the agency would operate it. This helps the purchaser determine the best long-term value for the investment. The Federal Acquisition Regulation does not explicitly require agencies to perform life-cycle cost analysis for their acquisitions, including vehicles they acquire, although agencies are free to do so. Among the agencies we reviewed, the use of life-cycle cost analysis varied, and according to FEDFLEET, an organization representing federal fleet managers, most agencies do not use life-cycle costing when evaluating which vehicles to purchase. 
When selecting vehicles, fleet managers with whom we spoke said they primarily consider mission needs, upfront costs, and federal goals and requirements, rather than long-term savings. However, of the agencies we reviewed, only agencies within DOD—the Air Force, Navy, and Marine Corps—reported that they evaluate life-cycle costs to differentiate between multiple vehicles that meet the agencies’ needs. To conduct life-cycle cost analysis, agencies need access to information that would enable such an analysis, such as estimates of lifetime fuel economy and ongoing maintenance and repair data for specific vehicles. GSA officials told us that some information on life-cycle costs of vehicles is available through a database that houses information on fuel consumption reported by agencies, and that GSA Fleet would have some information on lifetime maintenance costs of some vehicles. In addition, life-cycle cost estimates for existing vehicles are available from public sources of automotive information. However, such information for specific vehicles is not readily available from GSA. For example, AutoChoice, a Web site developed by GSA to provide information to agencies on vehicles available for purchase, includes information about upfront costs and vehicle performance characteristics (such as engine size and fuel economy) but does not include information on total cost of ownership, such as estimated lifetime fuel or maintenance costs. For comparable conventional gasoline vehicles in the same class, differences in life-cycle costs may not be significant, but differences could arise when comparing a conventional gasoline-powered vehicle to a plug-in hybrid or all-electric vehicle, depending on a number of factors. However, since plug-in hybrids are not currently available in the marketplace, much of the information about their lifetime ownership costs is unknown. 
First, the fuel economy of planned plug-in hybrids has not been announced and will vary greatly depending on how agencies plan to use them. For example, plug-in hybrids used only within the all-electric range will use no gasoline at all, while plug-in hybrids used for long-distance driving may not offer fuel economy much better than a conventional hybrid or highly fuel-efficient gasoline-powered vehicle. Second, their maintenance costs could be significantly more or less than those of conventional technology. For example, failure of vehicle batteries—which will likely be the vehicles’ most expensive component—after warranties expire could entail significant costs for agencies. In addition, some maintenance issues may involve proprietary considerations or require additional specialized training for maintenance staff among agencies that service their own vehicles. Conversely, to the extent that plug-in vehicles will have fewer moving parts, they may offer significantly lower maintenance costs over the life of the vehicle. Finally, another important factor in determining vehicle life-cycle costs is resale value, which is also uncertain in the case of plug-ins. GSA officials said that past experience with advanced technology vehicles underscored the risk federal agencies might face when trying to resell the vehicles. For example, when GSA attempted to resell some of its compressed natural gas vehicles in the 1990s, there was no market for them and the resale value was essentially zero. By comparison, information from public sources of automotive data suggests that the projected value of a Toyota Prius, a conventional hybrid, will hold up well over time compared with similarly sized gasoline vehicles. We believe these uncertainties make it difficult for fleet managers to plan for the integration of plug-in hybrids in the early years of their commercialization and pose challenges for agencies in complying with the executive order. 
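A life-cycle cost comparison of the kind the executive order envisions can be sketched as follows. Every figure below is an illustrative assumption, since actual fuel, maintenance, and resale numbers for plug-ins are, as noted, unknown.

```python
def life_cycle_cost(purchase_price, annual_fuel_cost, annual_maintenance,
                    years_in_service, resale_value):
    """Undiscounted total cost of owning a vehicle over its service life:
    upfront price plus cumulative fuel and maintenance, less resale value.
    A fuller analysis would discount future costs to present value."""
    return (purchase_price
            + years_in_service * (annual_fuel_cost + annual_maintenance)
            - resale_value)

# Hypothetical 7-year comparison (all figures assumed):
gasoline_vehicle = life_cycle_cost(20_000, 2_000, 600, 7, 6_000)  # 32200
plug_in_hybrid = life_cycle_cost(30_000, 700, 500, 7, 8_000)      # 30400
print(gasoline_vehicle, plug_in_hybrid)
```

Under these assumptions the plug-in's fuel savings outweigh its price premium, but modest changes to the maintenance or resale assumptions reverse the result, which is precisely the planning difficulty described in the text.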
In addition, to compare plug-in hybrids with other vehicles available to them, agencies will need to make certain assumptions that can materially affect the estimation of whether the vehicles are comparable on a life-cycle cost basis. For example, factors such as agency policies about when and how often vehicles are charged, driving behavior and the types of trips plug-in hybrids are predominantly used for, and the potential need for training to service the vehicles can all influence the costs of the vehicle to the agency over its lifetime. Currently, there is no guidance on how to deal with these uncertainties, and little further information about the performance of the vehicles is available. GSA and DOD have started to explore options that would allow the agencies to acquire and use neighborhood electric vehicles while minimizing some of the risk associated with the uncertainties described above. Specifically, GSA, on behalf of the Department of the Army, is currently negotiating “pass-through lease agreements” in which it would lease neighborhood electric vehicles directly from manufacturers and pass the leases on to the customer. In its effort to reduce petroleum consumption, the Army would like to order 4,000 neighborhood electric vehicles over a 3-year period beginning in 2009 and replace gas-powered vehicles, where appropriate, on a one-for-one basis. Leasing, rather than purchasing, the neighborhood electric vehicles will help mitigate risks associated with their maintenance and their minimal resale value, according to GSA and DOD officials. The cost of the leases could be higher if manufacturers adjust the rate to account for risk associated with expected costs and performance of plug-in vehicles. However, if the government leased these vehicles, it would avoid the liabilities of ownership, especially with regard to the maintenance and resale challenges GSA and federal agencies would otherwise face. 
GSA has not yet explored the possibility of leasing other plug-ins directly from manufacturers; however, GSA officials thought this option would be worth exploring. Auto manufacturers may not make a high volume or wide range of plug-in vehicle models available to the federal government. The vehicles GSA is able to provide to its customers are limited to the models automakers are willing to sell to the government. Those offered have generally been limited to models that have been on the market for several years and are no longer at the peak of their retail sales. In addition, foreign manufacturers historically have not entered into procurement contracts with GSA. GSA officials informed us that although they have regularly pursued discussions with Toyota and Honda, both manufacturers have declined to submit proposals because of franchising and licensing agreements with their dealers in the United States. Of the large manufacturers that have announced plans to market plug-in hybrids in the next several years, only GM has said it would make these available to the government, but it has not indicated the quantities it would provide. The availability of plug-ins through smaller start-up manufacturers is also uncertain. For example, Phoenix Motorcars is marketing its all-electric pickup truck and SUV to fleets, and its first production run is scheduled to begin in 2009. GSA officials noted, however, that the Phoenix vehicles were not yet in production when it met with auto manufacturers to plan for fiscal year 2010. Almost all of the agency officials we interviewed stated they have not developed plans for incorporating plug-ins into their fleets, in some cases because of the uncertainties surrounding plug-ins. The Government Performance and Results Act of 1993 (GPRA) requires executive branch agencies to clearly establish their missions and goals. 
In guidance GAO developed to assist agencies in implementing GPRA, we stated that plans can help clarify organizational priorities and unify agency staff in pursuit of shared goals, like integrating plug-ins into the federal fleet. These plans must also be updated to reflect changing circumstances and should include a number of key elements, such as (1) approaches for achieving long-term goals; (2) linkages to goals; (3) frameworks for aligning agency activities, processes, and resources to attain goals; (4) consideration of external factors; and (5) reliable performance data needed to set goals, evaluate results, and improve performance. Agency officials told us that the uncertainties surrounding plug-ins, as discussed throughout this report, prevent them from developing plans for integrating plug-ins into their fleets. For example, agency officials reported that the performance characteristics of plug-ins—such as fuel economy, length of time to charge, and range—are still in question. While there is some preliminary information on performance characteristics and potential benefits, agencies cannot determine with certainty whether the vehicles will meet their mission needs, which is one of the most important criteria in purchasing vehicles. In addition, according to FEDFLEET, plug-in hybrids are a suitable option for agencies located in metropolitan areas, on military bases, and at federal centers, but agency fleet managers noted that plug-in hybrids may not be appropriate for agency missions located in remote areas or that require long-distance driving without assurance that charging infrastructure will be accessible. Finally, the compact size of the first plug-in hybrids expected on the market may be problematic. 
For example, USPS officials stated that they are unlikely to acquire plug-in hybrids with limited cargo capacity, such as the Chevy Volt, but viewed plug-in vans with larger cargo space as an option. Agencies are also uncertain how to plan for the integration of plug-ins because they have not determined whether additional charging infrastructure would be needed at federal facilities to accommodate the use of plug-ins. The first generation of plug-ins is expected to use ordinary plugs and outlets to recharge the vehicles, and agency officials expected that small numbers of plug-ins would not pose considerable infrastructure challenges. However, many agency officials we interviewed stated that they had yet to conduct any assessment of their current facilities to determine the extent to which they could support plug-ins and, thus, what modifications might be necessary. For example, according to several agency officials, federal agencies located in a commercially leased space may not have access to additional electrical infrastructure necessary to support vehicle charging, or the building owner may not be willing to provide it. Also, as the number of plug-ins used by federal agencies increases, it will likely become necessary to upgrade the facility’s electrical service to accommodate the growing demand. In addition, some agencies with their own charging facilities may need to collaborate with the local utility to ensure transformers serving the building can manage additional load. Agencies may also need to collaborate with local power companies and be prepared to install smart charging capability to ensure that electrical power is being used in the most efficient manner possible. Finally, some officials emphasized that they may need funding for additional infrastructure, such as charging stations. 
Because of these uncertainties, agency officials informed us that it would be extremely difficult to develop a plan that successfully incorporates plug-ins into their mission and uses these vehicles as effectively as possible. Agencies also face a challenge posed by the patchwork of existing federal requirements that covers energy use and vehicle acquisitions. In deciding whether to acquire plug-in hybrids and all-electric vehicles, agencies must also consider how this decision will affect their ability to meet these other requirements, some of which conflict with one another. These requirements are intended to further several important objectives, including reducing petroleum consumption and encouraging the use of alternative fuel vehicles and alternative fuel in the federal fleet. However, the current set of requirements does not provide agencies with a means to set priorities for these objectives and make complex decisions such as what vehicles to acquire under what circumstances. Using plug-in vehicles could create several challenges related to meeting energy reduction and fuel consumption goals. Consumption of electricity by plug-ins could conflict with energy reduction requirements for facilities: Under Executive Order 13423 agencies are expected to reduce energy intensity in federal facilities by 3 percent per year through the end of fiscal year 2015; further, EISA requires a reduction in energy intensity in facilities by 30 percent by the end of fiscal year 2015, relative to the baseline of their energy use in fiscal year 2003. Energy intensity is defined as energy consumed per gross square foot of facilities. Because plug-ins are expected to rely on electricity from federal facilities while charging, they could increase energy consumption, particularly if plug-ins are used in large numbers. Such an increase could create a conflict with the requirement in EISA for federal facilities to reduce energy consumption of facilities. 
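The energy-intensity metric defined above, and the way unmetered vehicle charging would inflate it, can be sketched as follows; all figures are illustrative assumptions.

```python
def energy_intensity(total_energy_btu, gross_square_feet):
    """Energy intensity as defined for the facility requirement:
    energy consumed per gross square foot."""
    return total_energy_btu / gross_square_feet

# Hypothetical facility: if plug-in charging is not metered separately,
# it is counted as facility consumption and raises the reported intensity.
facility_btu = 50_000_000_000   # assumed annual facility consumption
charging_btu = 500_000_000      # assumed electricity drawn by plug-ins
sq_ft = 500_000

baseline = energy_intensity(facility_btu, sq_ft)
reported = energy_intensity(facility_btu + charging_btu, sq_ft)
print(round((reported - baseline) / baseline * 100, 1))  # 1.0 percent apparent increase
```

Without a separate meter, that 1 percent would count against the facility's reduction target even though it reflects vehicle fuel, not building use.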
If agencies do not have a means to determine the electricity used by plug-ins, they will have no way of subtracting vehicle usage from facility usage to track their progress in meeting the facility requirement. Without a means to measure electricity used to “fuel” plug-ins, agencies may underestimate progress toward alternative fuel consumption requirements: EISA requires agencies to increase alternative fuel use by 10 percent annually. The electricity used to charge plug-in hybrids and all-electric vehicles, except neighborhood electric vehicles, can count toward this requirement. But according to agency officials, facilities are generally not equipped with dedicated meters or other means of measuring the amount of electricity used by vehicles. According to the DOE official responsible for federal fleet policy, electricity used by plug-in hybrids and all-electric vehicles could be estimated, but there is currently no guidance for how to do this. The lack of guidance regarding alternative fuel use for plug-in hybrids could hamper agencies’ ability to meet the 100-percent alternative fueling requirement: EPAct 2005 requires that alternative fuel vehicles be fueled with alternative fuel 100 percent of the time, unless they qualify for a waiver. In the case of flex-fuel vehicles that are fueled by ethanol (E85) and gasoline, agencies can qualify for a waiver to use gasoline in flex-fuel vehicles if E85 is not readily available or costs too much. DOE guidance allows exceptions under certain conditions—for example, agencies may use gasoline, instead of E85, to complete the mission at hand if E85 is unavailable. According to DOE officials, similar guidance will be necessary to address conditions when alternative fuel, specifically electricity, is unavailable for plug-in hybrids. 
The lack of guidance regarding the electricity used by neighborhood electric vehicles could lead to inaccuracies in alternative fuel consumption reporting: According to DOE, neighborhood electric vehicles do not qualify as alternative fuel vehicles under EPAct 1992. However, because neighborhood electric vehicles are fueled with electricity, without a means of accounting for their electricity use separately from that of plug-in hybrids and other all-electric vehicles, agencies could be improperly counting the electricity used by neighborhood electric vehicles as alternative fuel. Neighborhood electric vehicles can, however, help agencies meet their petroleum reduction targets, and DOD and GSA plan to put more of these vehicles into use. DOE has not provided guidance to agencies on this subject. DOE’s official responsible for fleet policy noted that because so few neighborhood electric vehicles have been used to date, the lack of policy has not been a problem. Now that neighborhood electric vehicles are becoming more popular, he said, DOE has begun developing guidance specifying how to account for the electricity used in neighborhood electric vehicles. In addition, the various federal requirements that pertain to energy use and vehicle acquisitions do not provide agencies with a clear way to set priorities and effectively address conflicts between these requirements. Until they are more affordable, plug-ins are unlikely to be the most cost-effective type of AFV for reducing petroleum consumption: EPAct 1992 requires that at least 75 percent of all new vehicle acquisitions by agencies for EPAct-covered fleets be alternative fuel vehicles. In addition, EISA requires agencies to reduce petroleum consumption. Acquiring plug-ins would be helpful in meeting both requirements. However, agencies would be able to replace more of their older, less-efficient vehicles by acquiring either less costly AFVs or fuel-efficient gasoline-powered vehicles. 
Depending on the circumstances, acquiring plug-ins could limit an agency’s ability to meet the requirement to reduce petroleum consumption. The new requirement to acquire low-emission vehicles creates an additional priority that agencies must manage: EISA directs agencies to procure only low-emission greenhouse gas vehicles, and EPA is in the process of developing a definition for these vehicles. DOE officials noted that the EISA requirement may be at odds with the AFV acquisition requirement because most AFVs in use today, particularly flex-fuel vehicles, meet the EISA emissions requirement only if they are fueled with alternative fuel, not gasoline. In addition, the amount of emissions produced by a plug-in hybrid depends in part on the source of energy used to generate electricity, as well as how much gasoline it consumes. Once agencies have guidance defining low-emission vehicles, they may face similar conflicts in trying to meet the various vehicle acquisition requirements and goals. Finally, in our 2008 report, which addressed the extent to which agencies were making progress toward meeting federal fleet energy objectives, we found several additional conflicts agencies experienced in trying to meet all of the current regulations. For example, we found that while agencies were able to meet the alternative fuel vehicle acquisition requirement, they were highly unlikely to be able to meet the alternative fuel use requirement because of a limited supply of alternative fuel and an inadequate alternative fuel infrastructure. These issues were also factors in some agencies’ inability to meet the petroleum requirements for fiscal year 2007. Accordingly, we suggested that Congress consider aligning the federal fleet AFV acquisition and fueling requirement with current alternative fuel availability and revising those requirements as appropriate. 
As federal agencies work to cost-effectively comply with requirements and goals for conserving energy in their facilities and vehicle fleets, a number of uncertainties hinder their efforts. Although, by making these requirements statutory, Congress signified the importance of acquiring alternative fuel vehicles, using alternative fuel, decreasing petroleum use, decreasing greenhouse gas emissions, and improving energy efficiency in facilities, the requirements can be costly and are sometimes in conflict. As a result, agencies are uncertain about setting priorities and struggle to meet the overall intent of these requirements and goals. Executive Order 13423’s directive to incorporate plug-in hybrids into fleets adds to the agencies’ struggle to balance requirements and goals within their budgets. Without having clear priorities for the patchwork of requirements that compete for funding, agencies may miss opportunities to effectively use new technologies and maximize petroleum reduction. Alternatively, agencies may opt to meet the requirements that are most feasible for them, regardless of whether the actions match the priorities of Congress. In the past, agencies chose among vehicles with internal combustion engines, which simplified the process of comparing the cost of vehicles and making cost-effective choices. With the advent of plug-in hybrids and all-electric vehicles, as well as new requirements such as reducing greenhouse gas emissions and petroleum consumption, the process has become more complicated. For several reasons, agencies lack information critical to making informed vehicle acquisition decisions that will meet energy-conservation requirements in a cost-effective manner. Specifically, agencies lack (1) data on how the different configurations of plug-ins will affect the costs of the vehicles over their life cycles, (2) strategic plans for how they will incorporate plug-in vehicles, and (3) guidance on how to account for the electricity plug-ins will use. 
Plug-ins will be expensive relative to other vehicles until battery costs come down and challenges such as achieving economies of scale are met. These high upfront costs will prevent agencies from including plug-ins in large numbers in their fleets without additional funding. Furthermore, agencies will also be hindered from incorporating plug-ins because of uncertainties regarding their performance, the maintenance and reliability associated with the vehicles’ batteries, and the resale value of the vehicles. Exploring the option of leasing the vehicles directly from manufacturers could help mitigate these risks and allow agencies to experiment with how well the vehicles perform within their fleet. To enable agencies to more effectively meet congressional requirements, we recommend that the Secretary of Energy, in consultation with EPA, GSA, OMB, and organizations representing federal fleet customers such as INTERFUEL, FEDFLEET, and the Motor Vehicle Executive Council, propose legislative changes that would resolve the conflicts and set priorities for the multiple requirements and goals with respect to reducing petroleum consumption, reducing emissions, managing costs, and acquiring advanced technology vehicles. We recommend that the Secretary of Energy begin to develop guidance for when agencies consider acquiring plug-in vehicles, as well as guidance specifying the elements that agencies should include in their plans for acquiring the mix of vehicles that will best enable them to meet their requirements and goals. Such guidance might include assessing the need for installing charging infrastructure and identifying areas where improvements may be necessary, mapping current driving patterns, and determining the energy sources used to generate electricity in an area. 
We also recommend that the Secretary of Energy continue ongoing efforts to develop guidance for agencies on how electricity used to charge plug-ins should be measured and accounted for in meeting energy-reduction goals related to federal facilities and alternative fuel consumption. In doing so, the Secretary should determine whether changes to existing legislation will be needed to ensure there is no conflict between using electricity to charge vehicles and requirements to reduce the energy intensity of federal facilities, and advise Congress accordingly. We recommend that the Administrator of the General Services Administration consider providing information to agencies regarding total cost of ownership or life-cycle cost for vehicles in the same class. For plug-in vehicles that are newly offered, the Administrator should provide guidance for how agencies should address uncertainties about the vehicles’ future performance in estimating the life-cycle costs of plug-ins, so agencies can make better-informed, consistent, and cost-effective decisions in acquiring vehicles. We also recommend that, once plug-in hybrids and all-electrics become available to the federal government but are still in the early phases of commercialization, the Administrator of GSA explore the possibility of arranging pass-through leases of plug-in vehicles directly from vehicle manufacturers or dealers—as is being done with DOD’s acquisition of neighborhood electric vehicles—if doing so proves to be a cost-effective means of reducing some of the risk agencies face associated with acquiring new technology. We provided a draft of this report to DOD, DOE, EPA, GSA, OMB, and USPS for review and comment. The audit liaisons from DOD, EPA, and USPS each provided comments via e-mail, and each agreed with the report findings and recommendations. In addition, EPA and USPS provided technical comments, which we incorporated into the draft. 
The Acting Administrator of GSA provided written comments and agreed with the findings and recommendations pertaining to GSA. The Deputy Associate Administrator for Procurement and Senior Budget Analyst responded orally on behalf of OMB and stated that OMB had no comment on the report’s findings and recommendations. DOE did not provide comments on our report within the 30-day review period. We are sending copies of this report to interested congressional committees and the Secretary of Defense, the Secretary of Energy, the Administrator of the Environmental Protection Agency, the Acting Administrator of the General Services Administration, the Director of the Office of Management and Budget, and the Postmaster General and Chief Executive Officer of the United States Postal Service. In addition, this report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact Susan Fleming at [email protected] and (202) 512-2843 or Mark Gaffigan at [email protected] and (202) 512-3841. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. The scope of our work included all of the various plug-in hybrid electric vehicle designs as well as the full range of plug-in electric vehicles that are currently in development or already on the market. We defined this set of vehicles as “plug-ins” since they derive part or all of their energy from plugging into an electricity source. Although the United States Postal Service (USPS) is not subject to Executive Order 13423 as are other federal agencies, our review encompassed the fleet operations of USPS because of its size, its past experience in testing electric vehicles, and the potential of that fleet to utilize plug-in technologies. 
In addition, USPS officials indicated that they will try to comply with the executive order even though they are not required to do so. To inform each of our objectives, we conducted nine site visits with organizations that have test fleets of plug-in hybrids and all-electric vehicles (see table 5). To identify the potential benefits and trade-offs of plug-ins, we interviewed officials from power companies and other entities, such as the National Laboratories, currently testing plug-ins. We also reviewed data from these organizations on the performance of plug-ins when it was available. We analyzed the results of published studies from academic research centers and others that evaluated the potential benefits plug-ins can offer with respect to issues such as reducing fuel consumption and greenhouse gas emissions and identified trade-offs plug-ins could entail compared with other alternative fuel vehicles and conventional gasoline vehicles. In addition, we used these articles to identify changes in current conditions—such as shifting power sources used to produce electricity from fossil fuels to low carbon energy sources—that would be needed to ensure that plug-ins realized their potential. To determine the current status of plug-ins, in February 2009 we obtained information directly from Chrysler, Ford, General Motors, Phoenix Motorcars, and the Association of International Automobile Manufacturers. We also reviewed published material on Web sites of a variety of smaller manufacturers, such as Tesla Motors, Fisker Automotive, and others about the plug-ins that those manufacturers plan to bring to market. 
To understand plug-in vehicle and battery development and identify any potential challenges to the development and commercialization of these technologies, we interviewed, and reviewed documents from, a wide variety of stakeholders, including auto manufacturers, battery manufacturers, Department of Energy (DOE) and Environmental Protection Agency (EPA) officials, National Laboratory researchers, power companies, charging infrastructure equipment companies, and others. We reviewed published research related to plug-in technology, such as studies on vehicle and battery performance and consumer acceptance of plug-in technology. In addition, to examine the impact of rising or falling gasoline prices relative to electricity prices, different battery costs and prices of vehicles, and different assumptions regarding maintenance expenses and resale values, we developed a model to attempt to estimate the life-cycle costs and cost-effectiveness of plug-ins relative to conventional hybrids and conventional gasoline-powered vehicles. While our modeling effort highlighted the importance of certain variables, such as battery cost, because of the significant uncertainties regarding the estimates used in these models, we do not report specific results. We also tracked and analyzed developments related to the current economic crisis and financial stress facing the auto industry and the potential impact this crisis could have on plug-in vehicle development. In addition, we reviewed programs and incentives the federal government is using to assist auto manufacturers in developing and commercializing plug-ins, as well as incentives offered to consumers to purchase plug-ins. 
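A drastically simplified version of such a life-cycle comparison might look like the sketch below. Every parameter value (purchase prices, fuel prices, mileage, electric-driving share, resale values) is a hypothetical assumption chosen for illustration; this does not reproduce GAO's model or its inputs.

```python
# Hypothetical life-cycle cost comparison of a plug-in hybrid versus a
# conventional gasoline vehicle. All parameter values are illustrative
# assumptions; GAO's actual model and inputs are not reproduced here.

def lifecycle_cost(purchase_price, annual_miles, years,
                   gas_price, mpg, elec_price=0.0, kwh_per_mile=0.0,
                   electric_fraction=0.0, resale_value=0.0):
    """Total cost of ownership: purchase + fuel + electricity - resale."""
    gas_miles = annual_miles * (1 - electric_fraction)
    elec_miles = annual_miles * electric_fraction
    fuel_cost = years * (gas_miles / mpg) * gas_price
    elec_cost = years * elec_miles * kwh_per_mile * elec_price
    return purchase_price + fuel_cost + elec_cost - resale_value

conventional = lifecycle_cost(
    purchase_price=22_000, annual_miles=12_000, years=6,
    gas_price=3.00, mpg=28, resale_value=6_000)

plug_in = lifecycle_cost(
    purchase_price=32_000, annual_miles=12_000, years=6,
    gas_price=3.00, mpg=45, elec_price=0.11, kwh_per_mile=0.30,
    electric_fraction=0.60, resale_value=8_000)  # resale value highly uncertain

print(f"conventional: ${conventional:,.0f}")
print(f"plug-in:      ${plug_in:,.0f}")
# With these assumptions, fuel savings do not offset the higher purchase
# price; the outcome flips as battery costs fall or gasoline prices rise,
# which is why those variables dominate any such analysis.
```

The design point is that the result hinges on a handful of uncertain inputs, mirroring why the report declines to publish specific model results.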
To determine the options that exist for the federal government to address challenges in the development, manufacture, and commercialization of plug-ins, we analyzed and synthesized the views of a wide range of stakeholders from interviews, published studies, and analyses regarding options for federal involvement. To ensure the studies we considered were of sufficient scientific rigor, we limited our review to articles published in well-respected peer-reviewed journals or those provided by experts or organizations because of their level of expertise in this area. Articles using cost-benefit analysis to describe the relative benefits of plug-ins were reviewed by an economist. The options selected for discussion represent those supported by many of these experts. In addition, we considered the potential costs the options could pose to the federal government as well as what role the government might play relative to other stakeholders who also stand to benefit from this technology. Inherently there are certain limitations and variances in the quality of information available about these options. Therefore, we used professional judgment in identifying the relative benefits and limitations of these options. In addition, we identified steps already taken by DOE and others to hasten the development of plug-ins and reviewed recent legislation, including the Recovery Act, to describe the most recent actions taken by the government to forward this technology. To describe how agencies are addressing the requirement to integrate plug-in hybrids into the federal fleet, we reviewed and analyzed plans and analyses prepared by the Department of Defense (DOD), DOE, EPA, the General Services Administration (GSA), USPS, and other agencies, selected to represent a mixture of large and small fleets, vehicle use patterns, and fleet types, and conducted interviews with fleet managers from those agencies. 
We also interviewed officials from the Office of Management and Budget to understand their role in overseeing agency compliance with federal energy and fleet requirements and goals, including Executive Order 13423. To identify challenges related to integrating plug-in hybrids or all-electric vehicles into federal fleets, we interviewed fleet managers from DOD— including those of the Army, Navy, Air Force, and Marines—DOE, GSA, and USPS. We also attended and held discussions with those attending federal fleet manager meetings (FEDFLEET) organized by GSA. Furthermore, we used in-depth discussions with fleet managers from the selected agencies and our discussions at the FEDFLEET meetings to examine the life-cycle costing methodologies used by fleet managers to select vehicles for their fleets. To understand how alternative fuel vehicles and others are priced and made available to the federal fleet, we reviewed GSA’s procurement process. We also analyzed and compared the requirements contained in various legislative mandates and executive orders related to (1) federal fleet use of alternative fuels, (2) reductions in agencies’ overall energy consumption, and (3) increased reliance on renewable energy sources to identify how integrating plug-ins might help or hinder agencies’ efforts to meet these requirements. Because we did not interview managers from all of the agencies operating vehicle fleets, our findings are not applicable to all federal agencies. Federal agencies are required by regulation to purchase all nontactical vehicles through the General Services Administration (GSA), which leverages its status to procure vehicles at significant discounts. The United States Postal Service (USPS) is not subject to GSA’s purchase restrictions as USPS can purchase its own vehicles or use GSA’s services to do so. 
Motor vehicle supply activities are largely carried out by two units within GSA’s Federal Acquisition Service—GSA Automotive, which is responsible for contracting with manufacturers and other suppliers for nontactical motor vehicles, and GSA Fleet, which leases a broad range of vehicles to federal customers and other eligible entities. Using the previous year’s purchase as a baseline, GSA Automotive contracts with auto manufacturers and other suppliers to procure vehicles for federal customers. This annual process begins each winter with discussions between GSA and auto manufacturers about anticipated federal needs, future vehicle availability, and any changes that have been made to federal vehicle standards. The purpose of the standards is to establish a practical degree of standardization within the federal fleet. The standards are organized by class of vehicle––such as sedans, trucks, and buses––and outline minimum criteria for vehicle characteristics such as engine horsepower, cabin space, and safety features. GSA publishes the standards and encourages the manufacturers to identify models they could offer at a competitive price to the government that meet or exceed the standards. If GSA modifies the standards to address, for example, new federal mandates or goals, GSA publishes a draft version and provides a comment period for stakeholders before finalizing them. Once the standards are finalized, GSA Automotive issues at least five solicitations in FedBizOpps, the government portal for federal procurement opportunities, covering specific types of vehicles such as sedans, trucks, buses, and ambulances; reviews proposals; and awards contracts in time for the beginning of the fiscal year on October 1. The contracts are typically indefinite quantity/indefinite delivery contracts. 
Although the Federal Acquisition Regulations do not limit federal motor vehicle procurement to purchasing vehicles made by domestic manufacturers, historically only domestic automakers have submitted proposals. According to GSA officials, the agency may contract with auto manufacturers from any country with which the United States has a trade agreement if the order is greater than $194,000. However, foreign manufacturers have not submitted proposals in the past, citing franchise and licensing agreements with their domestic dealers as preventing direct sales of vehicles to the government. Nonetheless, some foreign manufacturers have encouraged their dealers to contract with GSA, which has allowed GSA to procure vehicles made by foreign manufacturers in limited numbers through domestic dealers. GSA is required by law to recover all costs it incurs in providing vehicles and services to federal customers. Since neither GSA Automotive nor GSA Fleet receives appropriations through the annual budget cycle, both the procurement and the leasing activities operate out of revolving funds that are reconciled each year. GSA Automotive awards contracts for vehicles and provides pricing information to agencies for evaluation, and agencies place orders against the awarded contracts using their agency funds. GSA Automotive applies a 1 percent surcharge to the final purchase price of each vehicle ordered. Similarly, GSA Fleet obligates money to GSA Automotive from its revolving fund to purchase the vehicles it leases to federal customers and recovers purchase and maintenance costs through lease fees and the resale of vehicles at the end of their life cycle. According to GSA Fleet officials, approximately 20 percent of the leased vehicles are replaced each year. GSA Fleet replaces the leased vehicles using several criteria, among which age and mileage are foremost. 
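The cost-recovery figures above lend themselves to a quick numerical sketch. The $25,000 purchase price is a hypothetical example; the 1 percent surcharge and the roughly 20 percent annual replacement rate come from the text.

```python
# Sketch of GSA's cost-recovery figures described above.
# The $25,000 purchase price is a hypothetical example.

purchase_price = 25_000
surcharge = purchase_price * 0.01          # GSA Automotive's 1 percent surcharge
total_billed = purchase_price + surcharge

annual_replacement_rate = 0.20             # ~20 percent of leased vehicles per year
implied_holding_years = 1 / annual_replacement_rate

print(f"surcharge on a ${purchase_price:,} vehicle: ${surcharge:,.0f}")
print(f"implied average holding period: {implied_holding_years:.0f} years")
```

The implied five-year average holding period is consistent with GSA Fleet's practice of auctioning most of its sedans within 5 years of purchase.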
Since GSA Fleet needs to recover its costs to maintain the solvency of its revolving fund, it auctions off most of its sedans, for example, within 5 years of their purchase. Agency-owned vehicles are usually retained for longer periods, as are larger vehicles such as trucks and buses. In addition to the contacts named above, Andrew Von Ah, Assistant Director; Karla Springer, Assistant Director; Lindsay Bach; Nabajyoti Barkakati; Charles Bausell; David Hooper; Joah Iannotta; John Johnson; SaraAnn Moessbauer; Madhav Panwar; and Crystal Wesco made key contributions to this report.

The U.S. transportation sector relies almost exclusively on oil; as a result, it causes about a third of the nation's greenhouse gas emissions. Advanced technology vehicles powered by alternative fuels, such as electricity and ethanol, are one way to reduce oil consumption. The federal government set a goal for federal agencies to use plug-in hybrid electric vehicles--vehicles that run on both gasoline and batteries charged by connecting a plug into an electric power source--as they become available at a reasonable cost. This goal is on top of other requirements agencies must meet for conserving energy. In response to a request, GAO examined the (1) potential benefits of plug-ins, (2) factors affecting the availability of plug-ins, and (3) challenges to incorporating plug-ins into the federal fleet. GAO reviewed literature on plug-ins, federal legislation, and agency policies and interviewed federal officials, experts, and industry stakeholders, including auto and battery manufacturers. Increasing the use of plug-ins could result in environmental and other benefits, but realizing these benefits depends on several factors. Because plug-ins are powered at least in part by electricity, they could significantly reduce oil consumption and associated greenhouse gas emissions. 
For plug-ins to realize their full potential, electricity would need to be generated from lower-emission fuels such as nuclear and renewable energy rather than the fossil fuels--coal and natural gas--used most often to generate electricity today. However, new nuclear plants and renewable energy sources can be controversial and expensive. In addition, research suggests that for plug-ins to be cost-effective relative to gasoline vehicles the price of batteries must come down significantly and gasoline prices must be high relative to electricity. Auto manufacturers plan to introduce a range of plug-in models over the next 6 years, but several factors could delay widespread availability and affect the extent to which consumers are willing to purchase plug-ins. For example, limited battery manufacturing, relatively low gasoline prices, and declining vehicle sales could delay availability and discourage consumers. Other factors may emerge over the longer term if the use of plug-ins increases, including managing the impact on the electrical grid (the network linking the generation, transmission, and distribution of electricity) and increasing consumer access to public charging infrastructure needed to charge the vehicles. The federal government has supported plug-in-related research and initiated new programs to encourage manufacturing. Experts also identified options for providing additional federal support. To incorporate plug-ins into the federal fleet, agencies will face challenges related to cost, availability, planning, and federal requirements. Plug-ins are expected to have high upfront costs when they are first introduced. However, they could become comparable to gasoline vehicles over the life of ownership if certain factors change, such as a decrease in the cost of batteries and an increase in gasoline prices. Agencies vary in the extent to which they use life-cycle costing when evaluating which vehicle to purchase. 
Agencies also may find that plug-ins are not available to them, especially when the vehicles are initially introduced because the number available to the government may be limited. In addition, agencies have not made plans to incorporate plug-ins due to uncertainties about vehicle cost, performance, and infrastructure needs. Finally, agencies must meet a number of requirements covering energy use and vehicle acquisition--such as acquiring alternative fuel vehicles and reducing facility energy and petroleum consumption--but these sometimes conflict with one another. For example, plugging vehicles into federal facilities could reduce petroleum consumption but increase facility energy use. The federal government has not yet provided information to agencies on how to set priorities for these requirements or leverage different types of vehicles to do so. Without such information, agencies face challenges in making decisions about acquiring plug-ins that will meet the requirements, as well as maximize plug-ins' potential benefits and minimize costs. |
Enhanced retirement benefits for certain law enforcement personnel began in 1947, when legislation was enacted into law providing the Federal Bureau of Investigation (FBI) Special Agents with a change in qualification for retirement benefits to help the FBI better manage its workforce. In 1948, legislation was enacted that expanded the provision of enhanced retirement benefits to certain other federal officers whose duties were primarily the investigation, apprehension, or detention of persons suspected or convicted of offenses against the criminal laws of the United States and to certain law enforcement personnel who moved to a supervisory or administrative position. In 1956, the enhanced retirement benefits definition was amended to specifically include within the term “detention” the duties of certain federal correctional employees, such as those in the Bureau of Prisons. In 1974, legislation was enacted into law that provided a statutory definition of a LEO for retirement purposes within CSRS. This legislation also increased the accelerated annuity multiplier and contained mandatory retirement provisions. The enhanced benefits attempted to provide a LEO with a retirement plan whereby it would be economically feasible to retire at an earlier age with fewer years of service than regular civil service employees. Such benefits were also intended to assist the federal government with encouraging the maintenance of a young and vigorous law enforcement workforce through youthful career entry, continuous service, and early separation. According to OPM actuaries, as of April 2009, one out of five federal employees was covered by CSRS. At the end of fiscal year 2008, about 1.6 million persons were on the rolls of CSRS as retired and approximately 42,500 (about 3 percent) were receiving LEO retirement benefits. 
In 1986, a new retirement system for federal employees, the Federal Employees Retirement System (FERS), was established which, among other things, included provisions relating to LEOs, such as a different pension accrual formula, a mandatory LEO retirement age, and a related requirement limiting LEO coverage to only those positions that are physically demanding. More specifically, while the definition of a LEO under FERS generally mirrors the definition under CSRS, the FERS definition introduced and currently includes a rigorous duty standard. This provides that LEO positions be limited to those positions that are sufficiently rigorous that employment opportunities must be limited to young and physically vigorous individuals, as determined by the Director of OPM considering the recommendations of the employing agency. In 1988, the FERS LEO definition was amended to include two employee groups not determined by OPM to be covered by the definition, the Department of Interior Park Police and the U.S. Secret Service Uniformed Division. In general, neither the CSRS nor FERS LEO definitions have been interpreted by OPM to cover federal police officers. Implementing OPM regulations for CSRS and FERS provide that the respective LEO regulatory definitions, in general, do not include an employee whose primary duties involve maintaining order, protecting life and property, guarding against or inspecting for violations of law, or investigating persons other than those who are suspected or convicted of offenses against the criminal laws of the United States. At the end of fiscal year 2008, about 312,000 federal employees were retired and receiving benefits covered by FERS, with about 7,500 of them receiving LEO retirement benefits. Further elaboration on the history of the LEO definition can be found in appendix II. 
Generally, the retirement benefits received by federal LEOs and other law enforcement personnel receiving similar benefits are greater than those provided to most other federal employees, albeit over a shorter period of time due to the mandatory retirement age. Under both CSRS and FERS, the law provides a faster-accruing pension for LEOs than for most other federal employees. For example, CSRS LEO pension benefits accrue at 2.5 percent times the number of years of service for the first 20 years (50 percent), compared to an average of less than 2 percent per year (36.25 percent for regular federal employees) over that same 20-year period. At age 50 with 20 years of service, a CSRS LEO’s annuity is about 38 percent higher than the annuity of a regular federal CSRS employee. Under FERS, LEO benefits accrue at 1.7 percent per year for the first 20 years compared to 1 percent per year for regular federal employees (34 percent versus 20 percent). Thus, for those under FERS, the total defined benefit at 20 years of service is 70 percent higher for LEOs than for other federal employees. See appendix III for additional information on the accrual rates of LEOs and regular federal employees. The greater retirement benefits received by federal LEOs and other law enforcement personnel receiving similar benefits may reflect the fact that, as a group, LEO occupations are graded higher than the more occupationally diverse regular civil service occupations. LEOs may also receive additional credit toward basic pay for annuity computation from special pay provisions. For example, as shown in table 1 below, for those persons retiring in fiscal year 2008, the estimated typical annuity of an average LEO employee under FERS was over $17,000 more than that of an average non-LEO FERS annuitant (more than double). 
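The accrual arithmetic above can be illustrated with a short sketch using the FERS rates cited in this report; the high-3 average salary below is a hypothetical figure chosen only for illustration:

```python
# Illustrative FERS annuity arithmetic using the accrual rates cited in
# this report; the high-3 average salary is a hypothetical figure.
def fers_regular_accrual(years):
    """Regular FERS employees accrue 1 percent of high-3 per year."""
    return 0.01 * years

def fers_leo_accrual(years):
    """FERS LEOs accrue 1.7 percent per year for the first 20 years,
    then 1 percent per year thereafter."""
    return 0.017 * min(years, 20) + 0.01 * max(years - 20, 0)

high_3 = 80_000  # hypothetical high-3 average salary

leo_pct = fers_leo_accrual(20)      # 34 percent of high-3
reg_pct = fers_regular_accrual(20)  # 20 percent of high-3

print(f"LEO annuity at 20 years:     ${high_3 * leo_pct:,.0f}")
print(f"Regular annuity at 20 years: ${high_3 * reg_pct:,.0f}")
print(f"LEO defined benefit is {leo_pct / reg_pct - 1:.0%} higher")  # 70% higher
```

The same structure reproduces the CSRS comparison by swapping in the 2.5 percent LEO rate and the roughly 2 percent regular rate for the first 20 years.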
Along with more rapid pension accrual, LEOs generally can draw benefits earlier than the average federal employee under the same retirement system, face no penalties for early retirement, and, under FERS, receive more favorable treatment of cost-of-living adjustments. LEOs under both CSRS and FERS are subject to mandatory retirement provisions, whereas most other federal civilian employees are not. Specifically, as a means to maintain a youthful and vigorous workforce, a law enforcement officer is subject to mandatory retirement when the officer becomes 57 years of age or, in some cases, older than 57 if needed to complete 20 years of service as a LEO. Both CSRS and FERS personnel receiving LEO benefits may be retained for a short time beyond the mandatory retirement age under certain circumstances. While allowing for an individual to obtain the full 20 years of coverage needed to qualify for LEO benefits, agencies also set maximum entry age requirements for LEOs based on the age and service requirements for LEO mandatory retirement. Thus, the maximum entry age is typically 37 because it allows an employee to achieve 20 years of LEO service by age 57. Some agencies have extended their maximum hiring age for LEOs to around 40 to facilitate the hiring of certain highly skilled armed services veterans who have completed a military career. For example, CBP has implemented a maximum entry age of 40 for its Border Patrol Agent positions. In addition to retirement benefits, the pay of law enforcement personnel also varies. In general, federal white-collar jobs are assigned a General Schedule (GS) grade. Grades represent the level of difficulty, responsibility, and qualifications required of the person who fills that job. Pay varies within a grade level on the basis of 10 steps; employees receive step increases within a grade if they perform acceptably and have satisfied the waiting period requirement established for each step. 
LEOs within the GS system are entitled to higher rates of basic pay at grades GS-3 through GS-10, which increase pay by 3 to 23 percent above the normal federal government general schedule depending on grade level. Some LEOs are also entitled to law enforcement availability pay or administratively uncontrollable overtime pay. Availability pay is a regular supplement equal to 25 percent of the recipient’s adjusted rate of basic pay, subject to premium pay limitations. It is compensation generally fixed at 25 percent of the rate of basic pay for the position for the first 2 overtime hours on a regular workday and for additional irregular overtime hours. At agency discretion, certain employees may receive administratively uncontrollable overtime pay equal to 10 percent to 25 percent of their basic pay, with most recipients receiving a rate of 25 percent based on working an average of at least 9 hours of irregular overtime hours per week. Both availability pay and administratively uncontrollable overtime pay are to be counted as basic pay for computation of annuities, and as a result can increase the dollar value of an individual’s highest 3 earning years which are used to compute the annuity benefit amounts. In 2003, OPM established special rates for many GS police officers not considered to be LEOs by definition, because their primary duties involved maintaining order and protecting life and property as opposed to primarily involving the investigation, apprehension, or detention of individuals suspected or convicted of offenses against the criminal laws of the United States, which is a criterion in the definition. These special rates provide large increases at lower grades similar to the LEO special rates. At some grades and locations, the police special rates exceed the locality adjusted rates for LEOs at grades GS-3 through GS-10. 
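Because availability pay counts as basic pay for annuity computation, the 25 percent supplement flows through to the high-3 average used to compute the annuity. A minimal sketch of that effect follows; all salary figures are hypothetical, and the 34 percent factor is the 20-year FERS LEO accrual from this report:

```python
# Sketch of how the 25 percent availability-pay supplement, counted as
# basic pay, raises the high-3 average used for annuity computation.
# Salary figures are hypothetical.
AVAILABILITY_PAY_RATE = 0.25

def high_3_average(salaries, availability_pay=False):
    """Average of the three highest earning years; the optional
    availability-pay supplement is treated as basic pay."""
    multiplier = 1 + AVAILABILITY_PAY_RATE if availability_pay else 1
    top_three = sorted(salaries, reverse=True)[:3]
    return sum(s * multiplier for s in top_three) / 3

salaries = [70_000, 74_000, 78_000, 82_000]  # hypothetical career earnings

base = high_3_average(salaries)                            # 78,000
with_ap = high_3_average(salaries, availability_pay=True)  # 97,500

# Annuity at 20 years under the FERS LEO accrual (34 percent of high-3):
print(f"Without availability pay: ${0.34 * base:,.0f}")
print(f"With availability pay:    ${0.34 * with_ap:,.0f}")
```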
As of fiscal year 2008, there were approximately 2,800 law enforcement personnel receiving special pay rates without receiving enhanced retirement benefits. For additional information on some of the various pay systems that cover law enforcement personnel, see appendix IV. Before a group of employees may receive enhanced retirement benefits through the administrative process, the employing agency must make an administrative determination as to whether the group meets the statutory and regulatory definitions relating to a LEO and submit that determination to OPM. In recent years, several employee groups and unions representing law enforcement-related personnel who have not been found by their employing agencies and OPM to meet the applicable LEO definitions have sought to obtain enhanced retirement benefits directly through separate legislation. As part of the administrative process, agencies with law enforcement missions determine which occupations or employee groups are necessary for accomplishing their missions, taking into account the agency’s overall authorized level of resources and appropriations. As part of this determination, an agency head decides whether a particular position should be approved for LEO retirement coverage. If an agency determines the need for new positions that meet the statutory and regulatory definitions relating to a LEO (and therefore could receive enhanced retirement benefits, special pay or salary provisions, and be subject to a mandatory retirement age), the agency sends a notice to OPM. This notice is to include, for example, the title of the position, the number of incumbents, whether the position is a supervisory or administrative position, whether it is a rigorous position, and the maximum entry age for the position. With certain exceptions, OPM may, at its discretion, review the position description to determine if it meets certain aspects of the statutory LEO definition. 
According to OPM officials, there is no requirement for a discussion between the agency head and OPM prior to an agency head’s decision and the issuance of a notice about such an administrative determination to OPM. OPM officials also stated that OPM has received hundreds of LEO retirement coverage notices, covering probably thousands of positions, over the last 10 years, and reviews about six position descriptions a month. OPM officials stated that OPM rarely overrules an agency head’s decision but maintains the authority to do so. OPM officials noted a case in the late 1990s in which they had reviewed the Secretary of Energy’s decision to grant Nuclear Materials Couriers LEO status and accompanying benefits and overturned the Secretary’s decision because these positions did not meet the applicable LEO definitions. OPM officials, however, could not provide us with data on how often OPM overrules an agency head’s decision granting LEO status and retirement benefits. As of fiscal year 2008, approximately half of federal employees receiving enhanced retirement benefits have been found to meet the applicable LEO definitions and are accruing such benefits as a result of the administrative process. Select employee groups that have been found to meet the LEO definitions and receive enhanced retirement benefits through agency determinations and OPM’s administrative process are shown in table 2. An individual employee who feels wrongly excluded from LEO retirement provisions may, for example, appeal to the MSPB an agency’s final decision denying the employee’s request for approval of a position as rigorous. According to MSPB officials, they periodically review employee appeals related to LEO coverage but noted that the number of such appeals has decreased in the last couple of years. An employee may also appeal final MSPB decisions to the U.S. Court of Appeals for the Federal Circuit. 
Overall, at the department level, DHS and DOJ human capital officials, as well as IRS officials, supported the use of the administrative process for determining who meets the LEO definitions and who receives LEO retirement benefits because they felt this process worked well and met the needs of their departments. Employee groups who have not been determined to meet the definitions of a LEO but believe they deserve similar benefits have sought these benefits directly through legislative action. For example, as noted above, Nuclear Materials Couriers were denied LEO status by OPM but, with support from the Department of Energy, were eventually provided with enhanced retirement benefits similar to those received by LEOs directly through legislation. In most cases, the recent efforts of those employee groups seeking enhanced retirement benefits have been led by unions or other organizations representing the interested employee groups, not the employing agencies. The employing departments and agencies generally have determined that the groups seeking LEO benefits through direct legislation do not meet the LEO definitions and do not qualify for the benefits. For example, various pieces of legislation were introduced in the 110th Congress that would have provided such benefits to approximately 25,000 additional employees. These employees include certain federal police who have not been found to meet the statutory LEO definition, Assistant U.S. Attorneys, CBP Agriculture Inspectors, and IRS Revenue Officers. When we discussed additional employee groups seeking enhanced retirement benefits directly through legislation, DHS human resource officials expressed concern regarding such proposals. Human resources officials of the Justice Management Division (JMD) of DOJ stated that they found such proposals problematic due to high, unfunded costs and the fact that the positions do not meet the statutory definition of law enforcement officer. 
Specifically, in reference to proposed legislation that would have provided enhanced retirement benefits to Assistant U.S. Attorneys, these officials stated that the duties of Assistant U.S. Attorney positions are not primarily the investigation, apprehension, or detention of individuals, nor are they related to the protection of officials of the United States against threats to personal safety. DOJ JMD officials added that Assistant U.S. Attorney duties also do not require the young and vigorous personnel essential to a law enforcement officer workforce. As of fiscal year 2008, approximately half of law enforcement personnel receiving enhanced retirement benefits did not receive these benefits through the application of the LEO definitional criteria by their employing agency and OPM via the administrative process, but received them directly through legislation that either (1) provided benefits similar to those received by LEOs or (2) added their occupation to the statutory LEO definition. Select employee groups receiving enhanced retirement benefits in these two ways are listed in table 3. Law enforcement-related employee groups that sought enhanced retirement benefits directly through legislation have cited the reduction of high attrition rates as a primary rationale for granting such benefits to those not currently receiving them. Other reasons cited include the need to provide equitable benefits to groups performing similar duties and the possibility that changing duties have exposed employees to greater risk. Although data exist that could provide some insight into attrition in the federal workforce as a means to inform decisions on retirement benefits, the groups requesting these benefits have not consistently provided these data to us. 
The additional short-term costs to a federal agency of providing enhanced retirement benefits for LEOs under FERS are higher than the costs of providing benefits to regular federal employees, raising questions about the ability of agencies to cover increased costs if additional employee groups receive such benefits. In addition, while the long-term costs to the federal government of providing enhanced LEO or similar retirement benefits for CSRS-covered staff are important, such costs are not included in the Congressional Budget Office scoring process. Finally, providing enhanced retirement benefits to certain employee groups directly through legislation has created perceived inequities across certain law enforcement-related occupations, and some agencies report that future action to provide enhanced retirement benefits to certain employee groups could affect their strategic workforce planning. In their petitions seeking benefits outside of OPM’s administrative process, organizations have cited a number of rationales for providing enhanced retirement benefits to the employees they represent. The primary rationale used by additional groups seeking benefits is that law enforcement-related personnel have high rates of attrition because they are not currently receiving enhanced retirement benefits. However, when we asked the employee groups and unions seeking enhanced retirement benefits for those they represent for data to substantiate this rationale, they did not consistently provide these data. To examine the validity of this rationale, we analyzed attrition rates by law enforcement status governmentwide to determine whether a relationship exists between attrition rates and enhanced retirement benefits. 
According to our analysis of CPDF data, law enforcement-related personnel not receiving enhanced retirement benefits typically have higher attrition rates than those law enforcement personnel receiving LEO or similarly enhanced retirement benefits, but lower than the attrition rates for general federal government employees. Specifically, the average government-wide attrition rate from fiscal years 2004 through 2008 for law enforcement-related personnel not receiving enhanced retirement benefits was 4.7 percent, compared to 3.2 percent for law enforcement personnel receiving enhanced retirement benefits and 3.5 percent for law enforcement-related personnel who received special pay and no enhanced retirement benefits. The average governmentwide attrition rate for all other federal personnel (those not employed in law enforcement or related occupations) was 5.4 percent, higher than all of the governmentwide averages for law enforcement and related personnel. Figure 1, below, illustrates these comparative trends. Further, our analysis revealed the following: Attrition rates vary by department and are influenced by type of occupation, challenging work conditions, and other factors. For example, the average attrition rates from fiscal years 2004 through 2008 for law enforcement-related personnel not receiving enhanced retirement benefits for DHS and DOJ were 4.1 percent and 3.5 percent, respectively, while the average attrition rates for law enforcement personnel receiving such benefits for DHS and DOJ were 4.8 percent and 2.2 percent, respectively. This difference in the average attrition rates for those law enforcement personnel receiving enhanced retirement benefits may be attributed to the types of occupations and their associated law enforcement-related functions. For example, DHS officials attributed some of the attrition within one of its component agencies, CBP, to the challenging work of some personnel, especially those stationed at remote U.S. border locations. 
In comparison, DOJ officials reported a high degree of employee satisfaction in the FBI Special Agent occupation, but some attrition challenges in relocating agents to high-cost urban areas or other undesirable areas. However, the FBI Police, consisting of approximately 250 officers, has experienced a relatively high level of attrition in comparison to the department. Specifically, the average attrition rate of the FBI Police in fiscal year 2008 was approximately 17 percent, which is more than 5 times higher than DOJ’s average. Meanwhile, the Department of the Treasury’s law enforcement and law enforcement-related personnel have lower average attrition rates than similar personnel groups within DHS and DOJ. For example, the Treasury average attrition rate from fiscal years 2004 through 2008 for law enforcement-related personnel not receiving enhanced retirement benefits was 2.0 percent, while the average attrition rate for law enforcement personnel receiving such benefits was 1.7 percent. For more information on the attrition rates by year and by department, see appendix V. Attrition was higher for those law enforcement and law enforcement-related personnel with fewer years of service. For example, governmentwide, the attrition rate for federal personnel with less than 5 years of service was 11.1 percent and the attrition rate for those with 5 or more years of service was 3.8 percent for fiscal year 2008. This trend remains consistent across law enforcement and related personnel and those departments employing the majority of these personnel. Law enforcement personnel receiving enhanced retirement benefits with less than 5 years of service had a 10.4 percent attrition rate, while those with 5 or more years of service had a 2.2 percent attrition rate for fiscal year 2008. Because the attrition rates are consistently higher for those with less than 5 years of service, the percentage of these personnel within a workforce may also affect the overall attrition rates. 
For example, in fiscal year 2008, personnel with less than 5 years of service accounted for approximately 34 percent and 17 percent of total personnel within DHS and DOJ, respectively. The DHS-wide rate of attrition may be higher than the DOJ-wide rate for all personnel because DHS has a higher percentage of personnel with less than 5 years of service than does DOJ, and those with less service have higher attrition. Both DHS and DOJ officials at the department level were aware that they have higher attrition among groups of employees with less than 5 years of service, but neither DHS nor DOJ officials indicated that this attrition was hindering their ability to meet their mission. The majority of law enforcement-related personnel moving to other agencies do not receive enhanced retirement benefits as a result of that move. For example, from fiscal years 2004 through 2007, approximately 6,500 law enforcement-related personnel moved between federal agencies. As shown in figure 2, 54 percent remained in federal law enforcement-related occupations that do not receive enhanced retirement benefits, 18 percent moved into law enforcement occupations that do provide such benefits, and 27 percent moved into non-law enforcement-related occupations. Overall, our analysis shows that attrition rates vary when analyzed by different categories and factors (governmentwide, departmentwide, and by years of service). However, our analysis could not link attrition levels with the presence or absence of enhanced retirement benefits. This is consistent with what we reported with respect to metropolitan D.C. federal police forces. Specifically, in June 2003 we reported that no clear pattern existed regarding turnover among D.C. police forces receiving federal law enforcement retirement benefits and those receiving traditional retirement benefits. 
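The kind of attrition-rate comparison described above can be sketched with a simple calculation. The headcounts below are hypothetical (chosen so the resulting rates match the governmentwide averages cited in this report), and the rate definition, separations divided by headcount, is an illustrative assumption rather than the report's actual CPDF methodology:

```python
# Sketch of an attrition-rate calculation like the governmentwide CPDF
# comparison described above. Headcounts are hypothetical (chosen so the
# rates match the averages cited); the rate definition used here,
# separations divided by headcount, is an illustrative assumption.
records = [
    # (category, headcount, separations during the year)
    ("leo_enhanced_benefits", 40_000, 1_280),
    ("le_related_no_benefits", 51_000, 2_397),
    ("all_other_federal", 1_500_000, 81_000),
]

def attrition_rates(rows):
    """Return attrition rate per category: separations / headcount."""
    return {category: separations / headcount
            for category, headcount, separations in rows}

for category, rate in attrition_rates(records).items():
    print(f"{category}: {rate:.1%}")  # 3.2%, 4.7%, 5.4%
```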
Also, analyzing the trends in data alone does not determine whether the attrition rates for law enforcement personnel are too high or problematic for the agencies or whether the rates are acceptable and manageable through the use of human capital tools. However, for current and future data analysis, OPM has recently developed and implemented a new tool, the Enterprise Human Resource Integration (EHRI) program, which involves a standardized and centralized collection of federal personnel data that can be queried and analyzed for specific personnel. Using the analytic tools accompanying EHRI, executive branch departments and agencies can analyze their own data on attrition or seek OPM’s assistance in providing such analysis. However, analyzing attrition data alone may not fully indicate why personnel are leaving a particular agency because, as we have previously reported, a variety of organizational, personal, and economic factors, in addition to compensation, influence separation decisions. Two additional rationales offered by organizations advocating for enhanced retirement benefits for law enforcement-related employees are that (1) employees are performing duties that are similar to those of law enforcement employee groups receiving enhanced retirement benefits or (2) employees are performing high-risk duties related to homeland security activities, such as guarding the northern and southern borders from those illegally trying to enter the U.S. For example, representatives from DOJ’s Executive Office of U.S. Attorneys stated that, in addition to addressing retention challenges, Assistant U.S. Attorneys should be afforded enhanced retirement benefits similar to those received by LEOs because of the risks they encounter working with defendants who are in pretrial status as well as convicted criminals. In addition, officials from the Executive Office of U.S. Attorneys noted that Assistant U.S. 
Attorneys work closely with law enforcement personnel who already receive such benefits. We did not address the validity of these rationales because we did not do a detailed analysis and comparison of the duties of the wide variety of groups of employees who perform law enforcement-related functions across various agencies. Overall, the short-term costs to a federal agency for providing enhanced retirement benefits for law enforcement personnel under FERS are higher than the costs of providing retirement benefits to regular federal employees. As illustrated in table 4 below, the mandatory agency contribution to the retirement fund for a LEO under FERS is 13.7 percentage points of basic pay higher than for a regular FERS employee, and the contribution for a LEO under CSRS is 0.5 percentage points higher than for a regular CSRS employee. According to DHS and DOJ officials, if enhanced retirement benefits are provided directly through legislation, the department or component agencies may not have the resources immediately available to cover their increased contributions and could likely seek additional funds for this purpose. For example, when CBP Officers were granted enhanced retirement benefits directly through legislation in CBP’s fiscal year 2008 appropriations act, congressional appropriators directed $50 million for fiscal year 2008 to help the agency implement the legislation. In addition, congressional appropriators directed an additional $200 million for fiscal year 2009 to cover the increased agency contributions to the retirement system. This results in approximately $10,000 more per position for fiscal year 2009. In addition to the agency contribution costs, CBP officials stated that they incurred other expenses during the conversion process, including staffing and training costs. In subsequent years, CBP officials stated that they plan to include these increased costs in their annual budget requests. 
Specifically, for the fiscal year 2010 budget request, CBP’s Office of Finance included $225 million for the additional retirement benefits for that year. In contrast, 2002 legislation was enacted into law providing that the Director of the FBI may establish a permanent police force, with enhanced retirement benefits, to be known as the FBI Police. However, according to the FBI, due, in part, to lack of funding to support this action, the FBI has not implemented these provisions and these benefits have not been provided. The majority of those personnel who may seek enhanced retirement benefits in the future are covered under FERS and, therefore, would have most of the costs of their enhanced benefits covered by increased agency contributions. However, there are potential, unfunded long-term costs to the pension system of providing such benefits to any additional law enforcement personnel who are covered under CSRS. At the end of fiscal year 2008, approximately 51,000 federal personnel performing some law enforcement-related activities were not receiving enhanced retirement benefits. During the 110th Congress, at least six pieces of legislation were introduced, but not enacted into law, which would have extended enhanced retirement benefits to approximately 25,000 additional employees. Specifically, these various pieces of legislation would have provided enhanced retirement benefits similar to those received by LEOs to thousands of federal police not currently receiving LEO or similarly enhanced retirement benefits, as well as Assistant U.S. Attorneys and others. The cost for providing enhanced retirement benefits to the groups covered by CSRS and FERS under these pieces of legislation would have been approximately $250 million for 1 fiscal year. The long-term costs to the federal government for providing FERS employee pensions would be accounted for in higher agency contributions by the employing agency (and an additional 0.5 percent from individuals). 
However, this is not the case for CSRS employees and the long-term costs to the federal government of providing enhanced LEO or similar retirement benefits for CSRS staff are not acknowledged directly by the Congressional Budget Office process, or the requesting groups that sought additional benefits. According to OPM’s actuaries, they do, upon request, provide estimates of the effects of retirement coverage changes on the Civil Service Retirement and Disability Fund. Specifically, as table 4 illustrates, agencies pay an additional 0.5 percent contribution for CSRS-covered LEO staff over regular federal employees (7.5 percent versus 7 percent) and LEO staff make a similarly increased contribution as well. However, the cost to the government of CSRS retirement benefits is greater than those combined agency and staff contributions. Each CSRS position represents an unfunded liability to Treasury and CSRS LEOs represent a greater unfunded liability than regular employees because the contributions do not meet the costs associated with benefits. In January 2007 and again in December 2007 we reported on the importance of making policy decisions that take into consideration the need for fiscal stewardship. Specifically, at that time we reported on the challenge facing Congress in making fiscally responsible policy decisions given our nation’s growing fiscal imbalance. Although there is no question that law enforcement and related personnel play an invaluable role in securing this nation, granting enhanced retirement benefits to additional employee groups may or may not be the most cost-efficient solution for retaining this population. The federal government has many human capital tools that can be used to address attrition, which we discuss later in this report. 
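The per-position arithmetic behind the short-term FERS cost figures can be sketched from the contribution-rate difference cited from table 4. The average basic pay below is a hypothetical figure, chosen because at roughly $73,000 the 13.7-percentage-point difference yields approximately the $10,000 per position cited earlier; the 20,000-position line is likewise purely illustrative:

```python
# Approximate extra annual agency retirement contribution when a position
# converts to FERS LEO coverage. The 13.7-percentage-point difference is
# from this report; the basic pay and position count are hypothetical.
FERS_LEO_EXTRA_CONTRIBUTION = 0.137  # share of basic pay

def extra_annual_cost(basic_pay, positions=1):
    """Additional agency contribution, in dollars per year."""
    return FERS_LEO_EXTRA_CONTRIBUTION * basic_pay * positions

basic_pay = 73_000  # hypothetical average basic pay

print(f"Per position:         ${extra_annual_cost(basic_pay):,.0f}")
print(f"For 20,000 positions: ${extra_annual_cost(basic_pay, 20_000):,.0f}")
```

At this assumed pay level the per-position figure comes to about $10,000 per year, the same order of magnitude as the appropriations directed for the CBP Officer conversion.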
While providing enhanced retirement benefits to additional employee groups directly through legislation has been used as an alternative to OPM’s administrative process, doing so has also resulted in some perceived inequities across certain federal occupations. For example, under OPM implementing regulations, federal police officers are generally excluded from the regulatory definition of a LEO to receive enhanced retirement benefits. However, four federal agency police departments were provided with enhanced retirement benefits directly through legislation (Park Police, U.S. Secret Service Uniformed Division, Capitol Police, and Supreme Court Police) while others do not receive such benefits (e.g. Veterans Affairs Police). DOJ officials at the department level expressed concern about the potential disparity in benefits provided to their personnel when other employee groups are provided with enhanced retirement benefits directly through legislation. In addition, some Detection Enforcement Officers within CBP Air and Marine Air Interdiction who do not receive enhanced retirement benefits told us that they are not being treated fairly in relationship to their co-workers who seemingly perform similar mission critical duties and are exposed to similar risks, but who receive enhanced retirement benefits. Providing additional employee groups with enhanced retirement benefits could also affect an agency’s strategic workforce planning. In the past, we have called on agencies to develop a long-term strategic workforce plan that considers the unique number, type, and competency levels of employees needed for the agency to meet its mission in the long run and the strategies it will use to recruit, hire, train, and retain these employees. As part of this planning process, agencies are to determine tools they will, and can afford to, use to achieve their plan. 
Strategic workforce planning also focuses on developing long-term strategies for acquiring, developing, and retaining an organization’s total workforce to meet the needs of the future. In 2002, we reported that each agency needs to ensure that its human capital program capitalizes on its workforce’s strengths and addresses related challenges in a manner that is clearly linked to achieving the agency’s mission and goals. Thus, it is through its strategic workforce planning that an agency would determine the number, types, and duties of law enforcement personnel needed to perform its mission; whether it has any challenges recruiting or retaining personnel for these positions and, if so, what are the most cost-efficient tools it can use to address these challenges, such as retention incentives; and how to manage all of this within the agency’s available budget. When unions or employee groups seek legislation for enhanced retirement benefits outside of an agency’s strategic workforce planning process, it could, according to DHS and DOJ human resource officials, affect the workforce strategies and resources an agency has devised. Individuals who become LEOs can also qualify for certain special pay provisions, availability pay, or administratively uncontrollable overtime pay. Such provisions may affect other payroll and matching benefit costs not accounted for in agencies’ workforce plans. According to DHS and DOJ human resource officials, whose departments represent the majority of law enforcement and related personnel, their departments’ strategic workforce planning may be affected if additional employee groups were provided with enhanced retirement benefits directly through legislation, especially if the departments were not allocated additional funds. 
Federal agencies, including those that employ law enforcement and law enforcement-related personnel, such as DOJ, DHS, and Treasury, can use a variety of human capital tools, such as student loan reimbursements and monetary retention incentives, to retain such personnel. However, DOJ and DHS officials at the department level stated that these tools are currently used to a limited extent due to a lack of sustained and available funding. Because such tools were specifically designed to address retention issues, they could provide a cost-efficient alternative to granting enhanced retirement benefits for those employee groups that seek such benefits directly through legislation and cite retention as the primary rationale. In June 2004, we defined human capital tools as the policies and practices that an agency can implement in managing its workforce to accomplish its mission. These tools can relate to recruitment, retention, compensation, position classification, incentive awards, training, performance management, and work-life policies, among others. For example, a federal agency may award a recruitment incentive to attract new employees or provide a relocation incentive to a current employee moving to a different geographic location to accept a position that the employing agency has deemed hard to fill. In addition, an agency may pay a retention incentive to keep a current employee if the agency determines that the employee has unusually high or unique qualifications or that the agency has a special need for the employee’s services, making retention of the employee essential, and if the employee would be likely to leave the federal service in the absence of the incentive. Table 5 below provides details on key human capital tools officials from DHS, DOJ, and the IRS told us they use to retain law enforcement and law enforcement-related personnel. 
Officials in some DHS and DOJ component agencies as well as the IRS that employ LEOs report that they use human capital tools to retain personnel effectively. For example, according to officials from DOJ’s Bureau of Alcohol, Tobacco, Firearms, and Explosives (ATF), they use a variety of human capital tools, such as a Foreign Language Award Program and Health Improvement Program, with some success to retain their agents already receiving enhanced retirement benefits. Officials from the IRS’s Criminal Investigation Division, whose employees receive LEO retirement benefits, stated that they use retention incentives to retain corporate knowledge and expertise with some success. In addition, within DHS, the U.S. Secret Service provides retention and foreign language bonuses to retain LEOs. Officials from some DHS and DOJ component agencies that employ law enforcement-related personnel also stated that they used human capital tools to retain such staff. However, these officials stated that they are continuing to experience some retention challenges for certain types of these personnel, even though they are utilizing human capital tools, to varying degrees, in an effort to retain them. Specifically, according to FBI Police officials, they are facing difficulties retaining their police force of 241 officers (as of August 2008), despite their use of human capital tools, such as student loan reimbursements. FBI Police officials noted that their agency loses a number of police officers to other positions within the FBI, particularly to the Special Agent position, which provides enhanced retirement benefits. FBI Police officials also said that it takes approximately 7 to 9 months to bring on a new police officer due to the detailed testing and background check required. Therefore, even a small amount of attrition would have an impact on the FBI Police’s ability to meet its mission. Similarly, according to the Director of DOJ’s Executive Office of U.S. 
Attorneys (EOUSA), the office is using retention incentives, student loan reimbursements, and monetary rewards of up to $7,500, but acknowledged that it is a challenge to retain its approximately 5,300 Assistant U.S. Attorneys, especially mid-career-level attorneys who could earn more money in the private sector, particularly in some major metropolitan areas. Officials from the FBI Police and EOUSA stated that they believe providing such personnel with enhanced retirement benefits may be an option for addressing their current retention challenges. Although some DOJ component agency officials cited challenges retaining their staff, DOJ JMD human resource officials stated that they do not believe they face challenges retaining law enforcement-related personnel and would not support these personnel seeking enhanced retirement benefits directly through legislation. DOJ JMD human resource officials highlighted the fact that the department’s average overall attrition rate for law enforcement-related personnel from fiscal years 2004 through 2008 (3.5 percent) is lower than the average for the federal government as a whole (4.7 percent). In addition, DHS human resource officials acknowledged the department’s difficulty in retaining some staff, but noted that this may be because the department is relatively new and, therefore, some higher-than-average attrition is to be expected. According to OPM officials, enhanced retirement benefits are not intended to be a tool for retaining personnel and, thus, may not be appropriate in addressing the cited and related retention challenges. A possible option for addressing the retention challenges cited by some DOJ component agencies and DHS human resource officials is the use of human capital tools for these groups. 
According to our analysis of OPM’s annual reports to Congress on agencies’ use of retention incentives for calendar years 2006 and 2007, DHS and DOJ use human capital tools to retain their personnel to a lesser extent than other federal departments. Specifically, for calendar years 2006 and 2007, DHS awarded an average retention incentive of $2,241 to approximately 0.6 percent of all DHS employees (law enforcement and related personnel as well as other personnel), while DOJ awarded an average of $3,279 to approximately 0.9 percent of all DOJ employees and the Department of the Treasury awarded an average of $13,467 to approximately 0.1 percent of all its employees. In comparison, OPM reported that other reporting departments awarded an average retention incentive of $5,629 to approximately 3.6 percent of their employees during the same time frame. For additional information on the use of human capital tools during calendar years 2006 and 2007, as reported to OPM, see appendix VI. We have also previously reported on the effectiveness of providing cash incentives, such as retention incentives and special pay, to retain federal personnel. Specifically, in July 2005, we reported that some deferred benefits, such as retirement, are not valued as highly as cash compensation (basic pay or special pay and monetary retention incentives) and that cash compensation is generally accepted as a far more efficient tool than deferred benefits for retaining certain personnel. Cash compensation has been used for various groups of law enforcement-related personnel. For example, the Customs Officer Pay Reform Act (COPRA) of 1993 and its implementing regulations provided revised and enhanced overtime compensation and premium pay provisions to a number of customs inspectors and supervisors. 
In addition, our analysis of fiscal years 2004 through 2008 CPDF data indicated that the attrition rates for law enforcement-related personnel receiving special pay were similar to those for personnel receiving enhanced retirement benefits and were lower than those for law enforcement-related personnel receiving neither enhanced pay nor enhanced retirement benefits. Specifically, the average governmentwide attrition rate for law enforcement-related personnel not receiving enhanced retirement benefits was 4.7 percent, compared to 3.2 percent for law enforcement personnel receiving enhanced retirement benefits and 3.5 percent for law enforcement-related personnel who received special pay and no enhanced retirement benefits. The use of special pay could therefore also be an option for addressing some of the retention challenges reported by some agencies employing law enforcement-related personnel not receiving enhanced retirement benefits. If sustainable funding were available, using human capital tools in a targeted manner for law enforcement-related personnel could be a cost-efficient option for agencies, considering the tools’ relatively low cost to a federal agency when compared to enhanced retirement benefits. For example, as noted above, our analysis of CPDF data found that all federal employees, including law enforcement and law enforcement-related personnel, with less than 5 years of service had higher attrition rates than those with 5 or more years of service. Specifically, law enforcement personnel receiving enhanced retirement benefits with less than 5 years of service had a 10.4 percent attrition rate while those with 5 or more years of service had a 2.2 percent attrition rate for fiscal year 2008. For law enforcement-related personnel with less than 5 years of service, targeted use of human capital tools or the use of special pay could be a more meaningful option for addressing their attrition. 
According to DOJ officials, they do not currently have plans to target their use of human capital tools toward any specific personnel because they did not feel their department faces retention challenges. These officials also noted that these tools are available for all of their employees. DHS human resource officials acknowledged that the department faces challenges in retaining its employees, including those in law enforcement-related positions with less than 5 years of service and noted that they have a number of efforts under way to address attrition for this specific population. For example, they have implemented a program that seeks to retain employees by allowing them to explore different career paths within DHS rather than leaving the department altogether. While the costs of providing retention incentives and special pay are not insignificant, they pose less of a potential financial liability to the federal government than providing enhanced retirement benefits. For example, according to OPM, the average annual retention incentive provided to federal employees in calendar year 2007 was $5,573. We estimated that the average cost to CBP, for example, for providing increased agency contributions to fund enhanced retirement benefits to the approximately 20,000 CBP Officers, was about $10,000 per position for fiscal year 2009— and these costs are expected to continue for the rest of each individual’s career as a CBP Officer. Moreover, when we spoke to human resource officials from DHS, they stated that providing special pay, which is higher than that of a regular federal employee, may be a less costly option for addressing retention challenges than providing enhanced retirement benefits to additional employee groups. 
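To make the scale of the figures above concrete, the comparison can be sketched with back-of-the-envelope arithmetic. The $5,573 incentive, $10,000 contribution, and 20,000-officer figures come from the report; the 20-year remaining-career assumption is hypothetical, for illustration only.

```python
# Rough cost comparison, using figures cited in the report.
avg_retention_incentive = 5_573    # OPM-reported average annual retention incentive, CY2007
agency_contribution = 10_000       # estimated added agency contribution per CBP Officer, FY2009
cbp_officers = 20_000              # approximate number of CBP Officers

# Recurring annual agency cost of the enhanced-benefit contributions.
annual_enhanced_cost = agency_contribution * cbp_officers
print(f"Annual agency cost for enhanced benefits: ${annual_enhanced_cost:,}")

# Unlike a one-time or targeted incentive, the contribution recurs for each
# remaining year of an officer's career (20 years is an assumed figure).
assumed_remaining_years = 20
per_officer_career_cost = agency_contribution * assumed_remaining_years
print(f"Per-officer cost over an assumed {assumed_remaining_years}-year career: "
      f"${per_officer_career_cost:,}, versus a ${avg_retention_incentive:,} "
      f"average annual retention incentive")
```

The sketch simply multiplies the cited per-position figures; it does not model attrition, pay growth, or the actuarial cost to the retirement fund, which OPM's actuaries would estimate.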
Currently, there are approximately 51,000 federal law enforcement-related personnel who have not received enhanced retirement benefits because they have not been determined to meet the statutory and regulatory definitions relating to a LEO or have not been provided such benefits directly through legislation. Employees in several occupations have expressed interest in obtaining LEO or similar enhanced retirement benefits, and they will most likely seek them directly through legislation. Law enforcement-related personnel who have previously obtained or are seeking enhanced retirement benefits directly through legislation have used various rationales to justify their requests, including high attrition rates, equity considerations, and the assertion that they are now performing more homeland-security-related functions than they had previously. However, in those instances where employee groups are seeking enhanced retirement benefits directly through separate legislation, data are not always provided to support the various rationales. Our analysis of available attrition data showed that while law enforcement-related personnel without enhanced retirement benefits generally have higher attrition rates than those with enhanced retirement benefits or special pay, their attrition rates are actually lower than the overall average for other federal employees. Therefore, it may be useful to evaluate such data when determining whether to provide expensive, enhanced retirement benefits in response to assertions of retention challenges. In addition, our analysis indicates that considering the costs to agencies, and to the pension system as a whole, of providing these enhanced retirement benefits is important because providing them is a long-term commitment and may affect agency strategic workforce planning options. 
OPM’s actuaries can provide estimates of the long-term costs to the government for the increased pensions, but if additional employee groups are granted enhanced retirement benefits directly through legislation, agencies may need short-term supplemental funding as well as longer-term additional funding to cover increased agency contributions to the retirement fund. Furthermore, assessing the impact of potentially unfunded liabilities to the Treasury and the retirement system related to providing enhanced retirement benefits to employees still under CSRS may also be important when determining whether to provide enhanced retirement benefits to additional employee groups. Soliciting agencies’ perspectives on their personnel needs, challenges, and available budgets would help to inform these benefit decisions as well, especially given that legislative requests for such benefits typically are not brought by agencies, but by unions or employee group representatives. Agencies’ targeted use of human capital tools (such as cash bonuses or special pay) is another potentially more cost-efficient means of addressing the human capital issues cited for law enforcement personnel than awarding enhanced retirement benefits, which may also result in unintended consequences such as perceptions of inequity. Finally, information on attrition, costs, agencies’ strategic workforce plans and budgets, and other issues can help to inform whether requests for enhanced retirement benefits directly through legislation are justifiable, affordable, and cost-efficient. These are some of the challenges facing both agencies and Congress in making fiscally responsible policy decisions, especially given our nation’s growing fiscal imbalance. We requested comments on a draft of this report from DHS, DOJ, IRS, and OPM. On July 24, 2009, we received written comments from OPM on the draft report, which are reproduced in full in appendix VII. OPM generally concurred with the report. 
DHS and DOJ did not provide written comments, but in e-mails received July 27, 2009 and July 28, 2009, DOJ and DHS liaisons stated that the departments generally agreed with the report. OPM, DHS, and DOJ also provided technical comments, which we have incorporated where appropriate. In an e-mail received July 24, 2009, the IRS liaison stated that IRS had no comments on the draft report. We will provide copies of this report to the Attorney General, the Secretary of the Department of Homeland Security, the Secretary of the Treasury, the Director of OPM, selected congressional committees, and other interested parties. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact Eileen R. Larence at (202) 512-6510 if you or your staff has any questions concerning this report. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VIII. This report addresses the following questions: (1) What processes are being used to grant enhanced retirement benefits to federal law enforcement personnel? (2) What are the rationales and potential costs for extending such benefits to additional occupations or employee groups? (3) To what extent have federal agencies used human capital tools, such as retention incentives, to retain both law enforcement and other related personnel? As agreed upon with your offices, our review focused on the Department of Homeland Security (DHS), Department of Justice (DOJ), and Department of the Treasury because these federal entities employed approximately 84 percent of all law enforcement and law enforcement- related personnel in fiscal year 2008. 
For the purposes of this report, we are defining the term law enforcement personnel in a manner that is broader in scope than the statutory and regulatory law enforcement officer (LEO) definitions, and we are not including other specialized, non-law-enforcement annuity recipients, such as federal air traffic controllers and firefighters. To identify the processes that have been used to grant enhanced retirement benefits to federal law enforcement personnel, we reviewed relevant laws and regulations, as well as legislation introduced in the 110th Congress that would have provided such benefits to additional employee groups. We also reviewed reports by the Office of Personnel Management (OPM), the Congressional Budget Office, and the Congressional Research Service that describe the current processes by which such benefits are provided. Also, we obtained information on the specific benefits provided to law enforcement personnel from DHS, DOJ, and the Internal Revenue Service (IRS) within the Department of the Treasury, as well as OPM and select employee organizations and unions. During this review, we also obtained information from OPM on its role and responsibilities related to providing enhanced retirement benefits to personnel who perform law enforcement-related duties. We also met with representatives from six unions and other employee organizations who have advocated for enhanced retirement benefits to discuss the current process for obtaining enhanced benefits. In addition, we met with staff from the Merit Systems Protection Board (MSPB), which adjudicates federal employees’ appeals of personnel actions, such as appeals from employees who believe they are entitled to LEO coverage, to discuss their views and opinions on the current criteria used to determine which federal personnel meet the statutory and regulatory LEO definitions. 
We did not, however, review the appropriateness of the statutory and regulatory definitions relating to LEOs, nor did we determine criteria for evaluating the definitions or the processes used by various agencies in implementing the definitions. To identify the rationales and potential costs for extending enhanced retirement benefits to additional occupations or employee groups, we met with representatives from six unions and employee organizations who have advocated for enhanced retirement benefits to discuss the rationales that law enforcement-related personnel are using to seek enhanced benefits similar to those received by LEOs. We also discussed with DHS, DOJ, IRS, and OPM officials the potential effects, such as costs, that may be beneficial to consider when providing enhanced retirement benefits to additional employee groups. Further, we reviewed previous GAO reports that discuss the importance of making policy decisions that take into consideration the need for fiscal stewardship. We also obtained information on the extent to which granting such benefits may affect other employees and agencies’ workforce planning. During this review, we interviewed officials from DHS, DOJ, IRS, and OPM on the potential workforce planning effects of providing enhanced retirement benefits directly through legislation to those additional employee groups seeking such benefits. Because one of the primary rationales provided was that law enforcement-related personnel not receiving enhanced retirement benefits exhibit high attrition, including moving to occupations that provide such benefits, we analyzed data from OPM’s Central Personnel Data File (CPDF) for fiscal years 2004 through 2008 to calculate attrition rates and to determine the extent to which these rationales can be substantiated with existing data. Our analysis focused on DHS, DOJ, and Treasury; however, we also analyzed these data on a governmentwide basis. 
Regarding CPDF, we have previously reported that governmentwide data from CPDF for most of the key variables used in this study (agency/sub-element, position occupied, retirement plan, work schedule, and occupation) were at least 99 percent accurate and thus concluded that the data were sufficiently reliable for the purposes of this study. Our analysis of CPDF data included personnel that were:
- identified as permanent employees of all work schedules, and
- identified as having separated from their agency of employment through resignation or transfer from one agency to another agency.
For the purpose of our analysis, we divided personnel into four different groups by LEO status. The first group, referred to as law enforcement personnel, included personnel that were:
- identified as having LEO enhanced retirement benefits, or
- identified as receiving enhanced retirement benefits similar to LEOs through separate legislation.
The second group, referred to as law enforcement-related personnel, included personnel that were:
- identified as not having been found to meet the LEO definition by their employing agency and OPM, and not having been provided with similar enhanced retirement benefits;
- identified as potentially performing certain law enforcement-related duties, including but not limited to carrying a weapon, having arrest authority, or participating in some investigative capacity (occupations frequently thought of as law enforcement-related may include personnel in the following occupational series, if not covered by LEO provisions: 0006, 0007, 0025, 0080, 0082, 0083, 0084, 1801, 1802, 1810, 1811, 1812, 1816, 1854, 1881, 1884, 1890, 1895, 1896, and 1899); or
- identified as having previously expressed interest in receiving such benefits through legislation. 
The following occupations were added to the law enforcement-related group because they have previously lobbied for passage of a bill to give them retirement benefits similar to LEO retirement: CBP Agricultural Inspectors (0401), Assistant U.S. Attorneys (0905), and IRS Revenue Officers (1169). The third group, referred to as law enforcement-related personnel receiving special pay, also included those who have not been found to meet the LEO definition and are performing certain law enforcement-related duties but receive special pay. The fourth group, referred to as other federal personnel, includes those who do not function in a law enforcement capacity and do not perform law enforcement-related duties. To calculate the rate of attrition for each fiscal year, we divided the total number of resignations and transfers from one agency to another by the average number of permanent employees. The average number of employees for a given fiscal year was calculated using the number of employees at the beginning and the end of that fiscal year. We calculated the rates of attrition for each of the previously described personnel groups on a governmentwide basis as well as on a departmentwide basis for DHS, DOJ, and the Department of the Treasury. We focused our analysis on these departments because they employ 84 percent of federal law enforcement and law enforcement-related personnel. To calculate the average attrition rates from fiscal year 2004 through 2008, we summed each group’s attrition rate for each fiscal year multiplied by the group’s average population for that year, and divided the result by the group’s total population over the 5-year time frame. We calculated the average rates of attrition for each of the previously described personnel groups on a governmentwide basis as well as on a departmentwide basis for DHS, DOJ, and the Department of the Treasury. 
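The attrition calculations described above can be sketched as follows. The figures used are hypothetical and serve only to illustrate the arithmetic, not to reproduce the CPDF results.

```python
# Sketch of the attrition-rate methodology: annual rate = separations
# divided by the average on-board count, and the multiyear average is
# weighted by each year's average population.

def annual_attrition_rate(separations, start_count, end_count):
    """Attrition = (resignations + transfers) / average number of employees."""
    average_employees = (start_count + end_count) / 2
    return separations / average_employees

def multiyear_average_rate(yearly):
    """Population-weighted average attrition rate across fiscal years.

    `yearly` is a list of (separations, start_count, end_count) tuples,
    one per fiscal year.
    """
    weighted = sum(annual_attrition_rate(s, b, e) * ((b + e) / 2)
                   for s, b, e in yearly)
    total_population = sum((b + e) / 2 for _, b, e in yearly)
    return weighted / total_population

# Hypothetical group: three fiscal years of (separations, start, end) counts.
data = [(350, 10_000, 10_200), (380, 10_200, 10_400), (400, 10_400, 10_600)]
print(f"Average attrition rate: {multiyear_average_rate(data):.1%}")
```

Because each year's rate is weighted by that year's average population, the multiyear figure reduces to total separations divided by the summed average populations, so larger groups and years influence the average proportionally.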
Further, we analyzed CPDF data to determine whether law enforcement-related personnel not receiving LEO or similarly enhanced retirement benefits were moving to other federal positions that offered these benefits (because such moves were another rationale cited by the unions and employee groups seeking enhanced retirement benefits). We totaled the number of employees who moved from a law enforcement-related occupation not receiving enhanced retirement benefits on a governmentwide basis from fiscal year 2004 through fiscal year 2007. Then, we calculated the percentage of those employees who moved to a law enforcement occupation receiving such benefits under the same parameters. To determine the extent to which federal agencies have used human capital tools to retain both law enforcement and law enforcement-related personnel, we reviewed and analyzed information reported to OPM on the extent to which DHS, DOJ, and Treasury were using retention incentives. We also obtained information on the use of human capital tools to retain law enforcement and law enforcement-related personnel from DHS, DOJ, and IRS human capital officials, various component agency officials, and union and employee representatives. In addition, we reviewed previous GAO reports that discuss the use and potential effectiveness of human capital tools to retain federal employees. We conducted this performance audit from January 2008 through July 2009, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The current definition of “law enforcement officer” can be traced back to as early as 1948. 
In 1948, legislation was enacted into law that, in general, provided enhanced retirement benefits to certain federal officers whose duties were primarily the investigation, apprehension, or detention of persons suspected or convicted of offenses against the criminal laws of the United States. This legislation expanded the coverage of enhanced retirement benefits governmentwide beyond the limited scope of legislation enacted 1 year earlier in 1947 covering only FBI agents. In comments on the then pending 1948 legislation, the Civil Service Commission noted that it was “not in favor of special legislation for individual groups of employees, but inasmuch as Congress has approved special legislation for the investigatory personnel of the Federal Bureau of Investigation it would not oppose benefits for similar groups of employees.” Committee report language noted that the “committee believes it is only fair to grant such retirement benefits as are provided for under the bill to law-enforcement agents in all parts of the Government at an earlier age, because it is physically impossible to carry on the necessary strenuous activities after reaching 50 years of age.” Currently, law enforcement personnel performing certain specified types of duties can fall within the Civil Service Retirement System (CSRS) and Federal Employee Retirement System (FERS) statutory and regulatory retirement-related definitions of the term “law enforcement officer” (LEO) and thus be eligible for enhanced retirement benefits under the respective retirement plans. LEO retirement coverage does not depend on the classification of a position within an occupational series (e.g., Police Officer GS-0083) or the law enforcement mission of a particular agency. 
For CSRS purposes, a LEO is defined in statute as an employee whose primary duties are the “investigation, apprehension, or detention of individuals suspected or convicted of offenses against the criminal laws of the United States, including an employee engaged in this activity who is transferred to a supervisory or administrative position.” OPM’s implementing regulations provide additional definitions. The term “primary duties”, for example, is defined, in part, as “those duties of a position that – (1) are paramount in influence or weight; that is, constitute the basic reasons for the existence of the position; (2) occupy a substantial portion of the individual’s working time over a typical work cycle; and (3) are assigned on a regular and recurring basis.” The implementing regulations further provide, for example, that the definition of a LEO “does not include an employee whose primary duties involve maintaining law and order, protecting life and property, guarding against or inspecting for violations of law, or investigating persons other than persons who are suspected or convicted of offenses against the criminal laws of the United States.” The main statutory provision of the FERS LEO definition generally parallels the CSRS LEO definition. Like the CSRS provision, the statutory FERS LEO definition includes an employee whose primary duties are the “investigation, apprehension, or detention of individuals suspected or convicted of offenses against the criminal laws of the United States.” The statutory FERS LEO definition additionally includes an employee whose primary duties are the protection of officials of the United States against threats to personal safety. Like the CSRS definition, the FERS definition also includes employees primarily performing such duties who transfer to supervisory and administrative positions. 
However, the statutory FERS definition of a “law enforcement officer” is more restrictive than the CSRS LEO definition in that it expressly includes a rigorous duty standard. With respect to those employees described above, the statutory FERS LEO definition additionally requires, in general, that the duties of such positions be “sufficiently rigorous that employment opportunities should be limited to young and physically vigorous individuals.” As with CSRS, OPM implementing regulations provide additional FERS LEO-related definitions. The term “rigorous position”, for example, is defined under OPM FERS regulations to mean, in pertinent part, “a position the duties of which are so rigorous that employment opportunities should, as soon as reasonably possible, be limited (through establishment of a maximum entry age and physical qualifications) to young and physically vigorous individuals whose primary duties are investigating, apprehending, or detaining individuals suspected or convicted of offenses against the criminal laws of the United States or protecting the personal safety of United States officials.” The statutory FERS definition of “law enforcement officer” also specifically includes certain employees of the U.S. Park Police and members of the U.S. Secret Service Uniformed Division. OPM implementing regulations provide that the term “rigorous position” is deemed to include such positions in the Park Police and Secret Service Uniformed Division. 
The CSRS and FERS implementing regulations relating to the definition of a LEO generally exclude an “employee whose primary duties involve maintaining law and order, protecting life and property, guarding against or inspecting for violations of law, or investigating persons other than persons who are suspected or convicted of offenses against the criminal laws of the United States.” In this regard, groups that are generally excluded from the CSRS and FERS definitions of “law enforcement officer” are police officers, guards, and inspectors. As discussed above, federal uniformed police typically do not have LEO retirement coverage because they are generally excluded from the CSRS and FERS definitions relating to a “law enforcement officer.” Legislation has been enacted into law, however, that extends enhanced retirement benefits to certain federal uniformed police groups within the broader law enforcement community. For example, Congress has extended enhanced retirement benefits to certain officers of the U.S. Secret Service Uniformed Division, U.S. Park Police, U.S. Capitol Police, and U.S. Supreme Court Police. The officers of the U.S. Secret Service Uniformed Division and the U.S. Park Police were added in 1988 when legislation amended the statutory FERS LEO definition. Committee report language accompanying the 1988 legislation provided that “although these individuals are commonly thought to be law enforcement officers, the Office of Personnel Management says they do not meet the FERS definition of ‘law enforcement officer’ under section 8401(17) and thus do not qualify for FERS law enforcement officer benefits.” In comparison, rather than amending the statutory LEO definition, legislation in 1990 and 2000 provided the U.S. Capitol Police and U.S. Supreme Court Police, respectively, with enhanced retirement benefits similar to those received by LEOs. Both MSPB and the U.S. 
Court of Appeals for the Federal Circuit, for example, have issued decisions that affect, on an individual basis, which employees receive LEO retirement coverage. An individual employee asserting that his or her position duties are primarily the investigation, apprehension, or detention of individuals suspected or convicted of offenses against the criminal laws of the United States may, for example, appeal an agency’s final decision to the MSPB. An employee may also appeal a final decision of the MSPB to the U.S. Court of Appeals for the Federal Circuit. As discussed earlier, in general, in order to qualify for LEO coverage, an employee must show that the duties of his or her position are primarily the investigation, apprehension, or detention of individuals suspected or convicted of offenses against the criminal laws of the United States. FERS has the additional statutory requirement that LEO positions are to be those that are sufficiently rigorous that employment opportunities should be limited to young and physically vigorous individuals, as determined by the Director of OPM considering the recommendations of the employing agency. OPM regulations set out a three-prong test to determine whether duties are considered primary duties of a particular position: (1) whether the duties are paramount in influence or weight, that is, constitute the basic reasons for the existence of the position; (2) whether the duties occupy a substantial portion of the individual’s working time over a typical work cycle; and (3) whether the duties are assigned on a regular and recurring basis. Under OPM regulations, in general, if an employee spends at least 50 percent of his or her time performing a duty or group of duties, they are his or her primary duties. In addition, duties that are of an emergency, incidental, or temporary nature cannot be considered “primary” even if they meet the substantial portion of time criterion, according to the OPM regulations.
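The three-prong test and the 50-percent rule described above can be sketched as a simple decision rule. This is an illustrative reading of the regulations, not OPM's actual adjudication logic; the function and parameter names are hypothetical.

```python
def is_primary_duty(share_of_time: float, regular_and_recurring: bool,
                    basic_reason_for_position: bool,
                    emergency_incidental_or_temporary: bool) -> bool:
    """Illustrative reading of the OPM primary-duty criteria: duties of an
    emergency, incidental, or temporary nature cannot be primary, even if
    they occupy at least 50 percent of the employee's working time."""
    if emergency_incidental_or_temporary:
        return False
    return (share_of_time >= 0.50           # substantial portion of work time
            and regular_and_recurring       # assigned on a regular, recurring basis
            and basic_reason_for_position)  # the basic reason the position exists

# A duty group taking 60 percent of the work cycle, regularly assigned,
# and the basic reason the position exists:
print(is_primary_duty(0.60, True, True, False))  # True
# The same duty group performed only on an emergency basis:
print(is_primary_duty(0.60, True, True, True))   # False
```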
In determining whether an employee meets the LEO definitional criteria for coverage, under pertinent case law, MSPB must examine all relevant evidence, including the position description. The U.S. Court of Appeals for the Federal Circuit clarified its approach to law enforcement officer cases in a 2001 decision, Watson v. Department of the Navy, 262 F.3d 1292, 1299 (Fed. Cir. 2001), noting a legislative mandate for a position-oriented approach in cases of requests for law enforcement officer credit: the “basic reasons” for the existence of the position must be the performance of law enforcement officer duties. Under this approach, if the position was not created for the purpose of investigation, apprehension, or detention, then the incumbent of the position would not be entitled to law enforcement officer credit. Cole, 2007 MSPB LEXIS 4819 (2007). MSPB decisions have also weighed such factors as (1) whether the duties of the position involve investigating, apprehending, or detaining criminals, (2) whether there is an early mandatory retirement age, (3) whether there is a youthful maximum entry age for the position, (4) whether the job is physically demanding so as to require a youthful workforce, and (5) whether the officer is exposed to hazard or danger. In addition, determination of eligibility for LEO retirement coverage is strictly construed because the program is “more costly to the government than more traditional retirement plans and often results in the retirement of important people at a time when they would otherwise have continued to work for a number of years.” A 2005 decision of the U.S. Court of Appeals for the Federal Circuit, Crowley v. United States, 398 F.3d 1329, 1338 (Fed. Cir. 2005), noted that two factors predominate over all others in determining primary duties. The Crowley court noted that the most important consideration in its position-oriented approach to LEO determination is the physical vigorousness required by the position in question, followed by the hazardousness of the position.
The Crowley court stated that, while hazardousness was also important, it was secondary to physical vigorousness because the legislative history of the LEO statute emphasized physical vigor to a greater extent. The Crowley court stated that physical vigorousness is the “sine qua non” of LEO status determinations and that absent a showing of a position’s requirement of physical vigorousness, an employee cannot successfully show LEO status. The Crowley court noted that the relevant considerations are whether or not the position contains (in order of importance) (1) strenuous physical fitness requirements, (2) age requirements (such as a mandatory retirement age or maximum entry age), or (3) a requirement that an employee be on call 24 hours a day. The Crowley court explained that these sub-factors should be evaluated by applying the facts of a given case to the law to determine which sub-factors, if any, have been satisfied. If the position in question is found to be vigorous, then the second major factor necessary to establish LEO status—hazardousness—must be considered. An individual working in a position designated as a “law enforcement officer” position is typically covered under special rules for either CSRS or FERS. Under CSRS, LEOs pay a higher retirement contribution rate (7.5 percent of pay) for more generous retirement benefits and have the ability to retire at age 50 after 20 years of law enforcement officer-covered or other eligible service. The benefits are to be computed based on 2.5 percent of the high three average salary for each of the first 20 years of covered service, and 2 percent per year of service (covered or not) thereafter. An individual is subject to mandatory retirement upon reaching the age of 57 or the completion of 20 years of covered service, if then over that age. Under FERS, there are also special benefits, but the rules are different. Like CSRS, the individual’s contribution rate is one-half percent more than for regular benefits.
FERS also has different rules for when an individual may retire: at age 50 with 20 years of covered service (like CSRS), or with 25 years of covered service without a minimum age. Under FERS, the special benefit formula is 1.7 percent of the high three average salary for each of the first 20 covered years of FERS service, and 1 percent of pay per year of service thereafter. The FERS Cost of Living Adjustment is to begin at retirement instead of age 62, the age for regular retirees. In addition, law enforcement officer retirees are to receive the FERS Special Retirement Supplement until age 62, but the earnings test is not to be applied to the Special Retirement Supplement until the Minimum Retirement Age is reached. An individual is subject to mandatory retirement upon reaching the age of 57 or the completion of 20 years of covered service, if then over that age. The table below shows the annuity accrual rates. The table below reflects selected information from Appendix C of the Office of Personnel Management’s (OPM) July 2004 report to Congress entitled, Federal Law Enforcement Pay and Benefits. The information pertains to selected non-standard pay plans provided to various law enforcement and law enforcement-related personnel as set out in OPM’s report. The following tables relate to attrition at the departmental level for Department of Homeland Security (DHS), Department of Justice (DOJ), and Department of the Treasury as well as government-wide for law enforcement, law enforcement-related, law enforcement special pay, and all other personnel from the Office of Personnel Management’s (OPM) Central Personnel Data File (CPDF). For the purposes of this report, attrition is defined as resignations and transfers from the department of employment. The average attrition rates for each fiscal year were calculated by dividing the sum of the resignations and transfers for a given year by the mean number of employees on the first and last pay period of that fiscal year. 
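The attrition-rate methodology can be sketched in code: each fiscal year's rate is resignations plus transfers divided by the mean of employment at the first and last pay periods of that year, and the overall rate is the employment-weighted average of the yearly rates. The figures in the example are hypothetical.

```python
def fy_attrition_rate(resignations, transfers, emp_first_pp, emp_last_pp):
    """Fiscal-year attrition: separations divided by the mean number of
    employees on the first and last pay periods of that year."""
    mean_employees = (emp_first_pp + emp_last_pp) / 2
    return (resignations + transfers) / mean_employees

def overall_attrition_rate(yearly_rates_and_mean_emp):
    """Overall average: each year's rate weighted by that year's mean
    employment, divided by total mean employment across the years."""
    weighted = sum(rate * emp for rate, emp in yearly_rates_and_mean_emp)
    total_emp = sum(emp for _, emp in yearly_rates_and_mean_emp)
    return weighted / total_emp

# Hypothetical two-year example
r1 = fy_attrition_rate(50, 30, 1_000, 1_200)  # 80 separations / 1,100 mean employees
r2 = fy_attrition_rate(40, 20, 1_200, 1_300)  # 60 separations / 1,250 mean employees
print(f"{overall_attrition_rate([(r1, 1_100), (r2, 1_250)]):.1%}")
```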
The overall average attrition rate was calculated by multiplying each fiscal year’s average attrition rate by that year’s mean number of employees, summing those products, and dividing by the sum of each fiscal year’s mean number of employees. The Office of Personnel Management (OPM) is required to submit an annual report to certain congressional committees on agencies’ use of the retention incentives (as well as recruitment and relocation incentives) authorized in Sections 5753 and 5754 of title 5, United States Code. OPM requested that agencies not only submit a report on their use of retention incentives in each calendar year but also provide comments on any barriers faced in using these incentives. Under Section 5754, with OPM authorization, an agency may provide a retention incentive to certain eligible employees currently in the federal service if the agency deems that the employee’s unusually high or unique qualifications or the agency’s special need for the employee’s services make the employee’s retention essential and that the employee would likely leave the federal service in the absence of the incentive. The retention incentive may not exceed 25 percent of the employee’s annual rate of basic pay (or 10 percent if authorized for a group or category of employees). With OPM approval and a critical agency need, the incentive may reach up to 50 percent. The incentive may be paid as an initial lump-sum payment, in installments during the service period, as a final lump-sum payment, or in some combination; for most payment options, the employee must sign a service agreement. OPM reports that in 2007, 41 of the 97 responding agencies paid a total of 22,794 retention incentives valued at over $127.0 million, with an average incentive of $5,573. In 2006, 47 of the 95 responding agencies paid a total of 17,803 retention incentives valued at over $95.9 million, with an average incentive of $5,388.
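The statutory caps on retention incentives described above can be sketched as a simple calculation; the salary figure is hypothetical.

```python
def max_retention_incentive(basic_pay, group_award=False,
                            opm_critical_need_waiver=False):
    """Caps as described in the text for 5 U.S.C. 5754: 25 percent of
    annual basic pay for an individual (10 percent for a group or
    category), up to 50 percent with OPM approval for a critical need."""
    if opm_critical_need_waiver:
        cap = 0.50
    elif group_award:
        cap = 0.10
    else:
        cap = 0.25
    return basic_pay * cap

pay = 90_000  # hypothetical annual rate of basic pay
print(max_retention_incentive(pay))                                 # 25% individual cap
print(max_retention_incentive(pay, group_award=True))               # 10% group cap
print(max_retention_incentive(pay, opm_critical_need_waiver=True))  # 50% with OPM approval
```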
OPM reports that in 2007, DHS paid a total of 656 retention incentives valued at over $500,000, with an average incentive of $885. In 2006, DHS paid a total of 1,098 retention incentives valued at over $3.3 million, with an average incentive of $3,051. For both calendar years 2006 and 2007, DHS’s average incentives awarded were lower than the average incentives awarded for all reporting agencies. OPM reports that in 2007, DOJ paid a total of 1,528 incentives valued at over $3.9 million, with an average incentive of $2,554, which was lower than the average incentive awarded for all reporting agencies. In 2006, DOJ paid a total of 281 incentives valued at over $2.0 million, with an average incentive of $7,219, which was higher than the average incentive awarded for all reporting agencies. OPM reports that in 2007, the Department of the Treasury paid a total of 118 incentives valued at over $1.8 million, with an average incentive of $15,280. In 2006, the Department of the Treasury paid a total of 95 incentives valued at over $1.0 million, with an average incentive of $11,215. For both calendar years 2006 and 2007, Treasury’s average incentives awarded were higher than the average incentives awarded for all reporting agencies. Along with the submission of incentive usage data, OPM asked that agencies describe how they used the incentives and discuss any perceived barriers to using retention incentives. In general, OPM reports that the agencies used the incentives most often to target specific occupations that present particular retention challenges (a highly competitive market), to resolve retention challenges present in specific locations, and to meet a highly specific staffing challenge. Specifically, DOJ reported to OPM for calendar year 2007 that the Executive Office of United States Attorneys (EOUSA) has found that retention incentives are effective in addressing attrition and shortages in EOUSA key positions. In addition to the contact named above, Steve D.
Morris, Assistant Director, managed this assignment. Elizabeth Dunn, George Erhart, and Meg Ullengren made significant contributions to the work. Geoffrey Hamilton provided significant legal support and analysis. Gregory Wilmoth provided significant assistance with design and methodology, as well as the data analysis from OPM’s Central Personnel Data File. Adam Vogt provided assistance in report preparation, and Ryan D’Amore made contributions to the work during the preliminary phase of the review.

From fiscal years 2000 through 2008, the number of persons employed by federal agencies who perform various law enforcement functions and receive either special pay or enhanced retirement benefits, in the form of a faster-accruing pension, has increased by 55 percent. In addition, as of September 2008, approximately 51,000 personnel were employed in law enforcement-related occupations that could seek enhanced retirement benefits in the future. GAO was asked to conduct a review of the retirement benefits provided to law enforcement personnel. This report addresses (1) the processes used to grant enhanced retirement benefits to federal law enforcement personnel, (2) the rationales and potential costs for extending benefits to additional occupations, and (3) the extent to which federal agencies used human capital tools to retain law enforcement and other related personnel. GAO reviewed relevant laws, regulations, and other documentation, such as agency reports describing the processes used to grant enhanced benefits, and interviewed officials from the Office of Personnel Management (OPM), Department of Homeland Security (DHS), Department of Justice (DOJ), and the Internal Revenue Service (IRS) because these entities employed approximately 84 percent of all law enforcement and law enforcement-related personnel in fiscal year 2008. In commenting on a draft of this report, DHS, DOJ, and OPM generally concurred with the report. IRS stated that it had no comments on the report.
In order for certain employees to receive enhanced retirement benefits, agencies generally determine that a certain group of employees meets the statutory and regulatory definitions of a Law Enforcement Officer (LEO)—which include such activities as conducting investigations—and submit the determination to OPM. As of the end of fiscal year 2008, about half of federal employees receiving enhanced retirement benefits met the statutory and regulatory definitions. In recent years, several employee groups and unions representing law enforcement personnel whose agencies and OPM have determined that they do not meet the LEO definitions have sought such benefits directly through legislation. Currently, about half of law enforcement personnel receiving enhanced benefits have obtained these benefits directly through legislation. Law enforcement-related employee groups that sought enhanced retirement benefits directly through legislation have cited a number of rationales to justify receiving these benefits, including high attrition rates. The provision of such retirement benefits may result in additional costs to the agency and federal government because these costs are generally higher than those of providing retirement benefits to regular federal employees. GAO's analysis of available data showed that attrition for law enforcement-related personnel not receiving enhanced retirement benefits was higher than that for law enforcement personnel receiving such benefits but not as high as that for all other federal employees. While attrition data are available, when asked to provide such data, the employee groups and unions seeking enhanced retirement benefits did not consistently provide it to us. Analyzing attrition data alone may not fully indicate why personnel are leaving a particular agency because a variety of organizational and economic factors, as well as compensation, influence separation decisions.
GAO's analysis also showed that such benefits increase agency short-term costs and could increase the government's long-term pension liability. Finally, providing such benefits to some groups but not others has created perceived inequities, and DHS and DOJ acknowledge that it could affect their strategic workforce planning. Federal agencies have the authority to use human capital tools, such as retention incentives, to assist with their efforts to address specific retention challenges. Some department and agency officials to whom we spoke said these tools are effective for retaining law enforcement personnel, while others maintained they need enhanced retirement benefits to effectively retain law enforcement-related personnel. The targeted use of these tools may present a cost-efficient alternative for retaining law enforcement-related personnel.
Cargo containers are an important segment of maritime commerce. Approximately 90 percent of the world’s cargo moves by container. Each year, approximately 16 million oceangoing cargo containers enter the United States aboard thousands of container vessels. In 2002, approximately 7 million containers arrived at U.S. seaports, carrying more than 95 percent of the nation’s non-North American trade by weight and 75 percent by value. Many experts on terrorism—including those at the Federal Bureau of Investigation and academic, think tank, and business organizations—have concluded that the movement of oceangoing cargo containers is vulnerable to some form of terrorist action. A terrorist incident at a seaport, in addition to killing people and causing physical damage, could have serious economic consequences. In a 2002 simulation of a terrorist attack involving cargo containers, every seaport in the United States was shut down, resulting in a loss of $58 billion in revenue to the U.S. economy, including spoilage, loss of sales, and manufacturing slowdowns and halts in production. CBP is responsible for preventing terrorists and weapons of mass destruction from entering the United States. As part of its responsibility, it has the mission to address the potential threat posed by the movement of oceangoing containers. To perform this mission, CBP has inspectors at the ports of entry into the United States. While most of the inspectors assigned to seaports perform physical inspections of goods entering the country, some are “targeters”—they review documents and intelligence reports and determine which cargo containers should undergo additional documentary reviews and/or physical inspections. These determinations are based not just on concerns about terrorism but also on concerns about illegal narcotics and/or other contraband.
The CBP Commissioner said that the large volume of imports and its limited resources make it impossible to physically inspect all oceangoing containers without disrupting the flow of commerce. The Commissioner also said it is unrealistic to expect that all containers warrant such inspection because each container poses a different level of risk based on a number of factors including the exporter, the transportation providers, and the importer. These concerns led to CBP implementing a layered approach that attempts to focus resources on potentially risky cargo containers while allowing other cargo containers to proceed without disrupting commerce. As part of its layered approach, CBP employs its Automated Targeting System (ATS) computer model to review documentation on all arriving containers and help select or “target” containers for additional documentary review and/or physical inspection. The ATS was originally designed to help identify illegal narcotics in cargo containers. ATS automatically matches its targeting rules against the manifest and other available data for every arriving container, and assigns a level of risk (i.e., low, medium, high) to each container. At the port level, inspectors use ATS, as well as other data (e.g., intelligence reports), to determine whether to inspect a particular container. In addition, CBP has a program, called the Supply Chain Stratified Examination, which supplements the ATS by randomly selecting additional containers to be physically examined. The results of the random inspection program are to be compared to the results of ATS inspections to improve targeting. If CBP officials decide to inspect a particular container, they might first use equipment such as the Vehicle and Cargo Inspection System (VACIS) that takes a gamma-ray image of the container so inspectors can see any visual anomalies. With or without VACIS, inspectors can open a container and physically examine its contents. 
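The decision flow described above, an ATS risk level combined with other information such as intelligence or a random selection, followed by a VACIS scan and possibly a physical exam, can be sketched as follows. The thresholds and actions are a simplified illustration, not CBP's actual procedure.

```python
def inspection_plan(ats_risk, intelligence_hit=False, random_exam=False):
    """Simplified layered-inspection sketch: decide what, if anything,
    to do with an arriving container (illustrative logic only)."""
    if ats_risk == "high" or intelligence_hit or random_exam:
        # A VACIS gamma-ray image lets inspectors look for visual
        # anomalies; with or without it, they may open the container.
        return ["documentary review", "VACIS scan", "possible physical exam"]
    if ats_risk == "medium":
        return ["documentary review"]
    return []  # low risk: container proceeds without disrupting commerce

print(inspection_plan("low"))     # []
print(inspection_plan("medium"))  # ['documentary review']
print(inspection_plan("low", random_exam=True))
```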
Other components of the layered approach include the Container Security Initiative (CSI) and the Customs-Trade Partnership Against Terrorism (C-TPAT). CSI is an initiative whereby CBP places staff at designated foreign seaports to work with foreign counterparts to identify and inspect high-risk containers for weapons of mass destruction before they are shipped to the United States. C-TPAT is a cooperative program between CBP and members of the international trade community in which private companies agree to improve the security of their supply chains in return for a reduced likelihood that their containers will be inspected. Risk management is a systematic process to analyze threats, vulnerabilities, and the criticality (or relative importance) of assets to better support key decisions linking resources with prioritized efforts for results. Risk management is used by many organizations in both government and the private sector. In recent years, we have consistently advocated the use of a risk management approach to help implement and assess responses to various national security and terrorism issues. We have concluded that without a risk management approach that provides insights about the present threat and vulnerabilities as well as the organizational and technical requirements necessary to achieve a program’s goals, there is little assurance that programs to combat terrorism are prioritized and properly focused. Risk management could help to more effectively and efficiently prepare defenses against acts of terrorism and other threats. Key elements of a risk management approach are listed below. Threat assessment: A threat assessment identifies adverse events that can affect an entity, which may be present at the global, national, or local level. Vulnerability assessment: A vulnerability assessment identifies weaknesses in physical structures, personnel protection systems, processes, or other areas that may be exploited by terrorists.
Criticality assessment: A criticality assessment identifies and evaluates an entity’s assets or operations based on a variety of factors, including the importance of an asset or function. Risk assessment: A risk assessment qualitatively and/or quantitatively determines the likelihood of an adverse event occurring and the severity, or impact, of its consequences. Risk characterization: Risk characterization involves designating risk on a scale, for example, low, medium, or high. Risk characterization forms the basis for deciding which actions are best suited to mitigate risk. Risk mitigation: Risk mitigation is the implementation of mitigating actions, taking into account risk, costs, and other implementation factors. Systems approach: An integrated systems approach to risk management encompasses taking action in all organizational areas, including personnel, processes, technology, infrastructure, and governance. Monitoring and evaluation: Monitoring and evaluation is a continuous, repetitive assessment process to keep risk management current and relevant. It includes external peer review, testing, and validation. Modeling can be an important part of a risk management approach. To assess modeling practices related to ATS, we interviewed terrorism experts and representatives of the international trade community who were familiar with modeling related to terrorism and/or ATS and reviewed relevant literature. There are at least four recognized modeling practices that are applicable to ATS as a decision-support tool. Conducting external peer review: External peer review is a process that includes an assessment of the model by independent and qualified external peers. While external peer reviews cannot ensure the success of a model, they can increase the probability of success by improving the technical quality of projects and the credibility of the decision-making process.
Incorporating additional types of information: To identify documentary inconsistencies, targeting models need to incorporate various types of information to perform complex “linkage” analyses. Using only one type of information will not be sufficient to yield reliable targeting results. Testing and validating through simulated terrorist events: A model needs to be tested by staging simulated events to validate it as a targeting tool. Simulated events could include “red teams” that devise and deploy tactics in an attempt to define a system’s weaknesses, and “blue teams” that devise ways to mitigate the resulting vulnerabilities identified by the red team. Using random inspections to supplement targeting: A random selection process can not only help identify and mitigate residual risk (i.e., the risk remaining after the model-generated inspections have been done) but also help evaluate the performance of the model relative to other approaches. CBP has taken several positive steps to address the terrorism risks posed by oceangoing cargo containers. For example, CBP established the National Targeting Center to serve as the national focal point for targeting imported cargo containers and distributing periodic intelligence alerts to the ports. CBP also modified its ATS, which was originally designed to identify narcotics contraband, to include targeting rules for terrorism that could identify high-risk containers for possible physical screening and inspection. In addition, CBP developed a training course for staff responsible for targeting cargo containers. Further, CBP also promulgated regulations aimed at improving the quality and timeliness of transmitted cargo manifest data for use in the targeting system.
However, while its strategy incorporates some elements of risk management, CBP has not performed a comprehensive set of threat, criticality, vulnerability, and risk assessments that experts said are vital for determining levels of risk for each container and the types of responses necessary to mitigate that risk. Regarding recognized modeling practices, CBP has not subjected ATS to external peer review or testing as recommended by the experts we contacted. Further, CBP has implemented a random inspection program designed to improve its targeting rules, but officials at ports can waive the inspections. CBP has recognized the potential threat posed by oceangoing cargo containers and has reviewed and updated some aspects of its layered targeting strategy. According to CBP officials, several of the steps that CBP has taken to improve its targeting strategy have resulted in more focused targeting of cargo containers that may hold weapons of mass destruction. CBP officials told us that, given the urgency to take steps to protect against terrorism after the September 11, 2001, terrorist attacks, they had to take an “implement and amend” approach. That is, they had to immediately implement targeting activities with the knowledge they would have to amend them later. Steps taken by CBP include the following: In November 2001, the U.S. Customs Service established the National Targeting Center to serve as the national focal point for targeting imported cargo for inspection. Among other things, the National Targeting Center interacts with the intelligence community and distributes to the ports any intelligence alerts it receives. The National Targeting Center also assists targeters in conducting research on incoming cargo, attempts to improve the targeting of cargo, and manages a national targeting training program for CBP targeters. In August 2002, CBP modified the ATS as an anti-terrorism tool by developing terrorism-related targeting rules and implementing them nationally.
According to CBP officials responsible for ATS, these targeting rules were developed in consultation with selected intelligence agencies, foreign governments, and companies. CBP is now in the process of enhancing the ATS terrorism-related rules. The newest version of the ATS rules, which is still being tested, gives added risk points when certain rules apply collectively to the same container. CBP refers to this as the “bundling” of rules. In these circumstances, CBP would assume an elevated level of risk for the cargo. Related to this, CBP is currently in the process of developing and implementing further enhancements—known as the “findings module”—to capture additional information related to individual inspections of cargo containers, such as whether an inspection resulted in the discovery of contraband. In 2002, CBP also developed a 2-week national training course to train staff in targeting techniques. The course is intended to help ensure that seaport targeters have the necessary knowledge and ability to conduct effective targeting. The course is voluntary and is conducted periodically during the year at the Los Angeles, Long Beach, and Miami ports, and soon it will be conducted at the National Targeting Center. In fiscal year 2003, approximately 442 inspectors completed the formal training, and CBP plans to train an additional 374 inspectors in fiscal year 2004. In February 2003, CBP began enforcing new regulations about cargo manifests—called the “24-hour rule”—which require the submission of complete and accurate manifest information 24 hours before a container is loaded on a ship at a foreign port. Penalties for noncompliance can include a CBP order not to load a container on a ship at the port of origin or monetary fines. The rule is intended to improve the quality and timeliness of the manifest information submitted to CBP, which is important because CBP relies extensively on manifest information for targeting.
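The rule “bundling” described above can be sketched as a scoring function that adds a bonus when certain rules fire together. The rule names, point values, and thresholds below are invented for illustration; the actual ATS rules and weights are not public.

```python
# Illustrative rule weights and a "bundle" bonus; all values hypothetical.
RULES = {
    "vague_commodity_description": 20,
    "first_time_shipper": 15,
    "unusual_routing": 25,
}
BUNDLES = {
    # Extra points when these rules apply collectively to one container
    frozenset({"vague_commodity_description", "first_time_shipper"}): 30,
}

def score_container(matched_rules):
    """Sum the points of matched rules, add bundle bonuses, and assign a
    low/medium/high risk band (cutoffs are arbitrary illustrations)."""
    score = sum(pts for rule, pts in RULES.items() if rule in matched_rules)
    for bundle, bonus in BUNDLES.items():
        if bundle <= matched_rules:  # every rule in the bundle fired
            score += bonus
    band = "high" if score >= 60 else "medium" if score >= 30 else "low"
    return score, band

print(score_container({"vague_commodity_description"}))  # (20, 'low')
print(score_container({"vague_commodity_description", "first_time_shipper"}))  # (65, 'high')
```

Without the bundle bonus the second container would score only 35; the bonus models the elevated risk CBP assumes when rules apply collectively.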
According to CBP officials we contacted, although no formal evaluations have been done, the 24-hour rule is beginning to improve both the quality and timeliness of manifest information. CBP officials acknowledged, however, that although improved, manifest information still does not always provide accurate or reliable data for targeting purposes. While CBP’s targeting strategy incorporates some elements of risk management, our discussions with terrorism experts and our comparison of CBP’s targeting system to recognized risk management practices showed that the strategy does not fully incorporate all key elements of a risk management framework. Elements not fully incorporated are discussed below. CBP has not performed a comprehensive set of assessments for cargo containers. CBP has attempted to assess the threat of cargo containers through contact with governmental and non-governmental sources. However, it has not assessed the vulnerability of cargo containers to tampering or exploitation throughout the supply chain, nor has it assessed which port assets and operations are the most critical in relation to their mission and function. These assessments, in addition to threat assessments, are needed to understand and identify actions to mitigate risk. CBP has not conducted a risk characterization for different forms of cargo or the different modes of transportation used to import cargo. CBP has made some efforts in this regard by characterizing the risk of each oceangoing cargo container as low, medium, or high risk. But CBP has not performed a risk characterization to assess the overall risk of cargo containers, or determine how this overall risk characterization of cargo containers compares with sea cargo arriving in other forms, such as bulk cargo (e.g., petroleum and chemical gas shipments) or break-bulk cargo (e.g., steel and wood shipments).
Additionally, CBP has not conducted risk characterization to compare the risk of cargo containers arriving by sea with the risk of cargo containers (or other cargo) arriving by other modes, such as truck or rail. These characterizations would enable CBP to better assess and prioritize the risks posed by oceangoing cargo containers and incorporate mitigation activities in an overall strategy. CBP actions at the ports to mitigate risk are not part of an integrated systems approach. Risk mitigation encompasses taking action in all organizational areas, including personnel, processes, technology, infrastructure, and governance. An integrated approach would help assure that taking action in one or more areas would not create unintended consequences in another. For example, taking action in the areas of personnel and technology—adding inspectors and scanning equipment at a port—without at the same time ensuring that the port’s infrastructure is appropriately reconfigured to accept these additions and their potential impact (e.g., more physical examinations of containers), could add to already crowded conditions at that port and ultimately defeat the purpose of the original actions. We recognize that CBP implemented the ATS terrorist targeting rules in August 2002 due to the pressing need to utilize a targeting strategy to protect cargo containers against terrorism, and that CBP intends to amend the strategy as necessary. However, implementing a comprehensive risk management framework would help to ensure that information is available to management to make choices about the best use of limited resources. This type of information would help CBP obtain optimal results and would identify potential enhancements that are well-conceived, cost-effective, and work in tandem with other system components. Thus, it is important for CBP to amend its targeting strategy within a risk management framework that takes into account all of the system’s components and their vital linkages. 
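The risk characterization step discussed above, designating each cargo form on a low/medium/high scale from likelihood and consequence, could be sketched as follows. Every value and cutoff below is a hypothetical placeholder, not an assessment; real inputs would come from the threat, vulnerability, and criticality assessments the report says had not yet been performed.

```python
def characterize(likelihood, consequence):
    """Designate risk on a low/medium/high scale from likelihood (0-1)
    times consequence (0-10); cutoffs are arbitrary for illustration."""
    risk = likelihood * consequence
    if risk >= 3.0:
        return "high"
    if risk >= 1.0:
        return "medium"
    return "low"

# Entirely hypothetical likelihood/consequence inputs for three cargo forms
cargo_forms = {
    "oceangoing container": (0.4, 9),
    "bulk cargo (e.g., petroleum)": (0.2, 8),
    "break-bulk cargo (e.g., steel)": (0.1, 5),
}
for form, (likelihood, consequence) in cargo_forms.items():
    print(f"{form}: {characterize(likelihood, consequence)}")
```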
Terrorism experts and representatives from the international trade community who are familiar with CBP’s targeting strategy and/or terrorism modeling told us in interviews that the ATS is not fully consistent with recognized modeling practices. Challenges exist in each of the four recognized modeling practice areas that these individuals identified: external peer review, incorporating different types of information, testing and validating through simulated events, and using random inspections to supplement targeting. With respect to external review, CBP consulted primarily with in-house subject matter experts when developing the ATS rules related to terrorism. CBP officials told us that they considered these consultations to be an extensive process of internal, or governmental, review that helped adapt ATS to meet the terrorist threat. With a few exceptions, CBP did not solicit input from the extended international trade community or from external terrorism and modeling experts. With respect to the sources and types of information, ATS relies on the manifest as its principal data input, and CBP does not mandate the transmission of additional types of information before a container’s risk level is assigned. Terrorism experts, members of the international trade community, and CBP inspectors at the ports we visited characterized the ship’s manifest as one of the least reliable or useful types of information for targeting purposes. In this regard, one expert cautioned that even if ATS were an otherwise competent targeting model, there is no compensating for poor input data. Accordingly, if the input data are poor, the outputs (i.e., the risk-assessed targets) are not likely to be of high quality. Another problem with manifests is that shippers can revise them up to 60 days after the arrival of the cargo container.
According to CBP officials, about one third of these manifest revisions resulted in higher risk scores by ATS—but by the time these revisions were received, the cargo container may already have left the port. These problems with manifest data increase the potential value of additional types of information. With respect to testing and validation, CBP has not attempted to test and validate ATS through simulated events. The National Targeting Center Director told us that 30 “events” (either real or simulated) are needed to properly test and validate the system, yet CBP has not conducted any such simulations. Without testing and validation, CBP will not know whether ATS is a statistically valid model and the extent to which it can identify high-risk containers with reasonable assurance. The only two known instances of simulated tests of the targeting system were conducted without CBP’s approval or knowledge by American Broadcasting Company (ABC) News in 2002 and 2003. In an attempt to simulate a terrorist smuggling highly enriched uranium into the United States, ABC News sealed depleted uranium into a lead-lined pipe that was placed into a suitcase and later put into a cargo container. In both instances, CBP targeted the container that ABC News used to import the uranium, but it did not detect a visual anomaly from the lead-lined pipe using the VACIS and therefore did not open the container. With respect to instituting random inspections, CBP has a process to randomly select and examine containers regardless of their risk level. The program—the Supply Chain Stratified Examination—originally measured compliance with trade laws, and CBP has refocused it to measure border security compliance. One aspect of this new program is random inspections. However, CBP guidance states that port officials may waive the random inspections if available resources are needed to conduct inspections called for by ATS targeting or intelligence tips.
Accordingly, although the containers targeted for inspection may be randomly selected, the containers actually inspected under the program may not constitute a random sample. Therefore, CBP may not be able to learn all possible lessons from the program and, by extension, may not be in a position to use the program to improve the ATS rules. Our visits to six seaports found that the implementation of CBP’s targeting strategy faces a number of challenges. Specifically, CBP does not have a uniform national system for reporting and analyzing inspection statistics by risk category that could be used for program management and oversight. We also found that the targeters at ports who completed the national training program were not tested and certified, so there is no assurance that they have the necessary skills to perform targeting functions. Further, we found that space limitations and safety concerns constrain the ports in their utilization of screening equipment, which can affect the efficiency of examinations. A CBP official told us that CBP does not have a national system for reporting and analyzing inspection statistics by risk category. While officials at all the ports provided us with inspection data, the data from some ports were generally not available by risk level, were not uniformly reported, were difficult to interpret, and were not complete. In addition, we had to contact ports several times to obtain these data, indicating that basic data on inspections were not readily available. All five ports that gave information on sources of data said they had extracted data from the national Port Tracking System. However, this system did not include information on the number of non-intrusive examinations or physical examinations conducted, according to risk category. Moreover, a CBP headquarters official stated that the data in the Port Tracking System are error-prone, including some errors that result from double counting.
One port official told us that the Port Tracking System was not suitable for extracting the examination information we had requested, so they had developed a local report to track and report statistics. Our findings are consistent with a March 2003 Treasury Department Inspector General report, which found, among other things, that inspection results were not documented in a consistent manner among the ports and that examination statistics did not accurately reflect inspection activities. A CBP official said that the agency is in the process of developing a replacement for the Port Tracking System to better capture enforcement statistics, but this new system is still in its infancy. Separately, CBP officials said that they are trying to capture the results of cargo inspections through an enhancement to ATS called the findings module. A National Targeting Center official stated that the findings module would allow for more consistency in capturing standardized inspection results and would also serve as a management control tool. National Targeting Center officials said that the module would be able to categorize examination results according to the level of risk. A CBP official told us the module was being implemented nationwide in late November 2003. While the ATS findings module shows potential as a useful tool for capturing inspection results, it is too soon to tell whether it will provide CBP management with consistent, complete inspection data for analyzing and improving the targeting strategy. While over 400 targeters have completed the new national targeting training, CBP has no mechanism to test or certify their competence. These targeters play a crucial role because they are responsible for making informed decisions about which cargo containers will be inspected and which containers will be released. According to National Targeting Center officials, the goal is for each U.S.
seaport to have at least one targeter who has completed national targeting training so that the knowledge and skills gained at the training course can be shared with other targeters at their port of duty. To train other staff, however, the targeter who took the training must have attained a thorough understanding of course contents and their application at the ports. Because the targeters who complete the training are not tested or certified on course materials, CBP has little assurance that the targeters can perform their duties effectively or that they can train others to perform effectively. CBP could have better assurance that staff can perform well if it tested or certified their proficiency after they have completed the national targeting training. This would also increase the likelihood that course participants are in a position to effectively perform targeting duties and could train others at the ports on how to target potentially suspicious cargo. Further, it would lessen the likelihood that those who did not do well in class are placed in these important positions. Such testing and certification of targeting proficiency would demonstrate CBP’s intent to ensure that those responsible for making decisions about whether and how to inspect containers have the knowledge and skills necessary to perform their jobs well. One of the key components of the CBP targeting and inspection process is the use of non-intrusive inspection equipment. CBP uses inspection equipment, including VACIS gamma-ray imaging technology, to screen selected cargo containers and to help inspectors decide which containers to further examine. A number of factors constrain the use of non-intrusive inspection equipment, including crowded port terminals, mechanical breakdowns, inclement weather conditions, and the safety concerns of longshoremen at some ports. Some of these constraints, such as space limitations and inclement weather conditions, are difficult if not impossible to avoid.
According to CBP and union officials we contacted, concern about the safety of VACIS is a constraint to using inspection equipment. Union officials representing longshoremen at some ports expressed concerns about the safety of driving cargo containers through the VACIS because it emits gamma rays when taking an image of the inside of the cargo container. Towing cargo containers through a stationary VACIS unit reportedly takes less time and physical space than moving the VACIS equipment over stationary cargo containers that have been staged for inspection purposes. As a result of these continuing safety concerns, some longshoremen are unwilling to drive containers through the VACIS. CBP’s response to these longshoremen’s concerns has been to stage containers away from the dock, arraying containers in rows at port terminals so that the VACIS can be driven over a group of containers for scanning purposes. However, as seaports and port terminals are often crowded, and there is often limited space to expand operations, it can be space-intensive and time-consuming to stage containers. Not all longshoremen’s unions have safety concerns regarding VACIS inspections. For example, at the Port of New York/New Jersey, longshoremen’s concerns over the safety of operating the VACIS were addressed after the union contacted a consultant and received assurances about the safety of the equipment. Similar efforts by CBP to convince longshoremen’s unions of the safety of VACIS have not been successful at some of the other ports we visited. In closing, as part of a program to prevent terrorists from smuggling weapons of mass destruction into the United States, CBP has taken a number of positive steps to target cargo containers for inspection. However, we found that several aspects of its targeting strategy are not consistent with recognized risk management and modeling practices.
CBP faces a number of other challenges in implementing its strategy to identify and inspect suspicious cargo containers. We are now in the process of working with CBP to discuss our preliminary findings and to develop potential recommendations to resolve them. We plan to provide the subcommittee with our final report early next year. This concludes my statement. I would now be pleased to answer any questions from the subcommittee. For further information about this testimony, please contact me at (202) 512-8816. Seto Bagdoyan, Stephen L. Caldwell, Kathi Ebert, Jim Russell, Brian Sklar, Keith Rhodes, and Katherine Davis also made key contributions to this statement. To assess whether CBP’s development of its targeting strategy is consistent with recognized risk management and modeling practices, we compiled a risk management framework and recognized modeling practices, drawn from an extensive review of relevant public and private sector work, prior GAO work on risk management, and our interviews with terrorism experts. We selected these individuals based on their involvement with issues related to terrorism, specifically concerning containerized cargo, the ATS, and modeling. Several of the individuals we interviewed were referred to us from within the expert community, while others were identified from the public record. We did not assess ATS’s hardware or software, the quality of the threat assessments that CBP has received from the intelligence community, or the appropriateness or risk weighting of its targeting rules. To assess how well the targeting strategy has been implemented at selected seaports in the country, we visited various CBP facilities and the Miami, Los Angeles-Long Beach, Philadelphia, New York-New Jersey, New Orleans, and Seattle seaports. These seaports were selected based on the number of cargo containers processed and their geographic dispersion.
At these locations, we observed targeting and inspection operations; met with CBP management and inspectors to discuss issues related to targeting and the subsequent physical inspection of containers; and reviewed relevant documents, including training and operational manuals, and statistical reports of targeted and inspected containers. At the seaports, we also met with representatives of shipping lines, operators of private cargo terminals, the local port authorities, and Coast Guard personnel responsible for the ports’ physical security. We also met with terrorism experts and representatives from the international trade community to obtain a better understanding of the potential threat posed by cargo containers and possible approaches to countering the threat, such as risk management. We conducted our work from January to November 2003 in accordance with generally accepted government auditing standards. Maritime Security: Progress Made in Implementing Maritime Transportation Security Act, but Concerns Remain. GAO-03-1155T. Washington, D.C.: September 9, 2003. Container Security: Expansion of Key Customs Programs Will Require Greater Attention to Critical Success Factors. GAO-03-770. Washington, D.C.: July 25, 2003. Homeland Security: Challenges Facing the Department of Homeland Security in Balancing its Border Security and Trade Facilitation Missions. GAO-03-902T. Washington, D.C.: June 16, 2003. Container Security: Current Efforts to Detect Nuclear Material, New Initiatives, and Challenges. GAO-03-297T. Washington, D.C.: November 18, 2002. Customs Service: Acquisition and Deployment of Radiation Detection Equipment. GAO-03-235T. Washington, D.C.: October 17, 2002. Port Security: Nation Faces Formidable Challenges in Making New Initiatives Successful. GAO-02-993T. Washington, D.C.: August 5, 2002. Homeland Security: A Risk Management Approach Can Guide Preparedness Efforts. GAO-02-208T. Washington, D.C.: October 31, 2001. 
Homeland Security: Key Elements of a Risk Management Approach. GAO-02-150T. Washington, D.C.: October 12, 2001. Federal Research: Peer Review Practices at Federal Science Agencies Vary. GAO/RCED-99-99. Washington, D.C.: March 17, 1999. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

After the attacks of September 11, 2001, concerns intensified that terrorists would attempt to smuggle a weapon of mass destruction into the United States. One possible method for terrorists to smuggle such a weapon is to use one of the 7 million cargo containers that arrive at our nation's seaports each year. The Department of Homeland Security's U.S. Customs and Border Protection (CBP) is responsible for addressing the potential threat posed by the movement of oceangoing cargo containers. Since CBP cannot inspect all arriving cargo containers, it uses a targeting strategy, which includes an automated targeting system. This system targets some containers for inspection based on a perceived level of risk. In this testimony, GAO provides preliminary findings on its assessment of (1) whether CBP's development of its targeting strategy is consistent with recognized key risk management and computer modeling practices and (2) how well the targeting strategy has been implemented at selected seaports around the country. CBP has taken steps to address the terrorism risks posed by oceangoing cargo containers.
These include establishing a National Targeting Center, refining its automated targeting system, instituting a national training program for its personnel who perform targeting, and promulgating regulations to improve the quality and timeliness of data on cargo containers. However, while CBP's strategy incorporates some elements of risk management, it does not include other key elements, such as a comprehensive set of criticality, vulnerability, and risk assessments that experts told GAO are necessary to determine risk and the types of responses necessary to mitigate that risk. Also, CBP's targeting system does not include a number of recognized modeling practices, such as subjecting the system to peer review, testing, and validation. By incorporating the missing elements of a risk management framework and following certain recognized modeling practices, CBP will be in a better position to protect against terrorist attempts to smuggle weapons of mass destruction into the United States. CBP faces a number of challenges at the six ports we visited. CBP does not have a national system for reporting and analyzing inspection statistics, and the data provided to us by ports were generally not available by risk level, were not uniformly reported, were difficult to interpret, and were incomplete. CBP officials told us they have just implemented a new module for their targeting system, but it is too soon to tell whether it will provide consistent, complete inspection data for analyzing and improving the targeting strategy. In addition, CBP staff who received the national targeting training were not tested or certified to ensure that they had learned the basic skills needed to provide effective targeting. Further, space limitations and safety concerns about inspection equipment constrained the ports in their utilization of screening equipment, which has affected the efficiency of examinations.
Discharge permits establish limits on the amounts and types of pollutants that can be released into waterways. Under the Clean Water Act, concentrated animal feeding operations that discharge pollutants to surface waters must obtain permits from EPA or authorized states. However, unlike municipal and most industrial facilities that are allowed to discharge some waste, concentrated animal feeding operations are required to construct and operate facilities that do not release any waste to surface waters, except in extraordinary circumstances. Under EPA’s prior regulations, animal feeding operations could be defined as CAFOs and require discharge permits if they, among other things, (1) had more than 1,000 animal units; (2) had more than 300 animal units and either discharged through a man-made device into navigable waters or discharged directly into waters of the United States that originate outside the facility; or (3) were of any size but had been determined by EPA or the state permitting authority to contribute significantly to water pollution. Under these regulations, a large animal feeding operation did not need a permit if it only discharged during a 25-year, 24-hour storm event—the amount of rainfall during a 24-hour period that occurs on average once every 25 years or more. In addition, the regulations did not generally require permits for chicken operations that use dry manure-handling systems—that is, systems that do not use water to handle their waste. Further, animal wastes that were applied to crop and pastureland were generally not regulated. EPA has authorized 44 states and the U.S. Virgin Islands to administer the discharge permit program for CAFOs. To become an authorized state, the state must have discharge permit requirements that are at least as stringent as those imposed under the federal program, and its program must contain several key provisions.
These provisions include allowing for public participation in issuing permits; issuing permits that must be renewed every 5 years; including authority for EPA and authorized states to take enforcement action against those who violate permit conditions; and providing for public participation in the state enforcement process by either allowing the public to participate in any civil or administrative action or by providing assurance that the state will investigate citizen complaints. According to EPA, public participation in the permitting and enforcement process is critical because it allows the public to express its views on the proposed operations and to assist EPA and state authorities in ensuring that permitted operations remain in compliance. The CAFO program has had two major shortcomings that have led to inconsistent and inadequate implementation by the authorized states. These shortcomings include (1) exemptions in EPA’s regulations that have allowed as many as 60 percent of the largest animal feeding operations to avoid obtaining permits and (2) minimal oversight of state CAFO programs by EPA. Although EPA maintains that it has limited tools to compel states to properly implement the CAFO program, it recently has had limited success in persuading some authorized states to begin issuing discharge permits that include all program requirements. Two exemptions in CAFO regulations have allowed large numbers of animal feeding operations to avoid obtaining discharge permits. However, EPA believes that many of these operations may degrade water quality. The first exemption allowed operations to avoid obtaining discharge permits if they discharge waste only during 25-year, 24-hour rainstorm events. However, based on its compliance and enforcement experience, EPA believes that many of the operations using this exemption should, in fact, have a discharge permit because they are likely discharging more frequently. 
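EPA's doubt about this exemption can be made concrete with a little arithmetic. By definition, a 25-year, 24-hour storm has a 1-in-25 (4 percent) chance of occurring in any given year; assuming for simplicity that years are independent, the chance of at least one such storm over a multi-year horizon is substantial:

```python
# Probability of at least one N-year storm over a multi-year horizon,
# assuming independent years (a simplifying assumption for illustration).
def prob_at_least_one_storm(n_years: int, return_period: int = 25) -> float:
    annual_p = 1 / return_period          # 0.04 for a 25-year event
    return 1 - (1 - annual_p) ** n_years

print(round(prob_at_least_one_storm(5), 2))   # one 5-year permit term: 0.18
print(round(prob_at_least_one_storm(25), 2))  # over 25 years: 0.64
```

So even under the exemption's own definition, a roughly one-in-five chance of a qualifying storm during a single 5-year permit term is hardly a remote event, quite apart from the overfilling, spills, and land-application discharges EPA cited.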
For example, when EPA proposed changes to the CAFO regulations, it stated that operations using this exemption were not taking into consideration discharges that may occur as a result of overfilling the waste storage facility, accidental spills, or improper land application of manure and wastewater. The second exemption allowed about 3,000 confined chicken operations that use dry manure-handling systems to avoid obtaining permits. EPA believes that chicken operations using dry manure-handling systems should obtain permits because EPA and state water quality assessments found that nutrients from confined chicken operations, similar to other large livestock operations, contaminate waters through improper storage, accidental spills, and land application. As a result of these exemptions, we estimate that only about 40 percent (4,500 of 11,500) of confined animal feeding operations currently have discharge permits. In addition, EPA believes about 4,000 smaller animal feeding operations may threaten water quality and may also need to be permitted. According to EPA and state officials, these smaller operations are generally not permitted because federal and state programs have historically focused their limited CAFO resources on regulating only the largest operations. EPA’s limited oversight of the states has contributed to inconsistent and inadequate implementation by the authorized states. In particular, our surveys show that 11 authorized states—with a total of more than 1,000 large animal feeding operations—do not properly issue discharge permits. Although eight of these states issue some type of permit to CAFOs, the permits do not meet all EPA requirements, such as including provisions for public participation in issuing permits. The remaining three states do not issue any type of permit to CAFOs, thereby leaving facilities and their wastes essentially unregulated.
EPA officials believe that most large operations either discharge or have a potential to discharge animal waste to surface waters and should have discharge permits. The two states that lead the nation in swine production illustrate how programs can meet some EPA permit requirements but not others. For example, while Iowa’s permits for uncovered operations (see fig. 1) meet all program requirements, its permits for covered operations (see fig. 2) do not. Contrary to EPA requirements that permits be renewed every 5 years, Iowa issues these permits for indefinite periods of time. While North Carolina issues permits to both covered and uncovered animal feeding operations, these permits do not include all EPA requirements, such as provisions for public participation or allowing for EPA enforcement of the state permit. Michigan and Wisconsin also illustrate how two authorized states with a similar number of animal feeding operations differ in program implementation. According to USDA estimates, both states have over 100 operations with more than 1,000 animal units that could be defined as CAFOs. While Wisconsin had issued 110 permits to these operations, Michigan had not issued any, according to our survey. As a result, waste discharges from facilities in Michigan remained unregulated under the CAFO program. EPA officials acknowledged that until the mid-1990s the agency had placed little emphasis on and directed few resources to the CAFO program and that this inattention has contributed to inconsistent and inadequate implementation by authorized states. Instead, the agency gave higher priority and devoted greater resources to its permit program for the more traditional point sources of pollution—industrial and municipal waste treatment facilities.
However, as EPA’s and the states’ efforts have reduced pollution from these sources, concerns grew in the 1990s that the increasing number of large concentrated animal feeding operations could potentially threaten surface water quality. In response, EPA began placing more emphasis and directing more resources to the CAFO program. As a result, some states that had not previously issued discharge permits began to do so. As shown in figure 3, EPA has historically assigned significantly more personnel resources to the industrial and municipal portions of the NPDES permit program. In the four regions we reviewed, the number of full-time equivalent positions dedicated to the CAFO program has increased since 1997—from 1 to 6 percent—but this increase has, for the most part, been at the expense of the industrial and municipal portions of the permit program. EPA officials told us that due to budget constraints, any increase in resources in one program area requires the reduction of resources in others. In addition to resource constraints, EPA officials say that the agency has little leverage to compel states to issue permits with all required elements because the agency’s primary recourses in such situations are to either (1) withhold grant funding it provides to states for program operations or (2) withdraw the states’ authority to run the entire NPDES permit program, including the regulation of industrial and municipal waste treatment facilities. EPA has been reluctant to use these tools because it maintains that withholding grant funding would further weaken the states’ ability to properly implement the program and EPA does not have the resources to directly implement the permit program in additional states. To date, EPA has never withheld grants or withdrawn a state’s authority. However, EPA has had limited success in persuading some authorized states to begin issuing discharge permits with all EPA requirements. 
For example, Michigan has been an authorized state since 1973, but it only agreed in 2002 to begin issuing discharge permits. This agreement followed an EPA investigation that revealed several unpermitted CAFOs. Similarly, EPA recently persuaded Iowa to increase the issuance of discharge permits to uncovered feedlots. However, to date the agency has not been able to convince the state to issue permits to its covered operations, even though EPA believes these types of operations should also have permits. In 2002, EPA was also successful in persuading three other authorized states—Florida, North Carolina, and South Carolina—to begin issuing discharge permits that meet all program requirements. According to our surveys of the regions and states, EPA’s revised regulations—eliminating the 25-year, 24-hour storm exemption; explicitly including dry-manure chicken operations; and extending permit coverage to include the land application areas under the control of a CAFO—address some key problems of the CAFO program. However, they will also increase EPA’s oversight responsibility and require authorized states to increase their permitting, inspection, and enforcement activities. Furthermore, neither EPA nor the states have planned how they will face these challenges or implement the revised program. EPA’s decision to eliminate regulatory exemptions should strengthen the permit program because the revised regulations will extend coverage to more animal feeding operations that have the potential to contaminate waterways. As previously mentioned, the 25-year, 24-hour storm exemption has proven particularly problematic for EPA and the states because it allowed CAFO operators to bypass permitting altogether. We estimate that eliminating this exemption will require an additional 4,000 large animal feeding operations to obtain permits. According to our survey results, the elimination of this exemption could significantly improve the program.
In addition, EPA’s decision to explicitly require permits for large dry-manure chicken operations will increase the number of permitted facilities by another 3,000. Lastly, CAFO operators are, for the first time, required to either (1) apply for a permit or (2) provide evidence to demonstrate that they have no potential to discharge to surface waters. In addition to eliminating regulatory exemptions, EPA also extended permit coverage to include the application of animal waste to crop and pastureland controlled by the CAFO. Specifically, CAFO operators who apply manure to their land will be required to develop and implement nutrient management plans that, among other things, specify how much manure can be applied to crop and pastureland to minimize potential adverse effects on the environment. CAFO operators will need to maintain the plan on site and, upon request, make it available to the state permit authority for review. Although EPA believes that the revised regulations will improve the CAFO program, the changes will create resource and administrative challenges for the authorized states. We estimate that the revised regulations could increase the number of operations required to obtain permits by about 7,000—from about 4,500 permits currently issued to about 11,500. States will therefore need to increase their efforts to identify, permit, and inspect animal feeding operations and, most likely, will have to increase their enforcement actions. However, many states have not yet identified and permitted CAFOs that EPA believes should already have been covered by the CAFO program. Therefore, the increased permitting requirements could prove to be a daunting task. For example, Iowa has permitted only 32 of its more than 1,000 animal feeding operations that have more than 1,000 animal units.
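The permit estimates above fit together arithmetically; a short sketch (using the report's rounded figures) makes the totals explicit:

```python
# All figures are the report's rounded estimates.
currently_permitted = 4_500        # CAFOs with discharge permits today
storm_exemption_ops = 4_000        # large operations newly covered once the
                                   # 25-year, 24-hour storm exemption ends
dry_manure_chicken_ops = 3_000     # large chicken operations explicitly added

newly_covered = storm_exemption_ops + dry_manure_chicken_ops
total_after_revision = currently_permitted + newly_covered
share_permitted_now = currently_permitted / total_after_revision

print(newly_covered)                  # 7000 additional operations
print(total_after_revision)           # 11500 operations in all
print(round(share_permitted_now, 2))  # 0.39, i.e., "about 40 percent"
```

The last figure matches the report's earlier estimate that only about 40 percent (4,500 of 11,500) of confined animal feeding operations currently hold discharge permits; the roughly 4,000 smaller operations EPA believes may also need permits would come on top of these totals.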
Furthermore, states may need to identify and permit an estimated 4,000 operations with fewer than 1,000 animal units that EPA believes may be discharging. Finally, when states inspect CAFOs, they will need to determine if the operation’s nutrient management plan is being properly implemented. According to state officials, meeting these demands will require additional personnel. However, most of the states we visited cannot hire additional staff and would have to redeploy personnel from other programs. For example, Iowa and North Carolina, two states with a large number of potential CAFOs, each have less than one full-time employee working in the CAFO program. While the burden of implementing the revised regulations will fall primarily on the states, EPA will need to increase its oversight of state programs to ensure that the states properly adopt and implement the new requirements. This oversight effort will be especially important in light of the large number of animal feeding operations that will need permits under the revised regulations. Although most of the regions have not determined precisely what additional resources they will need to adequately carry out their increased responsibilities, EPA officials told us that, like the states, they will have to redeploy resources from other programs. Despite the challenges that EPA and the states will face in implementing the revised CAFO program, they have not yet prepared for their additional responsibilities. According to our survey of 10 EPA regions, the regions and states have not estimated the resources they will need to implement the revised CAFO program. EPA, for its part, has not developed a plan for how it intends to carry out its increased oversight responsibilities under the revised regulations, such as ensuring that authorized states properly permit and inspect CAFOs and take appropriate enforcement action. 
EPA and state officials told us they intend to wait until the revised regulations are issued before they begin planning for their implementation. EPA did not formally consult with USDA when it was developing the proposed CAFO regulations published in January 2001, but the department has played a greater role in providing input for the revised regulations. EPA and USDA developed a joint animal feeding operation strategy in 1998 to address the adverse environmental and public health effects of animal feeding operations. However, USDA’s involvement in developing the proposed CAFO regulations was generally limited to responding to EPA requests for data. USDA officials told us that they were asked to provide substantive comments only after the Office of Management and Budget suggested that EPA solicit USDA’s views. However, USDA officials maintained that they did not have sufficient time to fully assess the proposed regulations and discuss their concerns with EPA before the proposed regulations were published in January 2001. In June 2001, to address USDA concerns, EPA and USDA established an interagency workgroup on the proposed revisions to the CAFO regulations. Under this arrangement, USDA provided technical information that identified how the proposed regulations could adversely affect the livestock industry and suggested alternative approaches that would mitigate these effects. For example, through this interagency workgroup, USDA suggested that EPA consider allowing states greater flexibility in regulating smaller operations. USDA also raised concerns that EPA’s proposed nutrient management plan was not entirely consistent with USDA’s existing comprehensive nutrient management plan and would be confusing to operators. EPA agreed to take these concerns into consideration when it prepared the final revisions to the regulations. 
In July 2001, to further strengthen the cooperative process, EPA and USDA developed Principles of Collaboration to ensure that the perspectives of both organizations are reflected. In essence, the principles recognize that USDA and EPA have clear and distinct missions, authorities, and expertise, yet can work in partnership on issues of mutual concern. To ensure that both EPA and USDA work together constructively, the principles call for EPA and USDA to establish mutually agreeable time frames for joint efforts and provide adequate opportunities to review and comment on materials developed in collaboration prior to public release. According to USDA and EPA officials, this new arrangement has improved the agencies’ working relationship. Although EPA has historically given the CAFO program relatively low priority, it has recently placed greater attention on it as a result of the 1989 lawsuit and the growing recognition of animal feeding operations’ contributions to water quality impairment. The implementation of the CAFO program has been uneven because of regulatory exemptions and the lower priority EPA and the states have assigned to it. Although EPA has had some recent success in persuading states to begin issuing discharge permits that include all program requirements, agency officials say that their ability to compel states to do so is limited. While the revised regulations will help address the regulatory problems, they will also increase states’ burdens for permitting, inspecting, and taking enforcement actions. Because several states have yet to fully implement the previous, more limited, program, EPA will need to increase its oversight of state programs in order to ensure that the new requirements are properly adopted and carried out by the states. EPA and the states have not identified what they will need to do—or the required resources—to carry out these increased responsibilities. 
For example, they have not determined how they intend to accomplish their expanded roles and responsibilities within current staff levels. To help ensure that the potential benefits of the revised CAFO program are realized, we recommend that the Administrator, EPA, develop and implement a comprehensive tactical plan that identifies how the agency will carry out its increased oversight responsibilities under the revised program. Specifically, this plan should address what steps the agency will take to ensure that authorized states are properly permitting and inspecting CAFOs and taking appropriate enforcement actions against those in noncompliance. The plan should also identify what, if any, additional resources will be needed to carry out the plan and how these resources will be obtained. In addition, we recommend that the Administrator work with authorized states to develop and implement their own plans that identify how they intend to carry out their increased permitting, inspection, and enforcement responsibilities within specified time frames. These plans should also address what, if any, additional resources will be needed to properly implement the program and how these resources will be obtained. We provided EPA and USDA with a draft of this report for review and comment. The Director of Animal Husbandry and Clean Water Programs, along with other USDA officials, provided oral comments for USDA. EPA provided written comments. Both agencies expressed agreement with the findings and recommendations in the report. EPA and USDA also provided technical comments that we incorporated into the report as appropriate. EPA’s written comments are presented in appendix II. We are sending copies of this report to the Administrator of the Environmental Protection Agency, the Secretary of Agriculture, appropriate congressional committees, and other interested parties. We will also make copies available to others upon request. 
In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please call me at (202) 512-3841. Key contributors to this report are listed in appendix III. To determine the problems EPA faced in administering the CAFO program and the potential challenges the states and EPA may face when implementing revisions to its CAFO regulations, we surveyed all 10 EPA regional offices. Our survey asked regional officials to provide information on program management and oversight of authorized states’ CAFO programs, resources dedicated to the program, problems EPA has faced administering the program, and the potential challenges the states and EPA might face in implementing revisions to the CAFO program. In addition, we interviewed EPA officials in 4 of the 10 regions. We judgmentally selected the 4 regions that represent 23 states with an estimated 70 percent of large animal feeding operations that could be designated as CAFOs under the revised regulations. Because EPA and most states do not know precisely how many animal feeding operations should have discharge permits, we used USDA’s estimate of the number of potential CAFOs based on livestock type and the number of animals on the farm from the 1997 Census of Agriculture. These regions and their represented states are Region 3–Philadelphia: Delaware, Maryland, Pennsylvania, Virginia, and West Virginia; Region 4–Atlanta: Alabama, Florida, Georgia, Kentucky, Mississippi, North Carolina, South Carolina, and Tennessee; Region 5–Chicago: Illinois, Indiana, Michigan, Minnesota, Ohio, and Wisconsin; and Region 7–Kansas City: Iowa, Kansas, Missouri, and Nebraska. To determine how the 44 authorized states and the U.S. Virgin Islands administer the program and to obtain their views on the challenges they might encounter in implementing the revised regulations, we interviewed program officials in four authorized states—Iowa, North Carolina, Pennsylvania, and Wisconsin. 
We judgmentally selected these states from among the four regions we visited because they have large numbers of confined poultry, swine, and dairy and beef cattle operations. We did not evaluate how EPA directly administers the program in the states and territories not authorized to implement the CAFO program because these states contained less than 5 percent of large CAFOs. EPA administers the program directly because these states have not asked for authority to administer the program. To examine the extent of USDA’s involvement in developing the proposed revisions to EPA’s CAFO regulations, we interviewed officials in USDA’s Natural Resources Conservation Service and EPA. We also observed an EPA and USDA Working Group Meeting on Concentrated Animal Feeding Operations. We conducted our review from January 2002 through October 2002 in accordance with generally accepted government auditing standards. In addition to the individual named above, Mary Denigan-Macauley, Oliver Easterwood, Lynn Musser, Paul Pansini, and John C. Smith made key contributions to this report.

Congress is concerned that waste from animal feeding operations continues to threaten water quality. In light of this concern, GAO was asked to review the Environmental Protection Agency's (EPA) administration of its regulatory program for animal feeding operations and to determine the potential challenges states and EPA may face when they begin to implement the revisions to this program. GAO surveyed all EPA regional offices and four states with large numbers of animal feeding operations that may be subject to EPA regulations. Until the mid-1990s, EPA placed little emphasis on and had directed few resources to its animal feeding operations permit program because it gave higher priority to other sources of water pollution. In addition, regulatory exemptions have allowed many large operations to avoid regulation. 
As a result of these problems, many operations that EPA believes are polluting the nation's waters remain unregulated. Implementation of the revised regulations raises management and resource challenges for the states and the agency. For example, because the number of animal feeding operations subject to the regulations will increase dramatically, states will need to increase their efforts to identify, permit, and inspect facilities and take appropriate enforcement actions against those in noncompliance. For its part, EPA will need to increase its oversight of state programs to ensure that the new requirements are adopted and implemented. Neither the states nor EPA has determined how they will meet these challenges.
To determine why the Erie Weather Service Office (WSO) was spun down before completion of the Secretary of Commerce’s report on 32 areas of concern, we analyzed documents that described the spin-down and reviewed the Secretary’s report. We also discussed the timeline of these events with National Weather Service (NWS) officials. To determine what weather services were provided before and after the Erie office was spun down, we reviewed NWS site implementation plans for the Cleveland, Pittsburgh, and Central Pennsylvania weather offices, and interviewed former employees of the Erie WSO and officials at each of the three weather forecast offices (WFOs). We also discussed the services provided and concerns raised about the quality and types of services with (1) members of Save Our Station, a group dedicated to saving the Erie WSO, (2) Erie television station meteorologists, (3) the National Air Traffic Controllers Association safety representative at Erie International Airport, (4) officials at Presque Isle State Park, Erie, (5) the officer in charge of the U.S. Coast Guard Station in Erie, and (6) emergency management officials and representatives of emergency volunteer organizations, such as Skywarn, in each of the nine counties that constituted the Erie WSO warning area. We reviewed NWS’ responses to concerns raised. We identified safety concerns raised regarding the weather services provided at the Erie airport and obtained NWS’ responses to these concerns through interviews with the National Air Traffic Controllers Association safety representative at Erie International Airport, the manager of the Federal Aviation Administration’s (FAA) Aviation Weather Requirements Division, and NWS officials. To identify concerns raised about small-craft advisories on Lake Erie, we interviewed (1) officials at Presque Isle State Park, (2) the officer in charge of the U.S. Coast Guard station in Erie, (3) the commander of the Greater Erie Boating Association, and (4) members of Save Our Station. 
We reviewed NWS documents relating to aviation weather and the small-craft advisories on Lake Erie and obtained NWS’ responses to safety concerns. To determine if reliable statistical or other evidence existed that addressed degradation of service, we reviewed NWS verification statistics for severe weather events in the nine counties included in the Erie WSO county warning area prior to and after spin-down of the Erie office. We discussed the methodology and process used to develop these statistics, and their reliability, with NWS officials. In addition, we discussed NWS verification statistics and studies with a professor emeritus and an associate professor of meteorology at Pennsylvania State University and also with the chairperson of the Modernization Transition Committee. Further, we reviewed available NWS lake-effect snow study reports. We interviewed the NWS Eastern region team responsible for the lake-effect snow study and the director of the Office of Meteorology at NWS headquarters. In discussions with representatives of Save Our Station, county emergency management directors, and volunteer organizations, we obtained specific examples of weather events that these individuals believed demonstrated evidence of degradation of service. In addition, we reviewed the National Research Council (NRC) report on NWS modernization and the Secretary’s report on 32 areas of concern, with specific reference to radar coverage. 
To understand the ability of NWS’ new radars and other data tools available to forecasters to provide adequate coverage for severe weather event warnings and lake-effect snow, we discussed this topic with NWS officials and the study director of NRC, the chairperson of the Modernization Transition Committee, a member of the Secretary’s report team who was the acknowledged expert on NWS radar, the former chairperson of NRC’s Modernization Committee (who is also a professor emeritus of meteorology), and an associate professor of meteorology at Pennsylvania State University. We performed our work at NWS headquarters in Silver Spring, Maryland; at the NWS Eastern region in Bohemia, New York; at the Cleveland, Pittsburgh, and Central Pennsylvania WFOs; and at the Erie WSO. In addition, we conducted telephone interviews with emergency management officials and emergency volunteers in the Erie WSO county warning area. We performed our work from April to August 1997, in accordance with generally accepted government auditing standards. As agreed with your offices, we did not assess the adequacy of the NWS responses to identified concerns, and we did not assess the adequacy of reports discussed in this report. The Secretary of Commerce provided written comments on a draft of this report. These comments are discussed at the end of this report and are reprinted in appendix II. NWS began a nationwide modernization program in the 1980s to upgrade observing systems, such as satellites and radars, and design and develop advanced forecaster computer workstations. The goals of the modernization are to achieve more uniform weather services across the nation, improve forecasts, provide better detection and prediction of severe weather and flooding, permit more cost-effective operations through staff and office reductions, and achieve higher productivity. 
As part of its modernization program, NWS plans to shift its field office structure from 52 Weather Service Forecast Offices and 204 WSOs to one with 119 WFOs. NWS field offices provide basic weather services such as forecasts, severe weather warnings, warning preparedness, and—where applicable—aviation and marine forecasts. Warnings include “short-fused” events—such as tornadoes, flash floods, and severe storms—and “long-fused” events—such as gales and heavy snow. NWS broadcasts forecasts and warnings over the National Oceanic and Atmospheric Administration’s (NOAA) Weather Radio. NWS offices transmit hourly weather updates and severe weather warnings as they are issued on hundreds of NOAA Weather Radio stations around the country. Warning preparedness includes coordinating with local emergency management, law enforcement agencies, and the media on notification of and response to severe weather events, and training volunteer weather observers to collect and report data under a program commonly called Skywarn. NWS relies heavily on supplemental data provided by Skywarn volunteers’ reports on severe weather events. Under NWS’ restructuring plan, the Erie WSO is slated for closure and has been spun down operationally. When fully functioning, this office’s primary role was to provide severe weather warnings to nine counties in northwestern Pennsylvania, operate an on-site radar, and take surface-condition weather observations. Under the NWS field office restructuring, responsibility for Erie’s nine counties is divided among three WFOs: Erie and Crawford counties are served by the Cleveland WFO; Venango and Forest counties are served by the Pittsburgh WFO; and Cameron, Elk, McKean, Potter, and Warren counties are served by the Central Pennsylvania WFO (located at State College, Pennsylvania). Figures 1 and 2 present maps of the premodernized and modernized office structures for the northwestern Pennsylvania area. 
Under the field office restructuring, the three offices assuming coverage responsibility for Erie’s nine counties have been in the process of installing new systems and equipment, such as new radars, and training staff in using the new technologies. In addition, each office taking on part of Erie’s former responsibilities communicated modernization and restructuring changes to the newly assumed counties’ emergency response community, volunteer weather observers, the media, and the public. Once sufficient systems and staff were in place, the three WFOs—Cleveland, Pittsburgh, and Central Pennsylvania—began assuming responsibility for their respective counties. Erie gradually phased out its routine radar operation; it was responsible for augmenting the Automated Surface Observing System (ASOS) until October 1996, when FAA took over responsibility for this function. Two other NWS changes affected the Erie area but were not part of the spin-down or required for consideration in making an office closure certification; these changes affected the number and type of forecasts issued and the area covered by the forecasts. First, in both the premodernized and modernized environments, the 2-day forecast is broken into four 12-hour periods. However, with access to improved, real-time data from new technology—primarily the new radars implemented as part of the modernization—NWS in 1994 added a short-term forecast, called the Nowcast, which is a 6-hour forecast. The second change NWS implemented during modernization was a reduction in the area covered by its zone forecast. Before modernization, forecast zones (i.e., the areas for which a particular forecast was issued) could include several counties as well as specific localized forecasts for high-population areas. In October 1993, NWS reduced the size of its zones to single counties to allow forecasters to take advantage of improved data and make more specific forecasts and warnings. 
Because of this ability to be more specific, most NWS areas discontinued the localized forecasts for high-population areas. The Weather Service Modernization Act requires that before any office may be closed, the Secretary of Commerce must certify to the Congress that closing the field office will not degrade service to the affected area. This certification must include (1) a description of local weather characteristics and weather-related concerns that affect the weather services provided within the service area, (2) a detailed comparison of the services provided within the service area and the services to be provided after such action, (3) a description of recent or expected modernization of NWS operations that will enhance services in the area, (4) identification of areas within a state that will not receive coverage (at an elevation of 10,000 feet or below) by the modernized radar network, (5) evidence, based upon a demonstration of modernized NWS operations, used to conclude that services will not be degraded from such action, and (6) any report of the Modernization Transition Committee that evaluates the proposed certification. In response to concerns from members of the Congress, the Department of Commerce agreed to take several steps to identify community concerns regarding modernization changes, such as office closures, and study the potential for degradation of service. First, the Department published a notice in the Federal Register in November 1994, requesting comments on service areas where it was believed that premodernized weather services might be degraded by planned modernization changes. Next, the Department contracted with NRC to conduct an independent scientific assessment of proposed modernized radar coverage and consolidation of field offices in terms of the no degradation of service requirement. In addition, NRC established criteria for identifying service areas where the elimination of older radars could degrade services. 
Finally, the Secretary of Commerce applied the NRC criteria to identified areas of concern to determine whether a degradation of service was likely to occur. The resulting report, Secretary’s Report to Congress on Adequacy of NEXRAD Coverage and Degradation of Weather Services Under National Weather Service Modernization for 32 Areas of Concern, was issued in October 1995. NWS started spinning down the Erie WSO by transferring warning responsibilities to the three assuming WFOs in August 1994, before the Department of Commerce began its review of areas of concern. However, Erie community members raised questions in June 1994, several months before Erie was identified as one of the areas of concern through the Federal Register process. NWS continued with its plans to spin down the office because officials believed they would be providing the best service to the area by relying on modernized radars in other offices. Erie continued surface observations and radar operations until October 1996 and March 1997, respectively. The starting point for the Department of Commerce study of areas of concern was the November 1994 Federal Register announcement soliciting concerns about NWS modernization and restructuring plans. In February 1995, Erie was identified as 1 of 32 areas of concern. The Department of Commerce reviewed the 32 areas between June and August 1995, and issued its report in October 1995. The report concluded that, with the exception of lake-effect snow, the assuming WFOs would be able to detect severe weather phenomena over northwestern Pennsylvania. In addition, the report recommended that NWS (1) compare the adequacy of the assuming WFOs’ new radars and other data sources with Erie’s old radar in identifying lake-effect snow over a 2-year period and (2) transmit data from Erie’s radar to nearby WFOs to support the lake-effect snow study and facilitate the continued spin-down of the Erie office. 
The three weather offices that assumed responsibility for the counties formerly served by the Erie WSO provide generally the same types of services that the Erie office had provided, with the exception of the general public’s local or toll-free telephone access to NWS personnel. The general public in the nine counties must now call long-distance to contact the Cleveland, Central Pennsylvania, and Pittsburgh WFOs. Services for Erie and Crawford counties are now provided entirely by the Cleveland WFO. There are few changes to the services that were provided by the Erie WSO. The primary changes are the discontinuance of the localized forecast for the city of Erie and the addition of the Nowcast. As noted before, localized forecasts were discontinued because of changes in the size and detail of zone forecasts. Another significant change is the transfer of ASOS augmentation to FAA. This relieves NWS of maintaining staff on-site to take observations. Table 1 presents a detailed comparison of the services provided to Erie and Crawford counties before and after spin-down. The Pittsburgh WFO now provides all services to Venango and Forest counties with the exception of issuing NOAA Weather Radio reports and updates. Changes in services to these counties are minimal, as Pittsburgh was already providing many services to these areas. The only significant change is the addition of the short-term forecast—the Nowcast—which was not provided before modernization. Table 2 presents a detailed comparison of services provided before and after spin-down. Services for Cameron, Elk, McKean, Potter, and Warren counties are now provided mostly by the Central Pennsylvania WFO. Since this office is not yet fully staffed, forecasting and long-fused warning services are still provided by Pittsburgh. Again, with the exception of the Nowcast, no major changes have occurred for these counties. Since many of these counties are mountainous, NOAA Weather Radio service does not reach all areas. 
NWS believes service will be improved when additional transmitters are installed in fiscal year 1998. The Central Pennsylvania and Pittsburgh WFOs will program these transmitters. Table 3 presents a detailed comparison of services provided before and after spin-down. Many concerns have been raised about the specific services NWS provides, as well as about the quality of those services. Most concerns had been brought to NWS’ attention, and NWS provided responses to them. Other concerns brought to our attention either had not been reported to NWS or had not received an official NWS response. We discussed these concerns with NWS officials and received their responses. The most common concern—voiced by almost every individual we spoke with—was with the ability of distant radars to detect all types of weather phenomena. Table 4 presents concerns raised by users in Erie and Crawford counties and NWS’ responses. The primary concern voiced by users in five of the seven counties now served by the Central Pennsylvania and Pittsburgh WFOs was the ability of distant radars to provide adequate coverage for severe weather phenomena in order to issue accurate and timely forecasts and warnings. Some users in counties at the fringes of radar coverage questioned NWS’ ability to track approaching severe weather outside the range of an office’s radar. NWS’ responses to these concerns were to assure county officials and residents that the new radars and other components of the modernization, such as satellites and improved weather models, would enable NWS to provide better service to their areas. Furthermore, WFOs can access radar data from nearby WFOs. For example, if a severe storm were moving eastward into northwestern Pennsylvania, Central Pennsylvania and Pittsburgh staff would likely access data from Cleveland’s radar to help determine the path and intensity of the event. 
One individual expressed concern that during severe weather events, there might not be sufficient staff to operate the amateur radio equipment, which is used to communicate with Skywarn volunteers. According to NWS, there are licensed amateur radio operators on staff. However, if licensed staff are not available during severe events, NWS can call on volunteers to help operate the equipment. These concerns seem to have been allayed, as most officials told us that service provided by the new offices is at least equal to the service provided before modernization. A few concerns have been raised regarding weather services provided at the Erie International Airport and the timeliness of small-craft advisories for Lake Erie. The most commonly cited concern was with ASOS, which has been the subject of much scrutiny since its nationwide deployment. We reported on several ASOS issues in 1995, such as specific sensor problems and the system’s difficulty reporting actual, prevailing conditions in rapidly changing or patchy weather conditions. NWS has implemented modifications to address sensor problems and, in some places, including Erie, added sensors to better report representative observations. In addition, since ASOS does not replace all human observations, human observers must continue to take manual observations at airports such as Erie to supplement the system (this process is called augmentation) and correct the system when it is not accurately reporting current conditions. Under an NWS/FAA interagency agreement, FAA accepted augmentation responsibility for the Erie ASOS in October 1996. At that point, NWS weather observers were discontinued at Erie and air traffic controllers became responsible for augmenting ASOS observations and correcting the system when it reported inaccurate conditions. Concerns remain about whether this ASOS augmentation responsibility is too much for air traffic controllers. 
FAA recognizes these concerns and has sponsored an independent study of the impact of ASOS augmentation. According to the manager of FAA’s Aviation Weather Requirements Division, a report is expected in the fall of 1997. Table 5 presents specific safety concerns raised and NWS responses. There are several sources of evidence that address whether a degradation of service has occurred in the Erie area. NWS’ statistical verification program collects performance data on the issuance of forecasts and warnings and provides information necessary to compare “premodernized” and “modernized” performance. Overall, data for the former nine-county Erie WSO area show an improvement in service under the three WFOs. Studies by NRC and the Department of Commerce analyzed the ability of the new radars and other components of the modernization to detect certain weather phenomena and assessed the potential for degradation of weather services in the Erie area. NRC concluded that the ability to detect three severe weather phenomena, including lake-effect snow, was questionable. The Department of Commerce’s study expanded on NRC’s work and concluded that lake-effect snow was the only phenomenon that remained a concern. NWS is completing a 3-year study of its ability to detect and predict lake-effect snow in the Great Lakes area, which includes northwestern Pennsylvania. Since the 1980s, NWS has assessed the accuracy and timeliness of its severe weather warnings and public and aviation forecasts through a statistical verification program. The verification process includes determining the accuracy of the forecast elements of maximum and minimum temperature and probability of precipitation. Several elements of the aviation forecasts are likewise verified. Severe weather warnings are verified by determining whether an event for which a warning was issued occurred. 
The elements calculated for warning verification are probability of detection (i.e., NWS’ ability to detect weather events—the higher the probability, the better the performance), false alarm rate, and lead time. If a warning was issued but a severe weather event did not occur, a higher false alarm rate results. If a severe weather event occurred without a warning, the probability of detection goes down. Warning and forecast verification statistics historically have been used to help weather office managers determine trends in performance and identify areas needing improvement. With modernization, the statistics are included in the certification package as support either for or against a determination of degradation of service. NWS officials stressed, however, that verification statistics are not the most important component of the no-degradation assessment. Rather, they said, they rely most heavily on feedback from users to determine satisfaction with the level of service being provided and whether degradation has occurred. The verification statistics for the nine former Erie office counties show an overall improvement to the area in warning service. Appendix I presents the warning verification data for the nine-county area. The statistics also show slight improvement for public forecast service. The aviation forecast verification statistics show a negligible decline from .33 to .32, on a scale from 0 to 1 with 1 being the best performance. NWS officials cautioned that there are limitations to the verification program and resulting data. For example, since the number and type of weather events vary from year to year, it is impossible to directly compare performance from one year to another. In addition, it is more difficult to verify events in sparsely populated areas. Finally, NWS officials acknowledged that severe weather warning verification procedures vary across offices. 
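The probability-of-detection and false-alarm-rate elements described above follow standard contingency-table definitions. The minimal sketch below uses illustrative counts, not NWS data, to show how the two measures move as warnings are missed or issued needlessly.

```python
# Standard warning-verification measures (contingency-table definitions).
# The counts used in the example are hypothetical, chosen only to
# illustrate the arithmetic described in the report.

def probability_of_detection(hits: int, misses: int) -> float:
    """Fraction of severe weather events for which a warning was issued.

    Each unwarned event (a miss) pulls this value down.
    """
    return hits / (hits + misses)

def false_alarm_rate(hits: int, false_alarms: int) -> float:
    """Fraction of issued warnings for which no event occurred.

    Each warning with no event (a false alarm) pushes this value up.
    """
    return false_alarms / (hits + false_alarms)

# Example: 40 events warned, 10 events unwarned, 10 warnings with no event.
pod = probability_of_detection(hits=40, misses=10)   # 40/50 = 0.8
far = false_alarm_rate(hits=40, false_alarms=10)     # 10/50 = 0.2
print(f"POD={pod:.2f}, FAR={far:.2f}")
```

On both scales the range is 0 to 1; a higher probability of detection and a lower false alarm rate indicate better performance, which is why the two statistics are reported together.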
In August 1994, the Department of Commerce contracted with NRC to study NWS’ modernized radar network coverage and identify any gaps that could result in a degradation of weather service. In addition, NRC was to develop criteria for the Department to use in determining the potential for degradation of service in those areas of concern identified through the public comment process. In June 1995, NRC issued its report, Toward a New National Weather Service: Assessment of NEXRAD Coverage and Associated Weather Services. Overall, NRC concluded that weather services on a national basis would be improved substantially under the new radar network. For example, compared with the old radar network, the modernized radar network will cover a much broader area of the contiguous United States and provide greater coverage for detecting specific severe weather phenomena, such as supercells, mini-supercells, and macrobursts. NRC also noted that the new radars are just one element in a composite weather system that includes satellites, automated surface observing equipment, wind profilers, improved numerical forecast models, and cooperative networks of human observers and spotters. NRC cautioned, however, that at old radar sites where radar coverage is to be provided by a new radar some distance away, there is the potential for degradation in radar-detection coverage capability. In particular, northwestern Pennsylvania was one such area with degraded radar coverage for macrobursts, mini-supercells, and lake-effect snow. NRC recommended NWS study the area to determine whether the degraded radar coverage would result in a degradation of weather service. Figure 3 shows the approximate gap in radar coverage for lake-effect snow over northwestern Pennsylvania. 
As agreed with concerned members of the Congress, the Department of Commerce used NRC’s criteria to evaluate the potential for degradation in the 32 areas identified via the Federal Register process and assessed the potential for degradation of service for the radar gaps identified in NRC’s report. The Secretary’s team conducted additional research into the capabilities of the new radars and found that the effective range of detection was greater than estimated by NRC. Specifically, the team concluded that the new radars serving the former Erie WSO area would be able to detect macrobursts and mini-supercells for northwestern Pennsylvania. It was still clear, however, that the radars could not adequately detect some lake-effect snow events in the Erie area. Therefore, the Secretary’s team recommended that NWS compare the adequacy of the assuming WFOs’ new radars and other data sources with Erie’s old radar in identifying lake-effect snow over a 2-year period to determine how well the composite weather system could help detect and predict lake-effect snow over the area in question. In addition, the report recommended that NWS keep the Erie radar (an older vintage) operational until the results of the study were compiled, which was done. NWS began a lake-effect snow study in November 1994, 1 year before the Secretary’s team recommended that a similar assessment be done. NWS initiated the study to improve its ability to detect and predict lake-effect snow, as well as in response to concerns raised by congressional staff and residents of northern Indiana and northwestern Pennsylvania; these areas were scheduled to lose old radars and, instead, receive coverage from more distant but modernized radars. The goal of the study was to find ways of improving the warning and forecast services associated with lake-effect snow events. 
In response to the Secretary’s team’s recommendation, however, another goal was added to this study—to determine whether lake-effect snow detection would be degraded over northwestern Pennsylvania if the Erie radar and office were shut down. Data on lake-effect snow were collected over the three winter seasons between 1994 and 1997. While the broad study area included all areas in New York, Pennsylvania, Ohio, and Indiana that experience lake-effect snow, a seven-county area was established surrounding Erie on which more specific analysis would be performed. After each winter season, a data report was issued by NWS. These reports conclude that NWS has made significant progress in improving its ability to detect and forecast lake-effect snow; however, there are still questions about the level of this service being provided to northwestern Pennsylvania. For example, NWS’ Eastern Region reported that for about 35 percent of lake-effect snow events, the composite weather system will be insufficient to compensate for the degradation in radar coverage over northwestern Pennsylvania. In addition, this report stated that NWS is not able to provide detailed, short-term forecasts (Nowcasts) during lake-effect snow events as it can for other areas that have better radar coverage. The Eastern Region’s report and the director of NWS’ Office of Meteorology point out, however, that this problem does not constitute a degradation of service because the probability of detection for lake-effect snow in the seven-county study area has improved since 1993. Even though degradation has not occurred, according to the Eastern Region report and the director, this level of service is still unacceptable because lake-effect snow is the Erie area’s most severe weather condition and the community does not receive the same level of service that other lake communities receive. 
As a result, the Eastern Region report recommended that a radar be installed to provide better coverage for this severe weather phenomenon in northwestern Pennsylvania. The director of the Office of Meteorology agrees with this recommendation, but points out that since data from this new radar would be transmitted to existing WFOs, an additional weather office is not needed in the Erie area. NWS’ final report of the lake-effect snow study is expected this fall. Any conclusions and recommendations from the lake-effect snow study will be reviewed by the Secretary’s team, which will make recommendations to the Secretary regarding specific actions to be taken. Once the results of the lake-effect snow study are finalized and actions taken to address degradation concerns, if any, NWS officials told us they will pursue closure certification for the Erie office. In commenting on a draft of this report, the Department of Commerce took no exceptions to the information presented and acknowledged that we had conducted thorough work in researching the issues and preparing the report. The Department reiterated that, after NOAA presents the Secretary’s team with the results of the lake-effect snow study, it will review and evaluate the findings, conclusions, and recommendations and determine the need for a radar in northwestern Pennsylvania. The Department’s written response is reprinted in appendix II. As agreed with your offices, unless you publicly announce the contents of this report earlier, we will not distribute it until 10 days from the date of this letter. At that time we will send copies to the Ranking Minority Member, House Committee on Science, and the Chairmen and Ranking Minority Members of the Senate Committee on Commerce, Science, and Transportation; House and Senate Committees on Appropriations; House Committee on Government Reform and Oversight; and Senate Committee on Governmental Affairs; and to the Director, Office of Management and Budget. 
We are also sending copies to Senators Arlen Specter and Rick Santorum; Congressman John Peterson; the Secretary of Commerce; the Administrator, National Oceanic and Atmospheric Administration; and the Acting Director of the National Weather Service. Copies will be made available to others upon request. Please contact me at (202) 512-6408 if you or your staffs have any questions concerning this report. I can also be reached by e-mail at [email protected]. Major contributors to this report are listed in appendix III. [Appendix I: warning verification lead-time (minutes) tables omitted.] Keith A. Rhodes, Technical Director; Mark E. Heatwole, Assistant Director; Patricia J. Macauley, Information Systems Analyst-in-Charge; J. Michael Resser, Business Process Analyst; Michael P. Fruitman, Communications Analyst. 
Pursuant to a congressional request, GAO examined how the National Weather Service (NWS) had implemented modernization and restructuring activities in northwestern Pennsylvania, focusing on identifying: (1) why the Erie, Pennsylvania, weather service office (WSO) was spun down prior to the Department of Commerce's October 1995 report on 32 areas of concern; (2) what types of services were provided to the counties served by the Erie office before and after office spin-down, as well as what public concerns have been raised, and how NWS responded to them; (3) what safety concerns have been raised regarding weather services at the Erie airport and the timeliness of small-craft advisories for Lake Erie, including how NWS responded to public concerns about these issues; and (4) whether any reliable statistical or other evidence exists that addresses whether a degradation of service in the Erie area has occurred as a result of the modernization and office restructuring. GAO noted that: (1) NWS started spinning down the Erie WSO by transferring warning responsibilities to the three assuming Weather Forecast Offices (WFO) in August 1994 before the Department of Commerce began its review of the 32 areas of concern in June 1995; (2) concerns about the Erie office closure, however, were made known as early as June 1994; (3) NWS continued with its plans to spin down the office because officials believed that they would be providing the best service to the area by relying on modernized radars in other offices; (4) the three WFOs that assumed responsibility for the counties formerly served by the Erie WSO provide generally the same types of services that the Erie office had provided, with the exception of the general public's local or toll-free telephone access to NWS personnel; (5) the major concerns surrounding the transfer of responsibilities relate to whether radar coverage over the counties formerly served by Erie would be adequate, and whether forecasts and warnings are at 
least equal in accuracy and timeliness to those previously issued by Erie; (6) NWS responses to such concerns include analyzing its ability to detect severe weather phenomena over northwestern Pennsylvania, as well as providing data on how well the assuming offices are issuing forecasts and warnings; (7) a few concerns also have been raised regarding NWS service to the Erie airport and the timeliness of small-craft advisories for Lake Erie; (8) the most commonly voiced concern regarded an automated surface observing system (ASOS) and requirements for air traffic controllers to augment it with human observations; (9) the Federal Aviation Administration (FAA) has sponsored a study of the impact of its augmentation responsibilities at airports such as Erie and will be issuing a report in the fall of 1997; (10) several studies present evidence that a degradation in service has not occurred in northwestern Pennsylvania; however, the ability to detect and predict lake-effect snow remains a concern; (11) NWS is completing a lake-effect snow study to determine the effectiveness of the modernized weather system in detecting and forecasting lake-effect snow; (12) the Director of NWS' Office of Meteorology told GAO that he will recommend a radar for the Erie area; and (13) however, NWS has not yet taken a position on the need for a radar, and the Secretary of Commerce is scheduled to make the final decision on any action to be taken in northwestern Pennsylvania.
Available studies and credit reporting industry data disagree on the extent of errors in credit reports. The limited literature on credit report accuracy indicated high rates of errors in credit report data. In contrast, the major CRAs and CDIA stated that they did not track errors specifically but that the data the credit industry maintained suggested much lower rates of errors. Both the literature and the data provided by the credit industry had serious limitations that restricted our ability to assess the overall level of credit reporting accuracy. Yet, all of the studies identified similar types and causes of errors. While data provided by the credit industry did not address type and cause of errors, representatives from the three major CRAs and CDIA cited types and causes similar to those cited in the literature. The credit industry has developed and implemented procedures to help ensure accuracy of credit report data, although no one has assessed the efficacy of these procedures. Moreover, FTC tracks consumer disputes regarding the accuracy of information in credit reports and has taken eight enforcement actions directly or indirectly involving credit report accuracy since 1996. We identified three studies completed after the 1996 FCRA amendments that directly addressed credit report accuracy, and one that indirectly addressed the topic. One of these reports, published in December 2002 by Consumer Federation of America, presents the frequency and types of errors drawn from files requested by mortgage lenders on behalf of consumers actively seeking mortgages. The Consumer Federation of America initially reviewed 1,704 credit files representing consumers from 22 states and subsequently re-examined a sample of 51 three-agency merged files. In this sample of merged files, the study found wide variation in the information maintained by the CRAs, and that errors of omission were common in credit reports. 
For example, the report stated that about: 78 percent of credit files omitted a revolving account in good standing; 33 percent of credit files were missing a mortgage account that had never been late; 67 percent of credit files omitted other types of installment accounts that had never been late; 82 percent of the credit files had inconsistencies regarding the balance on revolving accounts or collections; and 96 percent of the credit files had inconsistencies regarding an account’s credit limit. A March 1998 U.S. Public Interest Research Group (U.S. PIRG) study found similar frequencies of errors in 133 credit files representing 88 individual consumers. U.S. PIRG reported that 70 percent of the files reviewed contained some form of error. The errors ranged in severity from those unlikely to have negative repercussions to those likely to cause a denial of credit. For example, the report found: 41 percent of the credit files contained personal identifying information that was long-outdated, belonged to someone else, was misspelled, or was otherwise incorrect; 29 percent of the credit files contained an error—accounts incorrectly marked as delinquent, credit accounts that belonged to someone else, or public records or judgments that belonged to someone else—that U.S. PIRG stated could possibly result in a denial of credit; and 20 percent of the credit files were missing a major credit card account, loan, mortgage, or other account that demonstrated the creditworthiness of the consumer. Similar to the U.S. PIRG study, a 2000 survey conducted by Consumers Union and published by Consumer Reports asked 25 Consumers Union staffers and their family members to apply for their credit reports and then review them. In all, Consumers Union staff and family members received and evaluated 63 credit reports, and in more than half of the reports, they found inaccuracies that they reported as having the potential to derail a loan or deflect an offer for the lowest-interest credit card. 
The inaccuracies identified were similar to those reported by the Consumer Federation of America and U.S. PIRG—inclusion of information belonging to other consumers, inappropriately attributed debts, inaccurate demographic information, and inconsistencies between the credit reports provided by the three major CRAs regarding the same consumer. While not specifically assessing the accuracy of credit reports, a Federal Reserve Bulletin article found that credit reports contained inconsistencies and cited certain types of data furnishers, including collection agencies and public entities, as a primary source for some of the inconsistencies found. Among the study’s findings: approximately 70 percent of the consumers in the study’s sample had a missing credit limit on one or more of their revolving accounts; approximately 8 percent of all accounts showed positive balances but were not up to date; between 1 and 2 percent of the files were supplied by creditors that reported negative information only; and public records inconsistently reported actions such as bankruptcies and collections. An important aspect of the Federal Reserve study was that it used a statistically valid and representative sample of credit reports, and received access to this sample with the cooperation of one of the three major CRAs. However, because the sample came from one CRA only, the findings of the study may not be representative of other CRAs. Representatives of the three major CRAs and CDIA told us that they do not maintain data on the frequency of errors in credit reports. However, the industry does maintain data that suggest errors are infrequent in cases of an adverse action. CDIA stated that the three major CRAs provided or disclosed approximately 16 million credit reports, out of approximately 2 billion reports sold annually in the marketplace. 
According to CDIA data, 84 percent of the disclosures followed an adverse action and only 5 percent of disclosures went to people who requested their reports out of curiosity. Out of these disclosures, CRA officials stated that an extremely small percentage of people identified an error. An Arthur Andersen study, conducted in 1992, found a similarly infrequent rate of errors arising from adverse actions. Commissioned by the Associated Credit Bureaus (now CDIA), the study reportedly found that only 36 consumers—out of a sample of 15,703 people denied credit—disputed erroneous information that resulted in a reversal of the original negative credit decision. Similarly, in an attempt to respond to our data request, CDIA produced data gathered by a reseller over a two-week period that indicated that out of 189 mortgage consumers, only 2 consumers (1 percent) had a report that contained an inaccuracy. In our conversations with data furnishers, we discovered that two conduct internal audits on the accuracy of the information they provide to the CRAs. These data furnishers indicated that the information they provide and the CRAs maintain is accurate 99.8 percent of the time. While consumer disputes do not provide a reliable measure of credit report accuracy, CRA representatives told us that disputes provide an indicator of what people perceive as errors when reviewing their credit files. A CDIA official stated that five types of disputes comprise about 90 percent of all consumer disputes received by the three major CRAs. These dispute types include claims that an account has been closed; disputes of present or previous account status or payment history; disputes related to the disposition of an account included in or excluded from bankruptcy; and “not my account.” Although CDIA could not provide a definitive ranking for all five types of disputes, it did state that “not my account” was the most frequently received dispute. 
After receiving a consumer’s dispute, FCRA requires a CRA to conduct a reinvestigation. The purpose of reinvestigation is either to verify the accuracy of the disputed information, or to confirm and remove an error. CDIA provided data, by disposition category, on the dispute reinvestigations received by the three major CRAs in 2002. CRA officials explained that the data represent the first 3 quarters of 2002, and that each CRA reported data on a different quarter. CDIA declined to provide the total number of consumer disputes. Table 1 shows the frequency of these four disposition categories. Specifically, the table indicates that over half of all disputes required the CRA to modify a credit report in some way, though not necessarily to remove an error. It is important to emphasize that not every dispute leads to identifying an error. Indeed, many disputes, as the table indicates, resulted in a verification of accuracy or an update of existing information. Additionally, CRA and CDIA representatives stated that many disputes resulted in the CRA clarifying or explaining why a piece of information was included in the credit report. For example, if recently married consumers obtained a copy of their files, they might not see their married names on file. In such cases, the files still accurately reflected the most current information provided to the CRA, but the consumer may have perceived the less-than-current information as an error while the CRA would not. The CRA representative cited another example of a consumer seeing an account listed with a creditor he or she did not recognize. However, the account in question was with a retailer that subsequently outsourced its lending to another company. In this case, the information was correct but the consumer was not aware of the outsourcing. 
One CRA representative indicated that over 50 percent of the calls they received resulted in what they consider “consumer education.” We cannot determine the frequency of errors in credit reports based on the Consumer Federation of America, U.S. PIRG, and Consumers Union studies. Two of the studies did not use a statistically representative methodology because they examined only the credit files of their employees who verified the accuracy of the information, and it was not clear if the sampling methodology in the third study was statistically projectable. Moreover, all three studies counted any inaccuracy as an error regardless of the potential impact. Similarly, the studies used varying definitions in identifying errors, and provided sometimes obscure explanations of how they carried out their work. Because of this, the findings may not represent the total population of credit reports maintained by the CRAs. Moreover, none of these groups developed their findings in consultation with members of the credit reporting industry, who, according to a CDIA representative, could have verified or refuted some of the claimed errors. Beyond these limitations, a CDIA official stated that these studies misrepresented the frequency of errors because they assessed missing information as an error. According to CRA officials errors of omission may be mitigated in certain instances because certain lenders tend to use merged credit report files in making lending decisions, such as mortgage lenders and increasingly credit card lenders. CRA officials explained that while complete and current data are necessary for a wholly accurate credit file, both are not always available to them. For instance, credit-reporting cycles, which dictate when CRAs receive data updates from data furnishers, may affect the timeliness of data. CRAs rely on these updates, which may come daily, weekly, or monthly depending on the data furnisher’s reporting cycle. 
If a data furnisher provided information on a monthly basis, there would be a lag between a consumer’s payment, for example, and the change in credit file information. Likewise, if a data furnisher reported to one CRA but not to another, the two reports would differ in content and could produce different credit scores. It is important to note that reporting information to the CRAs is voluntary on the part of data furnishers. While the Federal Reserve Bulletin article noted inconsistencies as an area of concern, it recognized that all credit reports would not contain identical information. Along with misrepresenting error frequency by counting omitted information, industry officials believed that the literature misrepresented the frequency of errors because the literature defined errors differently than the credit industry. The CRAs and CDIA stated that they consider only those errors that could have a meaningful impact on a person’s creditworthiness as real errors. This distinction is critical to assessing accuracy, as, according to the CDIA, a mistake in a consumer’s name might literally be an inaccuracy, but may ultimately have no impact on the consumer. The data provided by CDIA and the CRAs have serious limitations as well. For example, neither CDIA nor CRA officials provided an explanation of the methodology for the collection of data provided by CDIA and for the assessments cited by the CRAs. Moreover, because these data related primarily to those errors that consumers disputed after an adverse action, they excluded a potentially large population of errors. Specifically, these data excluded errors that would cause a credit grantor to offer less favorable terms on a loan rather than deny the loan application. The data also excluded errors in cases where consumers were not necessarily seeking a loan and therefore did not have a need to review their credit reports. 
Additionally, as stated earlier, only a small percentage of consumers requested credit reports simply out of curiosity. While the CDIA representatives felt that these data were useful for assessing a level of accuracy, they agreed that by focusing on these data only, the industry did not consider a potentially large set of errors. While both the literature and credit industry representatives cited similar types and causes of errors, neither the literature nor the credit industry data identified one particular type or cause of error as the most common. All respondents stated that error type could range from wrong names and incorrect addresses to inaccurate account balances and erroneous information from public records. Based on the literature we reviewed and on our discussions with CRA and data furnisher officials, we could not identify any one cause or source most responsible for errors. However, the Consumer Federation 2002 study, the Federal Reserve Bulletin article, and a representative from the National Foundation for Credit Counseling stated they felt data furnishers often caused more errors than did CRAs or consumers. According to several respondents, this was particularly true for data furnishers, such as collection agencies and public entities that did not rely on accurate credit reports for lending decisions. For example, while a bank needs accurate information in assessing lending risk, and thus attempts to report accurate information, a collection agency does not rely on credit reports for business decisions, and therefore has less of an incentive to report fully accurate information. Data furnishers told us that they did not consider CRAs as a significant cause of errors, but stated that difficulty in matching consumer identification information might cause some errors. Data furnishers also stated that the quality control efforts among data furnishers might vary due to the extent of data integrity procedures in place. 
They explained that some smaller data furnishers might not have sophisticated quality control procedures because implementing such a system was expensive. On the other hand, errors might occur at any step in the credit reporting process. Consumers could provide inaccurate names or addresses to a data furnisher. A data furnisher might introduce inaccuracies while processing information, performing data entry, or passing information on to the CRAs. And, CRAs might process data erroneously. Figure 1 shows some common causes for errors that might occur during the credit reporting process. CRAs and data furnishers also cited other causes of errors. For example, collection agencies and public records on bankruptcies, tax liens, and judgments were cited as major sources of errors. CRA officials and data furnishers said the growing number of fraudulent credit “repair” clinics that coach consumers to make frivolous reinvestigation requests in an effort to get accurate, though negative, information off the credit report also might cause errors, as disputed information a CRA cannot verify within 30 days is deleted from the consumer’s credit report. File segregation, a tactic in which a consumer with a negative credit history tries to create a new credit file by applying for credit using consistent but inaccurate information, was another reported cause for inaccurate credit data. The credit industry has been working on systems to help ensure accuracy since the “reasonable procedures” standard took effect under FCRA in 1970. Within the last decade, CDIA has led efforts to implement industry systems and processes to increase the accuracy of credit reports. In commenting upon accuracy, representatives from CDIA, the CRAs, the Federal Reserve, and the data furnishers stated that credit score models were highly calibrated and accurate and, on the aggregate level, credit reports and scores were highly predictive of credit risk. 
During the 1970s, the Associated Credit Bureaus (now CDIA) attempted to increase report accuracy by introducing Metro 1, a method of standardizing report formats. The goals of Metro 1 were to create consistency in reporting rules and impose a data template on the industry. In conjunction with the industry, in 1996 CDIA created Metro 2, an enhancement of the Metro 1 format that enables a finer distinction for reporting information. For example, Metro 2 allowed CDIA to implement an “Active Military Code” to protect the credit reports of troops serving overseas. Since active military personnel are legally entitled to longer periods to make credit payments without penalty, this new code ensured that data furnishers did not incorrectly report accounts as delinquent. While use of the Metro format is voluntary, CRAs currently receive over 99 percent of the volume of credit data—30,000 furnishers providing a total of 2 billion records per month—in either Metro 1 or Metro 2 format, with over 50 percent sent in Metro 2. One data furnisher who recently switched from Metro 1 to Metro 2 found that data accuracy improved overall as evidenced by the reduction in the number of data rejections by the CRAs and dispute data. Those data furnishers that do not use the Metro formats provide data on compact disc, diskette, tape, or other type of electronic media. While use of standardized reporting formats ensures more consistent reporting of information, because the industry has never conducted a study to set a baseline level of error frequency in credit reports, and does not currently collect such data, no one knows the extent to which these systems have improved accuracy in credit reports. FTC has taken eight formal enforcement actions since the passage of the 1996 FCRA amendments against CRAs, data furnishers, and resellers that directly or indirectly relate to credit report accuracy. 
FTC receives and tracks FCRA complaint data against CRAs by violation type and uses these data to identify areas that may warrant an enforcement action. While these data cannot provide the number of violations or frequency of errors in credit reports, since each complaint does not necessarily correspond to a violation, they can give a sense of the relative frequency of complaints surrounding CRAs. We discuss complaint data in more detail in the next section. According to FTC staff, accuracy in the context of FCRA means more than the requirement that CRAs establish “reasonable procedures to assure maximum possible accuracy of their reports.” They explained that the statute also seeks to improve accuracy of credit reports by a “self-help” process in which the different participants comply with duties imposed by FCRA. First, creditors and others that furnish information are responsible for accuracy. Second, credit bureaus must take reasonable steps to ensure accuracy. Finally, users of credit reports must notify consumers (provide adverse action notices) about denials of a loan, insurance, job, or other services because of something in their credit report. FTC staff stated that it is crucial that consumers receive adverse action notices so that they can obtain their credit reports and dispute any inaccurate information. For that reason, the Commission has made enforcement in this area a priority. FTC staff stated that their primary enforcement mechanism is to pursue action against a CRA or data furnisher that showed a pattern of repeated violations of the law identified through consumer complaints. According to FTC staff, the Commission has taken eight enforcement actions against CRAs, furnishers, or lenders since 1996 that directly or indirectly addressed credit report accuracy.
One case pertained to a furnisher providing inaccurate information to a CRA, two cases pertained to a furnisher or CRA failing to investigate a consumer dispute, and two actions were taken against lenders that did not provide adverse action notices as required by statute. The remaining three cases were against the major CRAs for blocking consumer calls and having excessive hold times for consumers calling to dispute information on their credit reports. In addition to enforcing FCRA, FTC also provides consumer educational materials and advises consumers on their rights (such as the right to sue a CRA or data furnisher for damages and recoup legal expenses). To date, no comprehensive assessments have addressed the impact of the 1996 FCRA credit report accuracy amendments or the potential effects inaccuracies have had on consumers. In addition, because it has not conducted surveys, FTC was not able to provide overall trend data on the frequency of errors in credit reports. Industry officials as well as two studies we reviewed suggest that errors and inaccuracies in credit reports have the potential to both help and hurt individual consumers, while in some instances errors or inaccuracies may have no effect on the consumer’s credit score. The impact of any particular error or inaccuracy in a particular credit report will be dependent on the unique and specific circumstances of the consumer. Data on the impact of the 1996 FCRA amendments on credit report accuracy were not available. For instance, we could not identify impact information from the literature we reviewed, and industry officials with whom we spoke said they did not collect such data. Furthermore, FTC could not provide overall trend data but did provide FCRA-related consumer complaint data involving CRAs. FTC staff could not say what the trend in the frequency of errors in credit reports has been since the 1996 amendments because those data are not available.
However, FTC officials provided consumer complaint data showing that from 1997 through 2002, the number of FCRA complaints involving CRAs received annually by FTC increased from 1,300 to almost 12,000. The most common complaints cited against CRAs in 2002 pertained to the following violations: provided inaccurate information (5,956 complaints); failed to reinvestigate disputed information (2,300 complaints); provided inadequate phone help (1,291 complaints); disclosed an incomplete or improper credit file to the customer (1,033 complaints); and improperly conducted a reinvestigation of a disputed item (771 complaints). Consumer complaint data involving CRAs and FCRA provisions represent 3.1 percent of the total complaints FTC received directly from consumers on all matters in 2002. The FTC staff explained that their knowledge was limited to complaints that came into the agency and that they did not conduct general examinations or evaluations that would enable them to project trends. FTC staff cautioned that it would not be appropriate to conclude that since the complaints against CRAs were on the rise, accuracy of credit reports was deteriorating. They stated that the increase in the number of complaints could be due to greater consumer awareness of FTC’s role with respect to credit reporting, as well as a general trend towards increased consumer awareness of credit reporting and scoring. CRAs and the literature suggest that credit-reporting errors could have both a positive and negative effect on consumers. One CRA stated that errors occur randomly and may result in either an increase, decrease, or no change in a credit score. Another CRA stated that information erroneously omitted from a credit report such as a delinquency, judgment, or bankruptcy filing would tend to raise a credit score while that same information erroneously posted to the report would tend to lower the score.
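As a quick arithmetic check on the complaint figures cited above (a sketch added for illustration, not part of the testimony), the five listed categories can be tallied and compared against the roughly 12,000 annual total; the category names and counts are taken directly from the text:

```python
# Hypothetical tally of the 2002 FCRA complaint categories cited above;
# names and counts come from the testimony, not from FTC source data.
complaints_2002 = {
    "Provided inaccurate information": 5956,
    "Failed to reinvestigate disputed information": 2300,
    "Provided inadequate phone help": 1291,
    "Disclosed incomplete/improper credit file": 1033,
    "Improperly conducted reinvestigation": 771,
}

listed_total = sum(complaints_2002.values())
print(listed_total)  # 11351 -- consistent with the "almost 12,000" total
```

The five most common categories thus account for over 11,000 of the nearly 12,000 FCRA complaints involving CRAs that FTC received in 2002.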
The Consumer Federation of America study cited earlier also analyzed 258 files to determine whether inconsistencies were likely to raise or lower credit scores. In more than half of the files reviewed (146 files, or 57 percent), the study could not clearly identify whether inconsistencies in credit reports were resulting in a higher or lower score. The study determined that in the remaining 112 files there was an even split between files that would result in a higher or lower score. The Federal Reserve Bulletin article previously mentioned also concluded that limitations in consumer reporting agency records have the potential to both help and hurt individual consumers. The article further stated that consumers who were hurt by ambiguities, duplications, and omissions in their files had an incentive to correct them, but consumers who were helped by such problems did not. Industry officials and the literature we reviewed suggested that the impact of an error in a consumer’s credit report was dependent on the specific circumstance of the information contained in a credit file. CRA and data furnisher officials further pointed out that a variety of factors, such as those identified by Fair Isaac, a private software firm that produces credit score models, might impact a credit score. According to the Fair Isaac Web site, their credit score model considers five main categories of information along with their general level of importance to arrive at a score. These categories and their respective weights in determining a credit score include payment history (35 percent), amounts owed (30 percent), length of credit history (15 percent), types of credit in use (10 percent), and new credit (10 percent). As such, no one piece of information or factor alone determines a credit score. For one person, a given factor might be more important than for someone else with a different credit history.
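The category weights cited above can be illustrated with a simple weighted-average sketch. This is purely illustrative: Fair Isaac's actual scoring model is proprietary, and the subscores and the `weighted_score` helper below are invented for demonstration, not drawn from any real model.

```python
# Illustrative only: Fair Isaac's real model is proprietary. The weights
# come from the category percentages cited in the testimony; the 0-100
# subscores are invented for demonstration.
WEIGHTS = {
    "payment_history": 0.35,
    "amounts_owed": 0.30,
    "length_of_credit_history": 0.15,
    "types_of_credit_in_use": 0.10,
    "new_credit": 0.10,
}

def weighted_score(subscores):
    """Combine per-category subscores into one weighted value."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9  # weights sum to 100%
    return sum(WEIGHTS[k] * subscores[k] for k in WEIGHTS)

# A hypothetical consumer: strong payment history, heavy balances owed.
example = {
    "payment_history": 90,
    "amounts_owed": 40,
    "length_of_credit_history": 70,
    "types_of_credit_in_use": 80,
    "new_credit": 60,
}
print(round(weighted_score(example), 1))  # 68.0
```

The sketch shows why no single factor determines the score: an error in a heavily weighted category such as payment history moves the combined result far more than the same error in new credit, which is consistent with the point that an error's impact depends on the consumer's particular file.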
In addition, as the information in a credit report changes, so does the importance of any factor in determining a credit score. Fully understanding the impact of errors on consumers’ credit scores would require access to consumer credit reports, discussions with consumers to identify errors, and discussions with data furnishers to determine what impact, if any, correction of errors might have on decisions made based on the content of a credit report. The lack of comprehensive information regarding the accuracy of consumer credit reports inhibits any meaningful discussion of what more could or should be done to improve credit report accuracy. Available studies suggest that accuracy could be a problem, but no study has been performed that is representative of the universe of credit reports. Furthermore, any such study would entail the cooperation of the CRAs, data furnishers, and consumers to fully assess the impact of errors on credit scores and underwriting decisions. Because of the importance of accurate credit reports to the fairness of our national credit system, it would be useful to perform an independent assessment of the accuracy of credit reports. Such an assessment could be conducted by FTC or paid for by the industry. The assessment would then form the basis for a more complete and productive discussion of the costs and benefits of making changes to the current system of credit reporting to improve credit report accuracy. Another option for improving the accuracy of credit reports would be to create the opportunity for more reviews of credit reports by consumers. One way this could be accomplished would be to expand the definition of what constitutes an adverse action. Currently, consumers are only entitled to receive a free copy of their credit reports when they receive adverse action notices for credit denials or if they believe that they have been the victim of identity theft.
When consumers see their credit reports, they have a chance to identify errors and ask for corrections to ensure the accuracy of their credit reports. Expanding the criteria for adverse actions to include loan offers with less than the most favorable rates and terms would likely increase the review of credit files by consumers. Such added review of credit files would in all likelihood help to further ensure the overall accuracy of consumer credit reports. However, the associated costs to the industry would also need to be considered against the anticipated benefits of increasing consumer access to credit reports. For further information regarding this testimony, please contact Harry Medina at (415) 904-2000. Individuals making key contributions to this statement include Janet Fong, Jeff R. Pokras, Mitchell B. Rachlis, and Peter E. Rumble. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

Accurate credit reports are critical to the credit process--for consumers attempting to obtain credit and to lending institutions making decisions about extending credit. In today's sophisticated and highly calibrated credit markets, credit report errors can have significant monetary implications to consumers and credit granters. In recognition of the importance of this issue, the Senate Committee on Banking, Housing, and Urban Affairs asked GAO to (1) provide information on the frequency, type, and cause of credit report errors, and (2) describe the impact of the 1996 amendments to the Fair Credit Reporting Act (FCRA) on credit report accuracy and potential implications of reporting errors for consumers.
Information on the frequency, type, and cause of credit report errors is limited to the point that a comprehensive assessment of overall credit report accuracy using currently available information is not possible. Moreover, available literature and the credit reporting industry strongly disagree about the frequency of errors in consumer credit reports, and lack a common definition for "inaccuracy." The literature and industry do identify similar types of errors and similar causes of errors. Specifically, several officials and reports cited collection agencies and governmental agencies that provide information on bankruptcies, liens, collections, and other actions noted in public records as major sources of errors. Because credit report accuracy is essential to the business activities of consumer reporting agencies and credit granters, the credit industry has developed and implemented procedures to help ensure accuracy. However, no study has measured the extent to which these procedures have improved accuracy. While the Federal Trade Commission (FTC) tracks consumer complaints on FCRA violations, these data are not a reliable measure of credit report accuracy. Additionally, FTC has taken eight formal enforcement actions directly or indirectly related to credit report accuracy since Congress enacted the 1996 FCRA amendments. Neither the impact of the 1996 FCRA amendments on credit report accuracy nor the potential implications of errors for consumers is known. Specifically, because comprehensive or statistically valid data on credit report errors before and after the passage of the 1996 FCRA amendments have not been collected, GAO could not identify a trend associated with error rates. Industry officials and studies indicated that credit report errors could either help or hurt individual consumers depending on the nature of the error and the consumer's personal circumstances. 
To adequately assess the impact of errors in consumer reports would require access to the consumer's credit score and the ability to determine how changes in the score affected the decision to extend credit or the terms of the credit granted. Ultimately, a meaningful independent review in cooperation with the credit industry would be necessary to assess the frequency of errors and the implications of errors for individual consumers.
DOD has a mandate to deliver high-quality products to warfighters when they need them and at a price the country can afford. Quality and timeliness are especially critical to maintain DOD’s superiority over others, to counter quickly changing threats, and to better protect and enable the warfighter. U.S. weapons are the best in the world, but the programs to acquire them frequently take significantly longer and cost more money than promised and often deliver fewer quantities and capabilities than planned. It is not unusual for time and money to be underestimated by 20 to 50 percent. Considering that DOD is investing $1.4 trillion to acquire over 75 major weapon systems as of March 2015, cost increases of this magnitude have sizeable effects. Typically, when costs and schedules increase, the buying power of the defense dollar is reduced. Consequences associated with this history of acquisition include: the warfighter gets less capability than promised; weapons perform well, but not as well as planned, and are harder to support; and trade-offs made to pay for cost increases—in effect, opportunity costs—are not explicit. This state of weapon acquisition is not the result of inattention. Many reforms have been instituted over the past several decades, but the above outcomes persist. DOD is in the midst of a series of “Better Buying Power” initiatives begun in June 2010 that have resulted in some improvements, but it is too early to assess their long-term impact. The decision to start a new program is the most highly leveraged point in the product development process. Establishing a sound business case for individual programs depends on disciplined requirements and funding processes.
A solid, executable business case provides credible evidence that (1) the warfighter’s needs are valid and that they can best be met with the chosen concept, and (2) the chosen concept can be developed and produced within existing resources—that is, proven technologies, design knowledge, adequate funding, and adequate time to deliver the product when it is needed. A program should not go forward into product development unless a sound business case can be made. If the business case measures up, the organization commits to the development of the product, including making the financial investment. At the heart of a business case is a knowledge-based approach to product development that is both a best practice among leading commercial firms and the approach reflected in DOD’s acquisition regulations. For a program to deliver a successful product within available resources, managers should demonstrate high levels of knowledge before significant commitments are made. In essence, knowledge supplants risk over time. Establishing a business case calls for a realistic assessment of risks and costs; doing otherwise undermines the intent of the business case and invites failure. This process requires the user and developer to negotiate whatever trade-offs are needed to achieve a match between the user’s requirements and the developer’s resources before system development begins. Key enablers of a good business case include: Firm, Feasible Requirements: requirements should be clearly defined, affordable, and clearly informed—thus tempered—by systems engineering; once programs begin, requirements should not change without assessing their potential disruption to the program. Mature Technology: science and technology organizations should shoulder the technology development burden, proving technologies can work as intended before they are included in a weapon system program. 
The principle here is not to avoid technical risk but rather take risk early and resolve it ahead of program start. Incremental, Knowledge-based Acquisition Strategy: rigorous systems engineering coupled with more achievable requirements are essential to achieve faster delivery of needed capability to the warfighter. Building on mature technologies, such a strategy provides time, money, and other resources for a stable design, building and testing of prototypes, and demonstration of mature production processes. Realistic Cost Estimate: sound cost estimates depend on a knowledge-based acquisition strategy, independent assessments, and sound methodologies. An oft-cited quote of David Packard, former Deputy Secretary of Defense, is: “We all know what needs to be done. The question is why aren’t we doing it?” We need to look differently at the familiar outcomes of weapon systems acquisition—such as cost growth, schedule delays, large support burdens, and reduced buying power. Some of these undesirable outcomes are clearly due to honest mistakes and unforeseen obstacles. However, they also occur not because they are inadvertent but because they are encouraged by the incentive structure. It is not sufficient to define the problem as an objective process that is broken. Rather, it is more accurate to view the problem as a sophisticated process whose consistent results are indicative of its being in equilibrium. The rules and policies are clear about what to do, but other incentives force compromises. The persistence of undesirable outcomes such as cost growth and schedule delays suggests that these are consequences that participants in the process have been willing to accept. These undesirable outcomes share a common origin: decisions are made to move forward with programs before the knowledge needed to reduce risk and make those decisions is sufficient. 
There are strong incentives within the acquisition culture to overpromise a prospective weapon’s performance while understating its likely cost and schedule demands. Thus, a successful business case—one that enables the program to gain approval—is not necessarily the same as a sound one. Incentive to overpromise: The weapon system acquisition culture in general rewards programs for moving forward with unrealistic business cases. Strong incentives encourage deviations from sound acquisition practices. In the commercial marketplace, investment in a new product represents an expense. Company funds must be expended and will not provide a return until the product is developed, produced, and sold. In DOD, new products represent revenue, in the form of a budget line. A program’s return on investment occurs as soon as the funding decision is made. Competition with other programs vying for defense dollars puts pressure on program sponsors to project unprecedented levels of performance (often by counting on unproven technologies) while promising low cost and short schedules. These incentives, coupled with a marketplace that is characterized by a single buyer (DOD), low volume, and limited number of major sources, create a culture in weapon system acquisition that encourages undue optimism about program risks and costs. Program and Funding Decisions: Budget requests, Congressional authorizations, and Congressional appropriations are often made well in advance of major program decisions, such as the decision to approve the start of a program. At the time these funding decisions are made, less verifiable knowledge is available about a program’s cost, schedule, and technical challenges. This creates a vacuum for optimism to fill. When the programmatic decision point arrives, money is already on the table, which creates pressure to make a “go” decision prematurely, regardless of the risks now known to be at hand. 
Budgets to support major program commitments must be approved well ahead of when the information needed to support the decision is available. Take, for example, a decision to start a new program scheduled for August 2016. The new program would have to be included in the Fiscal Year 2016 budget. This budget request would be submitted to Congress in February 2015—18 months before the program decision review is actually held. It is likely that the requirements, technologies, and cost estimates for the new program—essential to successful execution—may not be very solid at the time of funding decisions. Once the hard-fought budget debates result in funds being appropriated for the program, it is very hard to take it away later, when the actual program decision point is reached. To be sure, this is not to suggest that the acquisition process is foiled by bad actors. Rather, program sponsors and other participants act rationally within the system to achieve goals they believe in. Competitive pressures for funding simply favor optimism in setting cost, schedule, technical, and other estimates. Insufficient Business Cases Are Sanctioned by Funding Approvals: To the extent Congress approves funds for such programs as requested, it sanctions—and thus rewards—optimism and unexecutable business cases. Funding approval—authorizing programs and appropriating funds—is one of the most powerful oversight tools Congress has. The reality is once funding starts, other tools of oversight are relatively weak—they are no match for the incentives to overpromise. So, if funding is approved for a program despite having an unrealistic schedule or requirements, that decision reinforces those characteristics instead of sound acquisition practices.
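The 18-month lead described in the budget example above can be confirmed with simple date arithmetic (a sketch added for illustration; the dates are the ones given in the testimony's hypothetical program):

```python
from datetime import date

# Dates from the example above: a program decision review in August 2016,
# funded through a budget request submitted to Congress in February 2015.
budget_request = date(2015, 2, 1)
program_decision = date(2016, 8, 1)

months_ahead = (program_decision.year - budget_request.year) * 12 + (
    program_decision.month - budget_request.month
)
print(months_ahead)  # 18 months between funding request and program decision
```

This gap is the point of the example: the knowledge needed to judge the program matures over those 18 months, after the money has already been requested.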
Pressures to make exceptions for programs that do not measure up are rationalized in a number of ways: an urgent threat needs to be met; a production capability needs to be preserved; despite shortfalls, the new system is more capable than the one it is replacing; and the new system’s problems will be fixed in the future. It is the funding approvals that ultimately define acquisition policy. Recently, I testified before the Senate Armed Services Committee on the Ford Class Aircraft Carrier. We reported in 2007 that ship construction costs were potentially underestimated by 22 percent, critical technologies were immature, and schedules were likely to slip. In other words, the carrier did not have a good business case. Nonetheless, funding was approved as requested. Today, predicted cost increases have occurred, the technologies have slipped nearly 5 years, and the program schedule has been delayed. Notably, the carrier represents a typical program without a good business case, and its outcomes of cost increases and schedule delays are not unique. Funding approvals rewarded the unrealistic business case, reinforcing its success rather than that of a sound business case. Since 1990, GAO has identified a number of reforms aimed at improving acquisition outcomes. Several of those are particularly relevant to changing the acquisition culture and will take the joint efforts of Congress and DOD. Reinforce desirable principles at the start of new programs: The principles and practices programs embrace are determined not by policy, but by decisions. These decisions involve more than the program at hand: they send signals on what is acceptable. If programs that do not abide by sound acquisition principles receive favorable funding decisions, then seeds of poor outcomes are planted. The challenge for decision makers is to treat individual program decisions as more than the case at hand.
They must weigh and be accountable for the broader implications of what is acceptable or “what will work” and be willing to say no to programs that run counter to best practices. The greatest point of leverage is at the start of a new program. Decision makers must ensure that new programs exhibit desirable principles before funding is approved. Programs that present well-informed acquisition strategies with reasonable and incremental requirements and reasonable assumptions about available funds should be given credit for a good business case. Every year, there is what one could consider a “freshman” class of new acquisitions. This is where DOD and Congress must ensure that they embody the right principles and practices, and make funding decisions accordingly. Identify significant program risks upfront and resource them: Weapon acquisition programs by their nature involve risks, some much more than others. The desired state is not zero risk or elimination of all cost growth. But we can do better than we do now. The primary consequences of risk are often more time and money and unplanned—or latent—concurrency in development, testing, and production. Yet, when significant risks are taken, they are often taken under the guise that they are manageable and that risk mitigation plans are in place. Such plans do not set aside time and money to account for the risks taken. Yet in today’s climate, it is understandable—any sign of weakness in a program can doom its funding. Unresourced risk, then, is the “spackle” of the acquisition system that enables the system to operate. This needs to change. If programs are to take significant risks, whether they are technical in nature or related to an accelerated schedule, these risks should be declared and the resource consequences acknowledged and provided. Less risky options and potential off-ramps should be presented as alternatives. 
Decisions can then be made with full information, including decisions to accept the risks identified. If the risks are acknowledged and accepted by DOD and Congress, the program should be supported. More closely align budget decisions and program decisions: Requesting funding for programs 18 or so months ahead of when they will need it stems from a budgeting and planning process intended to make sure money is available in the future. Ensuring that programs are thus affordable is a sound practice. But, DOD and Congress need to explore ways to bring funding decisions closer in alignment with program decisions. This will require more thought and study. The alternative is that DOD and Congress will have to hold programs accountable for sound business cases at the time funding is approved, even if it is 18 months in advance of the program decision. Separate Technology Development from Product Development: Leading commercial companies minimize problems in product development by separating technology development from product development and fully developing technologies before introducing them into the design of a system. These companies develop technology to a high level of maturity in a science and technology environment which is more conducive to the ups and downs normally associated with the discovery process. This affords the opportunity to gain significant knowledge before committing to product development and has helped companies reduce costs and time from product launch to fielding. Although DOD’s science and technology enterprise is engaged in developing technology, there are organizational, budgetary, and process impediments which make it difficult to bring technologies into acquisition programs. For example, it is easier to move immature technologies into weapon system programs because they tend to attract bigger budgets than science and technology projects. 
Stronger and more uniform incentives are needed to encourage the development of technologies in the right environment to reduce the cost of later changes, and encourage the technology and acquisition communities to work more closely together to deliver the right technologies at the right time. Develop system engineering and program manager capacity: Systems engineering expertise is essential throughout the acquisition cycle, but especially early, when the feasibility of requirements is being determined, the technical and engineering demands of a design are being understood, and an acquisition strategy for conducting product development is laid out. DOD has fallen short in its attempts to fill systems engineering positions. These positions should be filled and their occupants involved and empowered early to help get programs on a good footing—i.e., a good business case—from the start. Program managers are essential to the success of any program. Program managers handed a program with a poor business case are not put in a position to succeed. Even with a good business case, program managers must have the skill set, business acumen, tenure, and career path to make programs succeed and be rewarded professionally. DOD has struggled to create this environment for program managers. Describing the current acquisition process as “broken” is an oversimplification, because it implies that it can merely be “fixed.” The current process, along with its outcomes, has been held in place by a set of incentives—a culture—that has been resistant to reforms and fixes. Seen instead as a process in equilibrium, it is clear that changing it requires a harder, long-term effort by both DOD and Congress. There have been a number of recent reforms directed at DOD.
Congress shares responsibility for the success of these reforms in the actions it takes on funding programs, specifically by creating enablers for sound business cases, and creating disincentives for programs that do not measure up. Chairman Thornberry, Ranking Member Smith, and Members of the Committee, this concludes my statement and I would be happy to answer any questions. If you or your staff has any questions about this statement, please contact Paul L. Francis at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony are David Best, Assistant Director; R. Eli DeVan; Laura Greifner; and Alyssa Weir. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

DOD's acquisition of major weapon systems has been on GAO's high risk list since 1990. Over the years, Congress and DOD have continually explored ways to improve acquisition outcomes, including reforms that have championed sound management practices, such as realistic cost estimating, prototyping, and systems engineering. Too often, GAO reports on the same kinds of problems today that it did over 20 years ago. This testimony discusses (1) the performance of the current acquisition system; (2) the role of a sound business case in getting better acquisition outcomes; (3) systemic reasons for persistent problems; and (4) thoughts on actions DOD and Congress can take to get better outcomes from the acquisition process.
This statement draws from GAO's extensive body of work on DOD's acquisition of weapon systems and the numerous recommendations GAO has made on both individual weapons and systemic improvements to the acquisition process. U.S. weapon acquisition programs often take significantly longer, cost more than promised, and deliver fewer quantities and capabilities than planned. It is not unusual for time and money to be underestimated by 20 to 50 percent. As the Department of Defense (DOD) is investing $1.4 trillion to acquire over 75 major weapon systems as of March 2015, cost increases of this magnitude have sizeable effects. When costs and schedules increase, the buying power of the defense dollar is reduced. Beyond the resource impact, consequences include the warfighter receiving less capability than promised, weapons not performing as well as planned and being harder to support, and trade-offs made to pay for cost increases—in effect, opportunity costs—not being made explicit. GAO's work shows that establishing a sound business case is essential to achieving better program outcomes. A program should not go forward without a sound business case. A solid, executable business case provides credible evidence that (1) the warfighter's needs are valid and that they can best be met with the chosen concept, and (2) the chosen concept can be developed and produced within existing resources—such as technologies, design knowledge, funding, and time. Establishing a sound business case for individual programs depends on disciplined requirements and funding processes, and calls for a realistic assessment of risks and costs; doing otherwise undermines the intent of the business case and makes the above consequences likely. Yet, business cases for many new programs are deficient. This is because there are strong incentives within the acquisition culture to overpromise a prospective weapon's performance while understating its likely cost and schedule demands.
Thus, a successful business case is not necessarily the same as a sound one. Competition with other programs for funding creates pressures to overpromise. This culture is held in place by a set of incentives that are more powerful than policies to follow best practices. Moreover, the budget process calls for funding decisions before sufficient knowledge is available to make key decisions. Complementing these incentives is a marketplace characterized by a single buyer, low volume, and limited number of major sources. Thus, while it is tempting to describe the acquisition process as broken, it is more instructive to view it as in equilibrium: one in which competing forces consistently lead to starting programs with slim chances of being delivered on time and within cost. Over the years, GAO has identified a number of reforms aimed at improving acquisition outcomes. Several of those are particularly relevant to changing the acquisition culture and will take the joint efforts of Congress and DOD: Ensure that new programs exhibit desirable principles before funding is approved. Identify significant program risks up front and allot sufficient resources. More closely align budget and program decisions. Mature technology before including it in product development. Develop systems engineering and program manager capacity—sufficient personnel with appropriate expertise and skills. |
Species of plants, animals, and microscopic organisms are transported from their native environments around the world to new locations in many different ways, both intentionally and unintentionally. When they arrive in a new location, most of these species do not survive because environmental conditions are not favorable. However, some of the newly arrived species do survive and, unfortunately, a portion of these flourish to the point that they begin to dominate native species and are thus labeled as “invasive.” These invasive, nonnative species can seriously damage ecosystems, businesses, and recreation. Ballast water is one of many pathways by which nonnative and invasive species have arrived in the United States. Ships are designed to sail safely with their hulls submerged to a certain depth in the water. If a ship is not filled to capacity with cargo, it needs to fill its ballast tanks with water to maintain proper depth and balance during its travels. As a ship takes on cargo at ports of call, it must then discharge some of its ballast water to compensate for the weight of the cargo. When ships are fully loaded with cargo, their ballast tanks may be pumped down to the point where only residual water (also referred to as non-pumpable ballast water) is left. Ship masters may also manipulate the amount of water in their ballast tanks to account for different sea conditions. Different classes of ships have different ballast capacities, ranging from tens of thousands to millions of gallons of water. Ships generally fill and discharge their ballast tanks when they are in port, and the water and associated sediment they take in is likely to contain living organisms or their eggs. Because the ballast water may be fresh, brackish, or salty depending on where it is obtained, the organisms in the water will also vary accordingly. 
Worldwide, ships discharge an estimated 3 billion to 5 billion metric tons of ballast water each year, and it is estimated that several thousand different species may be transported globally in ballast tanks on any given day. Well-known examples of invasive species brought to the United States in ballast tanks include the zebra mussel, round goby, Japanese shore crab, Asian clam, and Black Sea jellyfish. Collectively, these and other aquatic species transported in ballast water have caused billions of dollars in damage to our economy and unmeasured damage to the environment. For example, we reported in 2002 that the Great Lakes commercial and recreational fishing industry—which is worth about $4.5 billion annually—was being damaged or threatened by the sea lamprey, round goby, Eurasian ruffe, and two invertebrates from eastern Europe, just to name a few. While the Great Lakes feature prominently in today’s hearing, many other waters around the United States have also been invaded by harmful species. Notably, invasive species are found in virtually all of our coastal bays and estuaries—resources that are typically enormously productive and support multibillion dollar commercial fisheries and recreation industries. Given the pace and expansion of global trade, the movement of additional invasive species to these and other ecosystems can only be expected to continue. The federal government has been taking steps to address the introduction of potentially invasive species via the ballast water in ships for well over a decade. Congress recognized ballast water as a serious problem in 1990 with the passage of the Nonindigenous Aquatic Nuisance Prevention and Control Act, legislation intended to help reduce the number of species introduced into U.S. waters, focusing on the Great Lakes. Congress reauthorized appropriations for and amended that law in 1996, making it more national in scope.
In 1999, the President issued an executive order to better address invasive species in general, including those transported in ballast water. In addition to these domestic developments, members of the United Nations’ International Maritime Organization have adopted a convention on ballast water management that, if ratified by a sufficient number of countries, could affect the global fleet. Ballast water as a conduit for invasive species was first legislatively recognized in 1990 with the passage of the Nonindigenous Aquatic Nuisance Prevention and Control Act (NANPCA). This law was a response to the introduction of the zebra mussel in the Great Lakes and findings that the discharge of ballast water results in unintentional introductions of nonindigenous species. The zebra mussel reproduces rapidly, and soon after its introduction clogged municipal and industrial water pipes, out-competed native mussels for food and habitat, and cost millions of dollars in economic losses and remedial actions. Specifically, NANPCA called for regulations to prevent the introduction and spread of aquatic invasive species into the Great Lakes through the ballast water of ships. Among other things, it specifically called for the regulations to require ships carrying ballast water and entering a Great Lakes port after operating beyond the Exclusive Economic Zone (EEZ)—a zone generally extending 200 nautical miles from a country’s shores—to take one of the following actions: Carry out what is known as ballast water exchange beyond the EEZ before entering a Great Lakes port; Exchange ballast water in other waters where the exchange does not threaten introduction of aquatic invasive species to the Great Lakes or other U.S. waters; or Use an environmentally sound alternative method of removing potentially invasive organisms if the Secretary determines that such method is as effective as ballast water exchange in preventing and controlling aquatic invasive species infestations.
Exchanging ballast water in the ocean serves two purposes—to physically flush aquatic organisms from ships’ tanks and to kill remaining organisms that require fresh or brackish water with highly saline ocean water. After first issuing guidelines that became effective in March 1991, the Coast Guard replaced them with ballast water management regulations in April 1993 for ships carrying ballast water and entering the Great Lakes from outside of the EEZ. In 1992, Congress amended NANPCA and called for the promulgation of regulations for ships entering the Hudson River north of the George Washington Bridge; in December 1994, the Coast Guard extended its regulations to these ships. The regulations required ships with pumpable ballast water to: exchange ballast water beyond the EEZ at a minimum depth of 2,000 meters before entering the Great Lakes or Hudson River; utilize another environmentally sound ballast water management method approved by the Coast Guard; or retain the ballast water on board. The Coast Guard did not approve any alternative method and, therefore, ships that did not exchange their ballast water beyond the EEZ were required to retain it on board. The Coast Guard also required these ships to submit reports attesting to, among other things, their ballast water management actions. NANPCA also established the Aquatic Nuisance Species Task Force (ANSTF), consisting of representatives from the U.S. Fish and Wildlife Service, the National Oceanic and Atmospheric Administration (NOAA), the Environmental Protection Agency (EPA), the Coast Guard, the Army Corps of Engineers, and other agencies deemed appropriate, as well as ex officio members from the Great Lakes Commission and other nonfederal groups or agencies.
NANPCA required the task force and the Secretary to cooperate in conducting a number of studies within 18 months of enactment of the act on such issues as: the environmental effects of ballast water exchange on native species in the Great Lakes; alternate areas, if any, where ballast water exchange does not pose a threat of infestation or spread of aquatic invasive species in the Great Lakes and other U.S. waters; the need for controls on ships entering U.S. waters other than the Great Lakes to minimize the risk of unintentional introduction and dispersal of aquatic invasive species in those waters; and whether aquatic invasive species threaten the ecological characteristics and economic uses of U.S. waters other than the Great Lakes. Recognizing that many water bodies around the country in addition to the Great Lakes had been invaded by harmful, nonindigenous aquatic species, Congress reauthorized appropriations for and amended NANPCA with the passage of the National Invasive Species Act of 1996 (NISA). NISA expanded upon NANPCA and called for voluntary national guidelines for ships equipped with ballast water tanks that operate in waters of the United States. NISA required the voluntary guidelines to direct ships to manage ballast water in a manner similar to the mandatory requirements for ships sailing to the Great Lakes by conducting ballast water exchange beyond the EEZ, exchanging their ballast water in an alternative discharge zone recommended by the ANSTF, or using an alternative treatment method approved by the Secretary. The law also required that the guidelines direct ships to carry out other management practices that were deemed necessary to reduce the probability of transferring species from ship operations other than ballast discharge and from ballasting practices of ships that enter U.S. waters with no ballast water on board.
In addition, the law required that the guidelines provide that ships keep records and submit them to the Secretary to enable the Secretary to determine compliance with the guidelines. The Coast Guard issued an interim rule in May 1999 and promulgated a final rule in November 2001 setting forth national voluntary guidelines under NISA. The guidelines encouraged ships carrying ballast water taken on in areas less than 200 nautical miles from any shore or in waters less than 2,000 meters deep to employ at least one of the following ballast water management practices: exchange their ballast water outside of the EEZ in waters at least 2,000 meters deep before entering U.S. waters, retain it on board, use an approved alternative ballast water management method, discharge the ballast water to an approved reception facility, or under extraordinary conditions conduct an exchange in an area agreed to by the Captain of the Port. The voluntary guidelines also encouraged all ships equipped with ballast water tanks and operating in U.S. waters to take various precautions to minimize the uptake and release of harmful aquatic organisms, pathogens and sediments. Such precautions may include regularly cleaning ballast tanks to remove sediment and minimizing or avoiding the uptake of ballast water in areas known to have infestations of harmful organisms and pathogens such as toxic algal blooms. In issuing the voluntary guidelines, the Coast Guard said that it was considering the results of a study on alternate discharge exchange zones but had not decided whether to allow ballast water exchanges in any of the possible locations the task force identified. NISA also required a report to Congress on, among other things, compliance with the voluntary ballast water exchange and reporting guidelines no later than 3 years after their issuance. 
In addition, NISA required that the guidelines be revised, or additional regulations promulgated, no later than 3 years after the issuance of the guidelines and at least every 3 years thereafter, as necessary. Importantly, NISA required the promulgation of regulations making the guidelines mandatory if the Secretary determined that reporting or the rate of ship compliance was not adequate. As required by NISA, the Coast Guard issued its report to Congress in June 2002, but was not able to evaluate compliance with the voluntary guidelines because the rate of reporting was so poor. (From July 1, 1999, to June 30, 2001, less than one-third of all vessels required to report ballast water management information met the requirement.) Accordingly, as authorized by NISA, the Coast Guard published a proposed rule for a national mandatory program for ballast water management for all ships operating in U.S. waters in July 2003 and a final rule in July 2004. In addition, the Coast Guard promulgated another rule, effective August 13, 2004, establishing penalties for, among other things, ship owners who do not file the required reports on their ballast water operations. Finally, a key provision in NISA recognized the need to stimulate development of ballast water treatment technologies. Specifically, NISA called for the establishment of a grant program to provide funds to nonfederal entities to develop, test, and demonstrate ballast water treatment technologies. The Secretary of the Interior was authorized to enter into cooperative agreements with other federal agencies and nonfederal entities to conduct the program. NOAA and the U.S. Fish and Wildlife Service created the Ballast Water Technology Demonstration Program that provides grants to entities pursuing technologies that could be used to treat ballast water. Addressing concerns with the introduction of potentially harmful organisms via ballast water also falls under the purview of the National Invasive Species Council. 
The council was created in 1999 under Executive Order 13112, which broadly addressed all types of invasive species. The council consists of the heads of the principal departments and agencies with invasive species responsibilities. The order directed the council to develop a plan for managing invasive species across agencies and to do so through a public process in consultation with federal agencies and stakeholders. The council issued a national invasive species management plan in January 2001 containing 57 primary action items calling for about 168 separate actions to be taken by a variety of federal agencies. Two actions in the plan relate to ballast water. First, because ballast water exchange was recognized as only an interim measure to address nonnative species introductions via ballast water, the plan called for NOAA, the Coast Guard, Interior, and EPA to sponsor research to develop new technologies for ballast water management by July 2001. Second, the plan called for the Coast Guard to issue standards for approving the use of ballast water management technologies as alternative ballast water management methods by January 2002. NANPCA and NISA require that, in order for an alternative ballast water management method to be used, the Secretary must first approve the method as being “at least as effective as ballast water exchange in preventing and controlling infestations of aquatic nuisance species”; however, standards for approving alternative measures had yet to be developed. The effect of the National Invasive Species Council and the national management plan on efforts to address species introductions via ballast water appears to be minimal. While research on technologies has been supported by the Ballast Water Technology Demonstration Program, which is managed by NOAA and the Fish and Wildlife Service, this program began in 1998 in response to NISA—before the management plan was written and before the council was even created.
Little action has been taken on developing standards for approving ballast water treatment technologies even though the plan set a completion date of January 2002 for this action. The council has focused on ballast water in its “cross-cut budget” for invasive species that it began in 2002 (for the fiscal year 2004 budget), although its influence on ballast water management also appears limited. The cross-cut budget effort is intended to encourage agencies to, among other things, develop shared goals and strategies, and to promote cooperation and coordination on invasive species issues. As a part of the cross-cut budget, agencies have developed three performance measures for ballast water management. For fiscal year 2005, agencies were to (1) sponsor eight ballast water technology projects, (2) develop and implement a standardized program to test and certify the performance capabilities of ballast water treatment systems, and (3) conduct a pilot scale verification trial of a full-scale treatment system to validate the standardized program. However, these measures call for agencies to take certain actions as opposed to achieving some desired outcome. This is similar to what we observed in our 2002 report about the actions in the national management plan. In addition, we note that the Coast Guard is not included in the cross-cut budget for ballast water despite being the primary regulatory agency for managing this issue. While Congress, the Coast Guard, and other federal agencies have sought to reduce the threats posed by ballast water through domestic regulation, the United Nations’ International Maritime Organization (IMO) has worked for over 10 years toward a global solution to the problem. In February 2004, IMO member countries adopted the International Convention for the Control and Management of Ships’ Ballast Water and Sediments. The convention calls for ballast water exchange as an interim measure.
This would be followed by the imposition of a treatment standard that would place limits on the number of organisms that ships could discharge in their ballast. To enter into force, the convention must be ratified by at least 30 countries constituting at least 35 percent of the gross tonnage of the world’s merchant shipping. As of August 2005, eight countries had signed the convention but only one—the Maldives—had ratified it. The convention’s ballast water performance standard would require ships conducting ballast water management to discharge less than 10 viable organisms greater than or equal to 50 microns in size per cubic meter of water and less than 10 viable organisms less than 50 but greater than 10 microns in size per milliliter of water. In addition, the ballast water performance standard would set limits on the discharge of several disease-causing pathogens, including cholera and E. coli. The dates by which ships would need to meet the ballast water performance standard, if the convention enters into force, would depend upon when the ship was built and what its ballast water capacity is. For example, the ships first required to meet the standard would be those built in 2009 or later with a ballast capacity of less than 5,000 cubic meters. Ships built before 2009 with a ballast capacity between 1,500 cubic meters and 5,000 cubic meters would have to meet the standard by 2014. Regardless of age or size, all ships subject to the convention would need to meet the standard by 2016. The federal government has continued to take steps to strengthen controls over ballast water as a conduit for potentially harmful organisms. Since 1998, Coast Guard data show that compliance with conducting ballast water exchange, when required, has generally been high.
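The convention's phased compliance schedule and organism limits described above can be sketched as a small rule check. This is an illustrative simplification of our own (the function names and the collapsed schedule are not from the convention, which defines more cases than shown here):

```python
def compliance_year(build_year: int, ballast_capacity_m3: float) -> int:
    """Year by which a ship would need to meet the performance standard,
    per the simplified schedule described in the text. Ships built in 2009
    or later with capacity under 5,000 cubic meters are the first required
    to comply, which we interpret here as compliance from construction."""
    if build_year >= 2009 and ballast_capacity_m3 < 5000:
        return build_year
    if build_year < 2009 and 1500 <= ballast_capacity_m3 <= 5000:
        return 2014
    return 2016  # all remaining ships, regardless of age or size

def meets_discharge_standard(organisms_ge_50um_per_m3: float,
                             organisms_10_to_50um_per_ml: float) -> bool:
    """Check a discharge sample against the organism limits in the text:
    fewer than 10 viable organisms >= 50 microns per cubic meter, and
    fewer than 10 viable organisms between 10 and 50 microns per
    milliliter. (Pathogen limits are omitted from this sketch.)"""
    return organisms_ge_50um_per_m3 < 10 and organisms_10_to_50um_per_ml < 10

print(compliance_year(2000, 3000))      # older, mid-capacity ship: 2014
print(meets_discharge_standard(4, 12))  # small-organism count too high
```

Encoding the schedule this way makes plain that capacity and build year interact: an older ship with a very small or very large ballast capacity falls into the catch-all 2016 deadline rather than the 2014 one.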
However, key agencies and stakeholders recognize that the recently adopted mandatory national program for ballast water exchange is not a viable long-term approach to minimizing the risks posed by ballast water discharges. One major limitation of this approach is that, despite relatively high compliance rates with the regulations, U.S. waters remain vulnerable to species invasions because many ships are still not required to conduct ballast water exchange. In addition, the ANSTF has not recommended alternate areas for ballast water exchange and thus, the Coast Guard has not established alternate discharge zones that could be used by ships. Finally, ballast water exchange is not always effective at removing or killing potentially harmful species. With the Coast Guard’s mandatory ballast water management regulation for ships traveling into U.S. waters after operating beyond the EEZ and carrying ballast water taken on less than 200 nautical miles from shore—effective September 2004—more ships are generally required to conduct ballast water exchange or retain their ballast water than before. We noted in 2002 that compliance with ballast water exchange requirements for ships entering the Great Lakes was high, and the Coast Guard maintains that it remains high. According to the Coast Guard, from 1998 through 2004, 93 percent of the ships entering the Great Lakes with pumpable ballast water were in compliance with the exchange requirement. More recently, data show that about 70 percent of those arriving from outside the EEZ to ports other than the Great Lakes conducted an exchange. Most notably, reporting on ballast water management activities has increased dramatically. According to the Coast Guard, reporting increased from approximately 800 reports per month in January 2004 to over 8,000 per month since September 2004; this reflects reporting from about 75 percent of ships arriving from outside the EEZ.
The Coast Guard attributes the increase in reporting to an effort beginning in 2004 to encourage ship masters to file reports electronically and to the new regulations that allow the Coast Guard to levy penalties for non-reporting. According to data provided by the Coast Guard, nearly 5 percent of ships arriving at U.S. ports between January 2005 and July 2005 were inspected for compliance with ballast water regulations. On the basis of its inspections, the Coast Guard reports a 96.5 percent compliance rate with the mandatory ballast water management regulations. During the first two quarters of 2005, inspections revealed 124 deficiencies that range from problems with ballast water management reporting to illegal discharge of ballast water in U.S. waters. As a result of these findings, the Coast Guard took nine enforcement actions. Although the Coast Guard believes that compliance with ballast water management regulations is high, U.S. waters may still not be adequately protected because many ships are not required to conduct ballast water exchange even though they may discharge ballast water in U.S. waters. NOBOBs. Ships with no ballast water in their tanks (referred to as “no ballast on board” ships or NOBOBs) are not required to conduct ballast water exchange or retain their ballast water. While the term “NOBOB” indicates that a ship has no ballast on board, these ships may, in fact, still be carrying thousands of gallons of residual ballast water and tons of sediment that cannot be easily pumped out because of the design of their tanks and pumps. This water and sediment could harbor potentially invasive organisms from previous ports of call that could be discharged to U.S. waters during subsequent ballast discharges. NOBOBs are a particular concern in the Great Lakes, where more than 80 percent of ships entering from outside the EEZ fall into this category. While still a concern for other U.S.
ports, it appears that a significantly smaller portion (about 20 percent) of ships arriving at U.S. ports other than the Great Lakes from beyond the EEZ claimed NOBOB status. Officials responsible for gathering and managing data on ship arrivals estimate that about 5 percent of those NOBOB ships take on ballast water and discharge it in U.S. waters. When the Coast Guard conducted an environmental assessment of its new national mandatory ballast water exchange regulations in 2003, it did not review the potential threat that NOBOB ships pose to future species invasions, although it received comments raising concerns about this omission. In response to comments on its 2004 rule, the agency noted that NOBOBs were required to submit ballast water reporting forms, that it would continue to explore the issue of NOBOBs, and that these vessels may be included in a future rulemaking. In May 2005, the Coast Guard convened a public workshop in Cleveland to discuss and obtain comments on NOBOBs, particularly as they affect the Great Lakes. Following the public meeting, the Coast Guard held a closed meeting for an invited group of government officials and technology experts. The overall purpose of the closed meeting was to discuss technological approaches that are now available or soon to be available to address the potentially invasive organisms in NOBOB ships. The agency has not published any record of the closed meeting. The Coast Guard just issued a notice, published in the Federal Register on August 31, 2005, containing a voluntary management practice for NOBOBs that enter the Great Lakes and have not conducted ballast water exchange. This practice indicates that such ships should conduct salt water flushing of their empty ballast tanks in an area 200 nautical miles from any shore, whenever possible. 
Salt water flushing is defined as “the addition of mid-ocean water to empty ballast water tanks; the mixing of the flush water with residual water and sediment through the motion of the vessel; and the discharge of the mixed water, such that the resultant residual water remaining in the tank has as high a salinity as possible, and preferably is greater than 30 parts per thousand.” Scientists believe that this process will either flush out residual organisms from the ballast tanks or kill remaining organisms with highly saline ocean water. The effectiveness of this process, however, has not been demonstrated. A Coast Guard official in the ballast water program explained that issuance of voluntary best management practices was favored over regulations because of the relative speed with which they can be issued. Coastal Traffic. Ships traveling along U.S. coasts that do not travel farther than 200 nautical miles from any shore are also not required to conduct ballast water exchange or to retain their ballast water. One such group of ships includes those that travel within the EEZ from one U.S. port to another, such as from the Gulf of Mexico to the Chesapeake Bay. However, these ships may act as a vector for unwanted organisms between ports. The second group of ships falling in this category includes those that come from foreign ports but do not travel more than 200 nautical miles from any shore. These can include ships arriving from the Caribbean, Central America, South America, Panama Canal, and Canada. The Coast Guard regulations explicitly exempt ships traveling within 200 nautical miles of any shore from conducting ballast water exchange. However, these ships also represent a possible conduit for invasive species. Approximately 65 percent of ships arriving at U.S. ports from outside the EEZ—over 28,000 in 2003—do not travel more than 200 nautical miles from shore. Key stakeholders have raised concerns about this gap in regulatory coverage over coastal traffic.
For example, in commenting on the Coast Guard’s proposed regulations for national mandatory ballast water exchange, NOAA, the Fish and Wildlife Service, the states of Washington and Pennsylvania, the Northeast Aquatic Nuisance Species Task Force, a state port association, and environmental advocacy organizations expressed concern that coastal traffic was not addressed by the rulemaking. The Coast Guard has also acknowledged this gap. Specifically, the agency noted in its July 2003 assessment of the potential impacts of its new regulations on mandatory ballast water exchange and in its environmental assessment of the final regulations, that discharges from coastal shipping could result in the introduction or spread of invasive species within regions of the United States. However, the agency did not quantify the additional risks posed by coastal traffic nor did it discuss what should be done to mitigate those risks. Several of the issues described above revolve around the requirement that ballast water exchange be done at least 200 nautical miles from shore. However, Congress recognized that there might be areas within the 200-nautical-mile limit of the EEZ in which ballast water exchange might not be harmful. Congress required the Aquatic Nuisance Species Task Force to conduct a study to identify any possible areas within the waters of the United States and the EEZ where ballast water exchange would not pose a threat of infestation or spread of aquatic invasive species. NANPCA, as amended by NISA, called for Coast Guard regulations and guidelines to allow or encourage ships to exchange ballast water in alternate locations, based on the Task Force’s recommendations. The required study on alternate exchange areas was delivered to NOAA and EPA—members of the task force—in November 1998.
According to the study, it was impossible to guarantee that organisms in ballast water would not be transported by winds or currents toward suitable shoreside habitats when discharged within 200 nautical miles of shore. The study also noted that suitable discharge areas varied depending upon winds and currents at a particular time. However, in looking at conditions around the United States, the study identified many locations where it appeared that ballast water exchange could safely occur less than 200 nautical miles from shore. Ultimately, the Task Force did not recommend alternate discharge areas and the Coast Guard has not authorized ballast water exchange in any such areas under its regulations. In its 2004 final rule for the mandatory national ballast management program, the Coast Guard stated that it was examining the possibility of establishing alternate ballast water exchange zones and that information obtained at an October 2003 workshop, and future workshops, could provide a sound, scientific basis for establishing ballast water exchange zones within the EEZ. In 2004, the Massachusetts Institute of Technology published the proceedings from the October 2003 workshop. The workshop attendees—who included stakeholders from the marine industry, scientific community, policy makers, regulators, and nongovernmental organizations—developed a consensus statement regarding proposed alternate exchange zones along the northeastern coastline of the United States and Canada. The group proposed that alternate ballast water exchange areas, where there is consensus, be adopted as a working policy statement by both the United States and Canada for coastal vessel traffic until other treatment methods are available. In their statement, the attendees focused more on the depth of waters than on the distance from shore, noting that the continental shelf marks a location that helps determine whether organisms are likely to float toward shore or away from shore.
However, the Coast Guard reports that it has no plans to consider the use of alternate discharge zones. The ballast water program manager told us that designating alternate zones would take a significant amount of environmental analysis and a lengthy rulemaking process. She also said that alternate discharge zones will not be needed once other treatment technologies are installed on ships. While the United States has not identified alternate locations for conducting ballast water exchange, the IMO and other countries have proposed allowing, or already allow, ballast exchange to occur in locations closer than 200 nautical miles from shore. The IMO convention, should it take effect as adopted, states that all ships conducting ballast water exchange should, whenever possible, do so at least 200 nautical miles from the nearest land and in water at least 200 meters deep. However, the convention recognizes that exchange at that distance may not be possible; if not, exchange should be conducted as far from the nearest land as possible, and in all cases at least 50 nautical miles from the nearest land and in water at least 200 meters deep. Australia requires that exchange be done outside 12 nautical miles in water exceeding 200 meters in depth. The Canadian government proposed regulations in June 2005 that would allow transoceanic ships, unable to exchange ballast water more than 200 nautical miles from shore where the water is at least 2,000 meters deep because it would compromise the stability of the ship or the safety of the ship or of persons on board, to make the exchange in one of five alternate discharge zones that Canada’s Department of Fisheries and Oceans determined could receive ballast water with little risk. For non- transoceanic ships that do not travel at least 200 nautical miles from shore and in waters at least 2,000 meters deep (for example, ships arriving from U.S. 
ports that travel near the coast), the proposed regulations would require ships to exchange ballast water at least 50 nautical miles from shore where the water is at least 500 meters deep. If that were not practical or possible, the ships would be allowed to use an alternate discharge zone. The minimum allowable depth in the alternative areas would be from 300 to 1,000 meters. In 2002, we reported on numerous concerns about the effectiveness of ballast water exchange in removing potentially harmful organisms. There are two presumptions behind ballast water exchange as a method for ballast water treatment. First, it is presumed that the exchange will physically remove the water and organisms from ballast tanks. Second, ballast water exchange presumes that there are significant differences in the salinity of the original ballast water, mid-ocean water, and the ecosystem into which the water is ultimately discharged, such as the Great Lakes. If the original ballast water were fresh, organisms in that water would, in theory, not survive in the salt water taken on in mid-ocean. Similarly, any mid-ocean organisms taken on during the exchange would not survive in the fresh water of a destination port. Evidence has shown, however, that these presumptions are not always borne out. For one thing, ballast pumps are not always able to remove all of the original water, sediment, and associated organisms. In addition, elevated levels of salinity do not necessarily kill all forms of potentially invasive organisms. Therefore, scientists believe that viable organisms can survive ballast water exchange and possibly become invasive when discharged to a new environment. 
The National Research Council highlighted the need for alternatives to ballast water exchange by stating in its 1996 report on ballast water management, “while changing ballast may be an acceptable and effective control method under certain circumstances, it is neither universally applicable nor totally effective, and alternative strategies are needed.” We noted in our 2002 report that despite the high compliance rate with mandatory ballast water exchange in the Great Lakes, invasive organisms, such as the fish-hook water flea discovered in 1998, were still entering the ecosystem. Developers are pursuing technologies for use in treating ballast water, some of which show promise that a technical solution can be used to provide more reliable removal of potentially invasive species. However, the development of such technologies and their eventual use to meet regulatory requirements face many challenges, including the daunting technological challenges posed by the need for shipboard treatment systems and the lack of a discharge standard that would provide a target for developers to aim for in terms of treatment efficiency. Researchers and technology companies have been investigating the potential capabilities of many different ballast water treatment options, such as subjecting the water to filtration, cyclonic separation, ultraviolet radiation, chlorine, heat, ozone, or some combination of these methods. NOAA’s Ballast Water Technology Demonstration Program has assisted in this regard by providing over $12 million in grants to 54 research projects since 1998. Related to this issue, the International Maritime Organization convention on ballast water required an assessment of the state of treatment technology to determine whether appropriate technologies are available to achieve the standard proposed in the convention. Toward this end, the United States and five other member countries submitted assessments of the state of treatment technology development. 
The United States’ assessment was based on a study conducted by the Department of Transportation’s Volpe National Transportation Systems Center. The center assessed about a dozen potential ballast water technologies and identified four basic approaches that it believed were sufficiently well developed to indicate that effective and practicable systems will be available to treat ballast water to some measurable performance standard. These technologies are (1) heat, (2) chlorine dioxide, (3) separation followed by ultraviolet radiation, and (4) separation followed by advanced oxidation treatment. On the basis of this assessment, the United States took the position that developers of treatment technologies have made enough progress to suggest that the first proposed deadline in the convention could be met; namely, that ships built on or after 2009 and with a ballast water capacity of under 5,000 cubic meters could have treatment systems that meet the discharge standards. However, the United States also stated that it was too early to tell whether treatment systems would be available for other categories of ships that will need them at a later date. After reviewing and discussing the evidence on the status of technology development provided by the United States and other member countries, the IMO’s Marine Environment Protection Committee’s technology review group concluded that there was no need to amend the schedule for implementing the convention due to a lack of progress on technology, although it recommended that the committee reexamine the status of technology in October 2006. Several challenges hamper development and use of ballast water treatment technologies. First, development of such technologies is a daunting task given the many operational constraints under which the technologies must operate.
Beyond this hurdle, there is no discharge standard specifying how clean ballast water must be, which would help developers determine how effective their technologies need to be. Related to this, there is also no process for testing and approving technologies to determine how effective they are in removing potentially harmful organisms from ballast water. The Coast Guard and other agencies have some actions underway on these issues, but they have not committed to firm schedules for completion. The challenges of developing technologies to “treat” or remove potentially invasive species from ballast water are numerous. On the one hand, treating ballast water is not unlike treating household and industrial wastewater—now a rather routine treatment process. Like wastewater treatment facilities, ballast water treatment technology will need to be safe for the environment and crew, and achieve a specific level of pollutant removal (in the case of ballast water, removal of potentially invasive species). On the other hand, shipboard ballast water treatment systems will have to meet additional challenges that land-based wastewater treatment facilities do not, such as (1) treating large volumes of water at very high flow rates and (2) removing or killing a much broader range of biological organisms—including unknown organisms. Importantly, the treatment systems must be able to operate in a manner that does not compromise ship safety. In addition, to make any treatment option palatable to the shipping industry, the systems must not displace an unacceptable amount of valuable cargo space. Consequently, the technologies must be dramatically smaller in scale than those currently used in the wastewater industry while still achieving a high level of removal or “kill” rates. Further complicating matters, because ships differ in their structural designs, it is unlikely that one type of treatment technology will be appropriate for all types of ships.
And, depending on how regulations are written, ships may need to be retrofitted to incorporate treatment technology—a potentially complex and expensive proposition. When we reported in 2002, a key part of the Coast Guard’s effort to move forward on dealing more effectively with the ballast water problem was its work to develop a discharge standard for ballast water—that is, a standard for determining how “clean” ballast water should be before it could be discharged into U.S. waters. According to many stakeholders we have spoken with, one reason for the apparent slow progress on developing treatment technology is the lack of a discharge standard. Identifying a standard is necessary to provide a target for companies that develop treatment technologies. The lack of a discharge standard makes it uncertain what level of “cleanliness” treatment technologies will have to achieve. Companies may be hesitant to pursue research and development of a potential treatment technology not knowing what the standard may ultimately be—they stand to lose significant amounts of money if a standard turns in an unanticipated direction that they are unable to accommodate with their technology. In addition, until the shipping industry is required to meet some discharge standard, there is no incentive for ship owners to purchase ballast water treatment technology. In 2002, the Secretary of Transportation reported to Congress that he expected to have a final rule on a ballast water management standard in the fall of 2004. The Coast Guard has been working with the EPA and other agencies to prepare a proposed regulation that will contain a discharge standard as well as an assessment of the environmental impacts of five possible discharge standards. 
The five alternatives being analyzed are: (1) taking “no action,” which would mean continuing with ballast water exchange; (2) requiring that ballast water be sterilized before discharge; (3) matching the proposed IMO discharge standard; (4) allowing one-tenth the number of organisms allowed by the proposed IMO standard; and (5) allowing one-hundredth the number of organisms allowed by the proposed IMO standard. In December 2004, the Coast Guard announced that it expected to propose a discharge standard by December 2005; however, the agency has since retracted that plan and was not able to give us a new date. Complicating the development of technology is the lack of a process to approve ballast water treatment systems for use on ships. In August 2004, the Coast Guard published a Federal Register notice requesting comments by December 3, 2004, on how to establish a program to approve alternative ballast water management methods. The agency stated in the notice its intention to promulgate the new program in the near future, but it has yet to do so. In the meantime, the Coast Guard, EPA, and the Navy have collaborated on preparing laboratory facilities in Key West, Florida, that will be used to verify the performance of ballast water treatment technologies. According to the Coast Guard, the agencies will begin to test the new facilities in a few weeks. On a parallel track, NOAA’s Ballast Water Technology Demonstration Program hopes to help address this gap as well by establishing a Research, Development, Test and Evaluation facility. This facility would be directed to establish standardization and quality control in experiments on ballast water technology. Current plans are to devote nearly $1 million to this facility over a 4-year period beginning in fiscal year 2006; depending on funding availability, operation of the facility could be continued.
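To make the numeric alternatives concrete, the sketch below scales a baseline standard by the one-tenth and one-hundredth factors described above. The baseline concentration limits used here are an assumption drawn from the proposed IMO convention's widely cited limits (roughly, fewer than 10 viable organisms per cubic meter for organisms at least 50 micrometers in size, and fewer than 10 viable organisms per milliliter for organisms between 10 and 50 micrometers); those figures come from the convention, not from this statement.

```python
# Sketch: comparing the three numeric discharge-standard alternatives.
# Baseline values are the IMO convention's proposed concentration limits
# (assumed here; they are not stated in this testimony).
IMO_LIMITS = {
    "organisms >= 50 um (per m^3)": 10.0,
    "organisms 10-50 um (per mL)": 10.0,
}

# Alternatives (3), (4), and (5) from the environmental analysis:
# the IMO standard, one-tenth of it, and one-hundredth of it.
ALTERNATIVES = {
    "match IMO standard": 1.0,
    "1/10 of IMO standard": 0.1,
    "1/100 of IMO standard": 0.01,
}

def allowed_concentrations(scale):
    """Maximum allowed organism concentration for each size class."""
    return {size: limit * scale for size, limit in IMO_LIMITS.items()}

for name, scale in ALTERNATIVES.items():
    print(name, allowed_concentrations(scale))
```

Each tenfold tightening of the standard raises the removal efficiency a treatment system would have to demonstrate, which is one reason developers want the standard settled before committing capital.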
In addition, EPA’s Environmental Technology Verification program is working to develop testing protocols in order to verify treatment technologies for eventual approval. In 2004, the Coast Guard implemented a new program intended to encourage ship owners to test potential treatment technologies on their ships. With the Shipboard Technology Evaluation Program (STEP), the agency hopes to encourage ship owners to install experimental treatment technologies by agreeing that vessels accepted into the program may be granted an exemption from future ballast water discharge standards for up to the life of the vessel or the system. Notably, the program approves the use of a system on a single ship; it does not approve the use of that system for other ships. To be accepted into the program, the experimental technology needs to be capable of removing or killing at least 98 percent of organisms larger than 50 microns. To date, only two ship owners have applied to this program, but the Coast Guard has not yet accepted their applications. The Coast Guard has recognized that the application process is complex and plans to clarify it in the hope of attracting more applicants. Representatives of technology developers, shipping interests, and other stakeholders have offered several reasons for the low participation in the program. According to the stakeholders we spoke with, the primary reason is the lack of a defined discharge standard, rather than any particular aspect of the STEP program itself. The lack of a discharge standard, as well as the fact that use of ballast water treatment technology is not currently required, has made it difficult for technology developers to gather the venture capital needed to proceed aggressively on development. Consequently, few technologies are ready to be installed and tested on board ships.
One representative of a technology firm believes the Coast Guard should expand the size of the STEP program to provide more incentive to shipping companies and technology developers that want to test variations of technologies or test their technology on different types of ships. Currently, the agency is limiting the number of applicants to about 5 or 6 per year and expects each application to cover just one ship. Another stakeholder echoed this point, saying that the program requires ship owners to go to great lengths for the benefit of getting one ship approved. One representative of a shipping association speculated that, although the STEP program is open to foreign companies, another possible reason for low participation is that foreign ships may spend little time in the United States. Stakeholders involved in technology development told us that it has also been hampered by a lack of resources. I have already noted that without a discharge standard or requirements for use of treatment technologies, it is difficult for companies to expend significant resources on development. In addition, as technology development progresses, the scale of testing required will increase and move beyond what can be done in a laboratory. At this point, developers will need to conduct “operational” testing on board ships. However, estimates for shipboard studies exceed $1 million. Given these disincentives to pursuing development in this time of uncertainty, technology development will likely remain a problem. As we reported in 2002, some states have expressed frustration with the federal government’s progress on establishing a more protective federal program for managing the risks associated with ballast water discharges. Since then, several coastal and Great Lakes states have enacted legislation that is more stringent than current federal regulations.
As you know, in June 2005, the governor of Michigan signed a bill into law that will require all oceangoing vessels to obtain a state permit before discharging ballast water into state waters. The state will issue the permit only if the applicant can demonstrate that the vessel will not discharge aquatic nuisance species or, if it will, that the operator of the vessel will use environmentally sound technology and methods, as determined by the state department, that can be used to prevent the discharge of aquatic invasive species. This requirement takes effect January 1, 2007. Similarly, owing to concerns with possible species introductions via currently unregulated coastal shipping, California, Oregon, and Washington have enacted laws to regulate coastal traffic. The states’ laws provide for additional measures that ships must currently take or will have to take in the future before entering state waters. All three states provide for safety exemptions. California. California law required the State Lands Commission to adopt new regulations governing ballast water management practices for ships of 300 gross tons or more arriving at a California port or place from outside of the Pacific Coast Region by January 1, 2005. The California State Lands Commission has proposed, but not yet finalized, these regulations. Upon implementation of the regulations, California law will require each such ship to employ at least one of the following ballast water management practices: (1) exchange its ballast water more than 200 miles from land and at least 2,000 meters deep before entering the state’s coastal waters; (2) retain its ballast water; (3) discharge water at the same location where the ballast water originated; (4) use an alternative, environmentally sound method; (5) discharge the ballast water to a reception facility approved by the commission; or (6) under extraordinary circumstances, exchange ballast water within an area agreed upon by the commission and the Coast Guard.
The proposed California regulation would require ships carrying ballast water from within the Pacific Coast Region to conduct any ballast water exchange in waters that are more than 50 miles from land and at least 200 meters deep. Oregon. Oregon law prohibits certain ships from discharging ballast water in Oregon waters unless the ship has conducted a ballast water exchange more than 200 miles from any shore, or at least 50 miles from land and at a depth of at least 200 meters if its ballast water was taken onboard at a North American coastal port. Oregon exempts ships that: (1) discharge ballast water only at the location where the ballast water originated; (2) retain their ballast water; (3) traverse only internal state waters; (4) traverse only the territorial sea of the U.S. and do not enter or depart an Oregon port or navigate state waters; (5) discharge ballast water that has been treated to remove organisms in a manner that is approved by the Coast Guard; or (6) discharge ballast water that originated solely from waters located between 40 degrees latitude north and 50 degrees latitude north on the west coast. Washington. Washington’s ballast water law applies to self-propelled ships in commerce of 300 gross tons or more and prohibits discharging ballast water into state waters unless a ship has conducted an exchange of ballast water 50 miles or more offshore, or further offshore if required by the Coast Guard. Some ships are exempt from this requirement, including ships that retain their ballast water or that discharge ballast water or sediments only at the location where ballast water was taken on. The coordinator of Washington’s aquatic nuisance species program told us that during the legislative process, shipping industry representatives and oceanographic experts concurred that the 50-mile boundary for exchange was both feasible for the ships and protective against invasive species. 
After July 1, 2007, discharge of ballast water in state waters will be authorized only if there has been an exchange at least 50 miles offshore or if the vessel has treated its ballast water to meet standards set by the Washington Department of Fish and Wildlife. Madam Chairman, this concludes my prepared statement. I would be happy to respond to any questions you or other Members of the Subcommittee may have. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

Numerous invasive species have been introduced into U.S. waters via ballast water discharged from ships and have caused serious economic and ecological damage. GAO reported in 2002 that at least 160 nonnative aquatic species had become established in the Great Lakes since the 1800s, one-third of which were introduced in the past 30 years by ballast water and other sources. The effects of such species are not trivial; the zebra mussel alone is estimated to have caused $750 million to $1 billion in costs between 1989 and 2000. Species introductions via ballast water are not confined to the Great Lakes, however. The environment and economy of the Chesapeake Bay, San Francisco Bay, Puget Sound, and other U.S. waters have also been adversely affected. The federal government has been taking steps since 1990 to implement programs to prevent the introduction of invasive species from ships' ballast water discharges. However, species introductions are continuing.
This testimony discusses the legislative and regulatory history of ballast water management and identifies some of the issues that pose challenges for the federal government's program for preventing the introduction of invasive species via ships' ballast water. Congress recognized ballast water as a serious problem in 1990 with passage of the Nonindigenous Aquatic Nuisance Prevention and Control Act, legislation intended to help reduce the number of species introductions in the Great Lakes. A reauthorization of this law in 1996, the National Invasive Species Act, elevated ballast water management to a national level. As directed by the legislation, the federal government has promulgated several regulations requiring certain ships to take steps, such as exchanging their ballast water in the open ocean to flush it of potentially harmful organisms, to reduce the likelihood of species invasions via ballast water. Initially these regulations applied only to certain ships entering the Great Lakes; now they apply to certain ships entering all U.S. ports. In addition to these domestic developments, the United Nations' International Maritime Organization has recently adopted a convention on ballast water management that could affect the global fleet. Since 1998, Coast Guard data show that compliance with existing ballast water exchange requirements has generally been high. However, key agencies and stakeholders recognize that the current ballast water exchange program is not a viable long-term approach to minimizing the risks posed by ballast water discharges. The primary reasons for this are that (1) many ships are exempt from current ballast water exchange requirements, (2) the Coast Guard has not established alternate discharge zones that could be used by ships unable to conduct ballast water exchange for various reasons, and (3) ballast water exchange is not always effective at removing or killing potentially invasive species.
Developers are pursuing technologies to provide more reliable alternatives to ballast water exchange, some of which show promise. However, development of such technologies and their eventual use to meet ballast water regulatory requirements face many challenges including the daunting technological task of developing large scale water treatment systems that ships can accommodate, and the lack of a federal discharge standard that would provide a target for developers to aim for in terms of treatment efficiency. As a result, ballast water exchange is still the only approved method for treating ballast water despite the concerns with this method's effectiveness. Consequently, U.S. waters remain vulnerable to the introduction of invasive species via ships' ballast water. State governments and others have expressed frustration over the seemingly slow progress the federal government has made on more effectively protecting U.S. waters from future species invasions via ballast water. As a result, several states have passed legislation that authorizes procedures for managing ballast water that are stricter than federal regulations. |
As we reported in April 2011, ICE CTCEU investigates and arrests a small portion of the estimated in-country overstay population due to, among other things, ICE’s competing priorities; however, these efforts could be enhanced by improved planning and performance management. CTCEU, the primary federal entity responsible for taking enforcement action to address in-country overstays, identifies leads for overstay cases; takes steps to verify the accuracy of the leads it identifies by, for example, checking leads against multiple databases; and prioritizes leads to focus on those the unit identifies as being most likely to pose a threat to national security or public safety. CTCEU then requires field offices to initiate investigations on all priority, high-risk leads it identifies. According to CTCEU data, as of October 2010, ICE field offices had closed about 34,700 overstay investigations that CTCEU headquarters assigned to them from fiscal year 2004 through 2010. These cases resulted in approximately 8,100 arrests (about 23 percent of the 34,700 investigations), relative to a total estimated overstay population of 4 million to 5.5 million. About 26,700 of those investigations (or 77 percent) resulted in one of these three outcomes: (1) evidence is uncovered indicating that the suspected overstay has departed the United States; (2) evidence is uncovered indicating that the subject of the investigation is in-status (e.g., the subject filed a timely application with the United States Citizenship and Immigration Services (USCIS) to change his or her status and/or extend his or her authorized period of admission in the United States); or (3) CTCEU investigators exhaust all investigative leads and cannot locate the suspected overstay. 
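As a quick consistency check, the outcome percentages cited above follow directly from the case counts; a minimal sketch using only figures stated in the text:

```python
# Check of the investigation-outcome figures cited in the report
# (all counts come from the text; "about" values are rounded).
closed_investigations = 34_700   # CTCEU-assigned cases closed, FY2004-2010
arrests = 8_100                  # investigations resulting in arrest
other_outcomes = 26_700          # departed, in-status, or all leads exhausted

arrest_share = arrests / closed_investigations
other_share = other_outcomes / closed_investigations

print(f"arrests: {arrest_share:.0%}")        # about 23 percent
print(f"other outcomes: {other_share:.0%}")  # about 77 percent
```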
Of the approximately 34,700 overstay investigations assigned by CTCEU headquarters that ICE field offices closed from fiscal year 2004 through 2010, ICE officials generally attributed the significant portion of overstay cases that resulted in a departure finding, an in-status finding, or exhaustion of all leads to difficulties associated with locating suspected overstays and the timeliness and completeness of data in DHS’s systems used to identify overstays. Further, ICE reported allocating a small percentage of its resources in terms of investigative work hours to overstay investigations since fiscal year 2006, but the agency expressed an intention to augment the resources it dedicates to overstay enforcement efforts moving forward. Specifically, from fiscal years 2006 through 2010, ICE reported devoting from 3.1 to 3.4 percent of its total field office investigative hours to CTCEU overstay investigations. ICE attributed the small percentage of investigative resources it reported allocating to overstay enforcement efforts primarily to competing enforcement priorities. According to the ICE Assistant Secretary, ICE has resources to remove 400,000 aliens per year, or less than 4 percent of the estimated removable alien population in the United States. In June 2010, the Assistant Secretary stated that ICE must prioritize the use of its resources to ensure that its efforts to remove aliens reflect the agency’s highest priorities, namely nonimmigrants, including suspected overstays, who are identified as high risk in terms of being most likely to pose a risk to national security or public safety. As a result, ICE dedicated its limited resources to addressing overstays it identified as most likely to pose a potential threat to national security or public safety and did not generally allocate resources to address suspected overstays that it assessed as noncriminal and low risk.
ICE indicated that it may allocate more resources to overstay enforcement efforts moving forward and that it planned to focus primarily on suspected overstays whom ICE has identified as high risk or who recently overstayed their authorized periods of admission. ICE was considering assigning some responsibility for noncriminal overstay enforcement to its Enforcement and Removal Operations (ERO) directorate, which has responsibility for apprehending and removing from the United States aliens who do not have lawful immigration status. However, ERO did not plan to assume this responsibility until ICE assessed the funding and resources doing so would require. ICE had not established a time frame for completing this assessment. We reported in April 2011 that by developing such a time frame and utilizing the assessment findings, as appropriate, ICE could strengthen its planning efforts and be better positioned to hold staff accountable for completing the assessment. We recommended that ICE establish a target time frame for assessing the funding and resources ERO would require in order to assume responsibility for civil overstay enforcement and use the results of that assessment, as appropriate. DHS officials agreed with our recommendation and stated that ICE planned to identify resources needed to transition this responsibility to ERO as part of its fiscal year 2013 resource-planning process. Moreover, although CTCEU established an output program goal and target, and tracked various performance measures, it did not have a mechanism in place to assess the outcomes of its efforts, particularly the extent to which the program was meeting its mission as it relates to overstays—to prevent terrorists and other criminals from exploiting the nation’s immigration system.
CTCEU’s program goal is to prevent criminals and terrorists from exploiting the immigration system by proactively developing cases for investigation, and its performance target is to send 100 percent of verified priority leads to field offices as cases. CTCEU also tracks a variety of output measures, such as the number of cases completed and their associated results (i.e., arrested, departed, in-status, or all leads exhausted) and the average hours spent to complete an investigation. While CTCEU’s performance target permits it to assess an output internal to the program—the percentage of verified priority leads it sends to field offices for investigation—it does not provide program officials with a means to assess the impact of the program in terms of preventing terrorists and other criminals from exploiting the immigration system. We reported that by establishing such mechanisms, CTCEU could better ensure that managers have information to assist in making decisions for strengthening overstay enforcement efforts and assessing performance against CTCEU’s goals. In our April 2011 report, we recommended that ICE develop outcome-based performance measures—or proxy measures if program outcomes cannot be captured—and associated targets on CTCEU’s progress in preventing terrorists and other criminals from exploiting the nation’s immigration system. DHS officials agreed with our recommendation and stated that ICE planned to work with DHS’s national security partners to determine if measures could be implemented. In addition to ICE’s overstay enforcement activities, in April 2011 we reported that the Department of State and CBP are responsible for, respectively, preventing ineligible violators from obtaining a new visa or being admitted to the country at a port of entry.
According to Department of State data, the department denied about 52,800 nonimmigrant visa applications and about 114,200 immigrant visa applications from fiscal year 2005 through fiscal year 2010 due, at least in part, to the applicants’ having previously been unlawfully present in the United States for more than 180 days, as provided by statute. Similarly, CBP reported that it refused admission to about 5,000 foreign nationals applying for admission to the United States from fiscal year 2005 through 2010 (an average of about 830 per year) specifically because of the applicants’ previous status as unlawfully present in the United States for more than 180 days. DHS has not yet implemented a comprehensive biometric system to match available information provided by foreign nationals upon their arrival in and departure from the United States. In August 2007, we reported that while US-VISIT biometric entry capabilities were operating at air, sea, and land ports of entry, exit capabilities were not, and that DHS did not have a comprehensive plan or a complete schedule for biometric exit implementation. In addition, we reported that DHS continued to propose spending tens of millions of dollars on US-VISIT exit projects that were not well-defined, planned, or justified on the basis of costs, benefits, and risks. Moreover, in November 2009, we reported that DHS had not adopted an integrated approach to scheduling, executing, and tracking the work that needed to be accomplished to deliver a comprehensive exit solution as part of the US-VISIT program. We concluded that, without a master schedule that was integrated and derived in accordance with relevant guidance, DHS could not reliably commit to when and how it would deliver a comprehensive exit solution or adequately monitor and manage its progress toward this end. We recommended that DHS ensure that an integrated master schedule be developed and maintained.
DHS concurred and reported, as of July 2011, that the documentation of schedule practices and procedures was ongoing, and that an updated schedule standard, management plan, and management process that comply with schedule guidelines were under review. More specifically, with regard to a biometric exit capability at land ports of entry, we reported in December 2006 that US-VISIT officials concluded that, for various reasons, a biometric US-VISIT exit capability could not be implemented without incurring a major impact on land facilities. In December 2009, DHS initiated a land exit pilot to collect departure information from temporary workers traveling through two Arizona land ports of entry. Under this pilot, temporary workers who entered the United States at these ports of entry were required to register their final departure by providing biometric and biographic information at exit kiosks located at the ports of entry. DHS planned to use the results of this pilot to help inform future decisions on the pedestrian portion of the long-term land exit component of a comprehensive exit system. With regard to air and sea ports of entry, in April 2008, DHS announced its intention to implement biometric exit verification at air and sea ports of entry in a Notice of Proposed Rule Making. Under this notice, commercial air and sea carriers would be responsible for developing and deploying the capability to collect biometric information from departing travelers and transmit it to DHS. DHS received comments on the notice but has not yet published a final rule. Subsequent to the rule making notice, on September 30, 2008, the Consolidated Security, Disaster Assistance, and Continuing Appropriations Act, 2009, was enacted, which directed DHS to test two scenarios for an air exit solution: (1) airline collection and transmission of biometric exit data, as proposed in the rule making notice, and (2) CBP collection of such information at the departure gate.
DHS conducted two pilots in 2009, and we reported on them in August 2010. Specifically, we reported that the pilots addressed one statutory requirement for a CBP scenario to collect information on exiting foreign nationals. However, DHS was unable to address the statutory requirement for an airline scenario because no airline was willing to participate. We also reported that limitations in the pilots’ scope and approach, including some not defined in the pilot evaluation plan (such as suspending exit screening at departure gates to avoid flight delays), curtailed the pilots’ ability to inform a decision for a long-term air exit solution and pointed to the need for additional sources of information on air exit’s operational impacts. We recommended that the Secretary of Homeland Security identify additional sources of information beyond the pilots, such as comments from the Notice of Proposed Rule Making, to inform an air exit solution decision. DHS agreed with the recommendation and stated that the pilots it conducted would not serve as the sole source of information to inform an air exit solution decision. In July 2011, DHS stated that it continues to examine all options in connection with a final biometric air exit solution and has recently given consideration to using its authority to establish an advisory committee to study and provide recommendations to DHS and Congress on implementing an air exit program. In the absence of a comprehensive biometric entry and exit system for identifying and tracking overstays, US-VISIT and CTCEU primarily analyze biographic entry and exit data collected at land, air, and sea ports of entry to identify overstays. In April 2011, we reported that DHS’s efforts to identify and report on visa overstays were hindered by unreliable data.
Specifically, CBP does not inspect travelers exiting the United States through land ports of entry and thus does not collect their biometric information, and CBP did not provide a standard mechanism for nonimmigrants departing the United States through land ports of entry to remit their arrival and departure forms. Nonimmigrants departing the United States through land ports of entry turn in their forms on their own initiative. According to CBP officials, at some ports of entry, CBP provides a box for nonimmigrants to drop off their forms, while at other ports of entry departing nonimmigrants may park their cars, enter the port of entry facility, and provide their forms to a CBP officer. These forms contain information, such as arrival and departure dates, used by DHS to identify overstays. If the benefits outweigh the costs, a standard mechanism for collecting these forms could help DHS obtain more complete and reliable departure data for identifying overstays. We recommended that the Commissioner of CBP analyze the costs and benefits of developing a standard mechanism for collecting these forms at land ports of entry, and develop such a mechanism to the extent that the benefits outweigh the costs. CBP agreed with our recommendation and stated that it planned to complete a cost-effective independent evaluation. Further, we previously reported on weaknesses in DHS processes for collecting departure data and on how these weaknesses affect the determination of overstay rates. The Implementing Recommendations of the 9/11 Commission Act required that DHS certify that a system is in place that can verify the departure of not less than 97 percent of foreign nationals who depart through U.S. airports in order for DHS to expand the Visa Waiver Program.
In September 2008, we reported that DHS’s methodology for comparing arrivals and departures for the purpose of departure verification would not inform overall or country-specific overstay rates because DHS’s methodology did not begin with arrival records to determine if those foreign nationals departed or remained in the United States beyond their authorized periods of admission. Rather, DHS’s methodology started with departure records and matched them to arrival records. As a result, DHS’s methodology counted overstays who left the country but did not identify overstays who had not departed the United States and appeared to have no intention of leaving. We recommended that DHS explore cost-effective actions necessary to further improve the reliability of overstay data. DHS reported that it was taking steps to improve the accuracy and reliability of the overstay data through efforts such as continuing to audit carrier performance and working with airlines to improve the accuracy and completeness of data collection. Moreover, by statute, DHS is required to submit an annual report to Congress providing numerical estimates of the number of aliens from each country in each nonimmigrant classification who overstayed an authorized period of admission that expired during the fiscal year prior to the year for which the report is made. DHS officials stated that the department has not provided Congress annual overstay estimates regularly since 1994 because officials do not have sufficient confidence in the quality of the department’s overstay data—which is maintained and generated by US-VISIT. As a result, DHS officials stated that the department cannot reliably report overstay rates in accordance with the statute.
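The difference between the two matching approaches can be illustrated with a short sketch. The record layout, traveler identifiers, dates, and matching rules below are invented for illustration only; they do not represent DHS's actual data systems or methodology.

```python
# Illustrative sketch (hypothetical data): why a methodology that starts
# from departure records cannot identify overstays who never left.
from datetime import date

# Each arrival maps a hypothetical traveler ID to the date the authorized
# period of admission ends; each departure maps an ID to the departure date.
arrivals = {
    "A": date(2008, 1, 10),
    "B": date(2008, 2, 1),
    "C": date(2008, 3, 15),
}
departures = {
    "A": date(2008, 1, 5),   # left on time
    "B": date(2008, 6, 1),   # left late: an overstay who departed
    # "C" has no departure record: a possible in-country overstay
}

# Departure-based matching: only travelers with a departure record can
# ever be flagged, so traveler "C" is invisible to this approach.
departure_based = {t for t, left in departures.items() if left > arrivals[t]}

# Arrival-based matching: start from every admission, so travelers with
# no departure record by a cutoff date surface as suspected overstays.
cutoff = date(2008, 12, 31)
arrival_based = {
    t for t, authorized in arrivals.items()
    if departures.get(t, cutoff) > authorized
}

print(departure_based)  # {'B'}: misses "C" entirely
print(arrival_based)    # includes "C", the suspected in-country overstay
```

In this toy example, the departure-based approach finds only the overstay who eventually left, while the arrival-based approach also surfaces the traveler with no departure record at all, which is the gap GAO described.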
In addition, in April 2011 we reported that DHS took several steps to provide its component entities and other federal agencies with information to identify and take enforcement action on overstays, including creating biometric and biographic lookouts—or electronic alerts—on the records of overstay subjects that are recorded in databases. However, DHS did not create lookouts for the following two categories of overstays: (1) temporary visitors who were admitted to the United States using nonimmigrant business and pleasure visas and subsequently overstayed by 90 days or less; and (2) suspected in-country overstays whom CTCEU deemed not to be a priority for investigation in terms of being most likely to pose a threat to national security or public safety. Broadening the scope of electronic lookouts in federal information systems could enhance overstay information sharing. In April 2011, we recommended that the Secretary of Homeland Security direct the Commissioner of Customs and Border Protection, the Under Secretary of the National Protection and Programs Directorate, and the Assistant Secretary of Immigration and Customs Enforcement to assess the costs and benefits of creating biometric and biographic lookouts for these two categories of overstays. Agency officials agreed with our recommendation and have actions under way to address it. For example, agency officials stated that they have met to assess the costs and benefits of creating lookouts for those categories of overstays. As we reported in March 2011, the Visa Security Program faces several key challenges in implementing operations at overseas posts. For example, we reported that Visa Security Program agents’ advising and training of consular officers, as mandated by section 428 of the Homeland Security Act, varied from post to post, and agents at some posts provided no training to consular officers.
We contacted consular sections at 13 overseas posts, and officials from 5 of the 13 consular sections we interviewed stated that they had received no training from the Visa Security Program agents in the last year, and none of the agents we interviewed reported providing training on specific security threats. At posts where Visa Security Program agents provided training for consular officers, topics covered included fraudulent documents, immigration law, human smuggling, and interviewing techniques. In March 2011, we recommended that DHS issue guidance requiring Visa Security Program agents to provide training for consular officers as mandated by section 428 of the Homeland Security Act. DHS concurred with our recommendation and has actions under way to address it. Further, in March 2011 we reported that Visa Security Program agents performed a variety of investigative and administrative functions beyond their visa security responsibilities, including criminal investigations, attaché functions, and regional responsibilities. According to ICE officials, Visa Security Program agents perform non-program functions only after completing their visa security screening and vetting workload. However, both agents and Department of State officials at some posts told us that these other investigative and administrative functions sometimes slowed or limited Visa Security Program agents’ visa security-related activities. We recommended that DHS develop a mechanism to track the amount of time spent by Visa Security Program agents on visa security activities and other investigations, in order to determine appropriate staffing levels and resource needs for Visa Security Program operations at posts overseas to ensure visa security operations are not limited. DHS did not concur with our recommendation, stating that ICE currently tracks case investigation hours through its data system, and that adding the metric to the Visa Security Program tracking system would be redundant. 
However, DHS’s response did not address our finding that ICE does not have a mechanism that allows the agency to track both the hours agents spend on investigations and the hours they spend on visa security activities. Therefore, we continue to believe the recommendation has merit and should be implemented. Moreover, we found that ICE’s use of 30-day temporary duty assignments to fill Visa Security Program positions at posts created challenges and affected continuity of operations at some posts. Consular officers we interviewed at 3 of 13 posts discussed challenges caused by this use of temporary duty agents. The Visa Security Program’s 5-year plan also identified recruitment of qualified personnel as a challenge and recommended incentives for Visa Security Program agents as critical to the program’s mission, stating, “These assignments present significant attendant lifestyle difficulties. If the mission is to be accomplished, ICE, like State, needs a way to provide incentives for qualified personnel to accept these hardship assignments.” However, according to ICE officials, ICE had not provided incentives to facilitate recruitment for hardship posts. ICE officials stated that they have had difficulty attracting agents to Saudi Arabia, and ICE agents at post told us they have little incentive to volunteer for Visa Security Program assignments. Thus, we recommended that DHS develop a plan to provide Visa Security Program coverage at high-risk posts where the possibility of deploying agents may be limited. DHS agreed with our recommendation and is taking steps to implement it. In addition, ICE developed a plan to expand the Visa Security Program to additional high-risk visa-issuing posts, but ICE had not fully adhered to the plan or kept it up to date.
The program’s 5-year expansion plan, developed in 2007, identified 14 posts for expansion between 2009 and 2010, but 9 of these locations had not been established at the time of our March 2011 report, and ICE had not updated the plan to reflect the current situation. Furthermore, ICE had not fully addressed remaining visa risk at high-risk posts that did not have a Visa Security Program presence. ICE, with input from the Department of State, developed a list of worldwide visa-issuing posts that are ranked according to visa risk. Although the expansion plan stated that risk analysis is the primary input to Visa Security Program site selection and that the expansion plan represented an effort to address visa risk, ICE had not expanded the Visa Security Program to some high-risk posts. For example, 11 of the top 20 high-risk posts identified by ICE and the Department of State were not covered by the Visa Security Program at the time of our review. The expansion of the Visa Security Program may be limited by a number of factors—including budget limitations and objections from Department of State officials at some posts—and ICE had not identified possible alternatives that would provide the additional security of Visa Security Program review at those posts that do not have a program presence. In May 2011, we recommended that DHS develop a plan to provide Visa Security Program coverage at high-risk posts where the possibility of deploying agents may be limited. DHS concurred with our recommendation and noted actions under way to address it, such as enhancing information technology systems to allow for screening and reviewing of visa applicants at posts worldwide. As we reported in May 2011, DHS implemented the Electronic System for Travel Authorization (ESTA) to meet a statutory requirement intended to enhance Visa Waiver Program security and took steps to minimize the burden on travelers to the United States added by the new requirement.
However, DHS had not fully evaluated security risks related to the small percentage of Visa Waiver Program travelers without verified ESTA approval. DHS developed ESTA to collect passenger data and complete security checks on the data before passengers board a U.S. bound carrier. DHS requires applicants for Visa Waiver Program travel to submit biographical information and answers to eligibility questions through ESTA prior to travel. Travelers whose ESTA applications are denied can apply for a U.S. visa. In developing and implementing ESTA, DHS took several steps to minimize the burden associated with ESTA use. For example, ESTA reduced the requirement that passengers provide biographical information to DHS officials from every trip to once every 2 years. In addition, because of ESTA, DHS has informed passengers who do not qualify for Visa Waiver Program travel that they need to apply for a visa before they travel to the United States. Moreover, most travel industry officials we interviewed in six Visa Waiver Program countries praised DHS’s widespread ESTA outreach efforts, reasonable implementation time frames, and responsiveness to feedback but expressed dissatisfaction over ESTA fees paid by ESTA applicants. In 2010, airlines complied with the requirement to verify ESTA approval for almost 98 percent of the Visa Waiver Program passengers prior to boarding, but the remaining 2 percent—about 364,000 travelers—traveled under the Visa Waiver Program without verified ESTA approval. In addition, about 650 of these passengers traveled to the United States with a denied ESTA. As we reported in May 2011, DHS had not yet completed a review of these cases to know to what extent they pose a risk to the program.
DHS officials told us that, although there was no official agency plan for monitoring and oversight of ESTA, the ESTA office was undertaking a review of each case of a carrier’s boarding a Visa Waiver Program traveler without an approved ESTA application; however, DHS had not established a target date for completing this review. DHS tracked some data on passengers who travel under the Visa Waiver Program without verified ESTA approval but did not track other data that would help officials know the extent to which noncompliance poses a risk to the program. Without a completed analysis of noncompliance with ESTA requirements, DHS was unable to determine the level of risk that noncompliance poses to Visa Waiver Program security and to identify improvements needed to minimize noncompliance. In addition, without analysis of data on travelers who were admitted to the United States without a visa after being denied by ESTA, DHS cannot determine the extent to which ESTA is accurately identifying individuals who should be denied travel under the program. In May 2011, we recommended that DHS establish time frames for the regular review and documentation of cases of Visa Waiver Program passengers traveling to a U.S. port of entry without verified ESTA approval. DHS concurred with our recommendation and committed to establishing procedures to review quarterly a representative sample of noncompliant passengers to evaluate, identify, and mitigate potential security risks associated with the ESTA program. Further, in May 2011 we reported that to meet certain statutory requirements, DHS requires that Visa Waiver Program countries enter into three information-sharing agreements with the United States; however, only half of the countries had fully complied with this requirement and many of the signed agreements had not been implemented.
Half of the countries entered into agreements to share watchlist information about known or suspected terrorists and to provide access to biographical, biometric, and criminal history data. By contrast, almost all of the 36 Visa Waiver Program countries entered into an agreement to report lost and stolen passports. DHS, with the support of interagency partners, established a compliance schedule requiring the last of the Visa Waiver Program countries to finalize these agreements by June 2012. Although termination from the Visa Waiver Program is one potential consequence for countries not complying with the information-sharing agreement requirement, U.S. officials have described it as undesirable. DHS, in coordination with the Departments of State and Justice, developed measures short of termination that could be applied to countries not meeting their compliance date. In addition, as of May 2011, DHS had not completed half of the most recent biennial reports on Visa Waiver Program countries’ security risks in a timely manner. In 2002, Congress mandated that, at least once every 2 years, DHS evaluate the effect of each country’s continued participation in the program on the security, law enforcement, and immigration interests of the United States. The mandate also directed DHS to determine, based on the evaluation, whether each Visa Waiver Program country’s designation should continue or be terminated and to submit a written report on that determination to select congressional committees. According to officials, DHS assesses, among other things, counterterrorism capabilities and immigration programs. However, DHS had not completed the latest biennial reports for 18 of the 36 Visa Waiver Program countries in a timely manner, and over half of these reports were more than 1 year overdue. Further, in the case of 2 countries, DHS was unable to demonstrate that it had completed reports in the last 4 years. DHS cited a number of reasons for the reporting delays.
For example, DHS officials said that they intentionally delayed report completion because they frequently did not receive mandated intelligence assessments in a timely manner and needed to review these before completing Visa Waiver Program country biennial reports. We recommended that DHS take steps to address delays in the biennial country review process so that the mandated country reports can be completed on time. DHS concurred with our recommendation and reported that it would consider process changes to address our concerns with the timeliness of continuing Visa Waiver Program reports. This concludes my prepared statement. I would be pleased to respond to any questions that members of the Subcommittee may have. For further information regarding this testimony, please contact Richard M. Stana at (202) 512-8777 or [email protected]. In addition, contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this testimony are Rebecca Gambler, Assistant Director; Jeffrey Baldwin-Bott; Frances Cook; David Hinchman; Jeremy Manion; Taylor Matheson; Jeff Miller; Anthony Moran; Jessica Orr; Zane Seals; and Joshua Wiener. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

The attempted bombing of an airline on December 25, 2009, by a Nigerian citizen with a valid U.S. visa renewed concerns about the security of the visa process. Further, unauthorized immigrants who entered the country legally on a temporary basis but then overstayed their authorized periods of admission—overstays—could pose homeland security risks.
The Department of Homeland Security (DHS) has certain responsibilities for security in the visa process and for addressing overstays. DHS staff review visa applications at certain Department of State overseas posts under the Visa Security Program. DHS also manages the Visa Waiver Program through which eligible nationals from certain countries can travel to the United States without a visa. This testimony is based on GAO products issued in November 2009, August 2010, and from March to May 2011. As requested, this testimony addresses the following issues: (1) overstay enforcement efforts, (2) efforts to implement a biometric exit system and challenges with the reliability of overstay data, and (3) challenges in the Visa Security and Visa Waiver programs. Federal agencies take actions against a small portion of the estimated overstay population, but strengthening planning and assessment of overstay efforts could improve enforcement. Within DHS, U.S. Immigration and Customs Enforcement's (ICE) Counterterrorism and Criminal Exploitation Unit (CTCEU) is the lead agency responsible for overstay enforcement. CTCEU arrests a small portion of the estimated overstay population in the United States because of, among other things, ICE's competing priorities, but ICE expressed an intention to augment its overstay enforcement resources. From fiscal years 2006 through 2010, ICE reported devoting about 3 percent of its total field office investigative hours to CTCEU overstay investigations. ICE was considering assigning some responsibility for noncriminal overstay enforcement to its Enforcement and Removal Operations directorate, which apprehends and removes aliens subject to removal from the United States. In April 2011, GAO reported that by developing a time frame for assessing needed resources and using the assessment findings, as appropriate, ICE could strengthen its planning efforts. 
Moreover, in April 2011, GAO reported that CTCEU tracked various performance measures but did not have a mechanism to assess the outcomes of its efforts. GAO reported that by establishing such a mechanism, CTCEU could better ensure that managers have information to assist in making decisions. DHS has not yet implemented a comprehensive biometric system to match available information (e.g., fingerprints) provided by foreign nationals upon their arrival in and departure from the United States and faces reliability issues with data used to identify overstays. GAO reported that while the United States Visitor and Immigrant Status Indicator Technology Program’s biometric entry capabilities were operating at ports of entry, exit capabilities were not, and DHS did not have a comprehensive plan for biometric exit implementation. DHS conducted pilots to test two scenarios for an air exit solution in 2009, and in August 2010, GAO concluded that limitations in the pilots, including some not defined in the pilot evaluation plan (such as suspending exit screening at departure gates to avoid flight delays), curtailed DHS’s ability to inform a decision for a long-term exit solution. Further, in April 2011, GAO reported that there is not a standard mechanism for nonimmigrants departing the United States through land ports of entry to remit their arrival and departure forms. Such a mechanism could help DHS obtain more complete departure data for identifying overstays. GAO identified various challenges in the Visa Security and Visa Waiver programs related to planning and assessment efforts. For example, in March 2011, GAO found that ICE developed a plan to expand the Visa Security Program to additional high-risk posts, but ICE had not fully adhered to the plan or kept it up to date. Further, ICE had not identified possible alternatives that would provide the additional security of Visa Security Program review at those high-risk posts that do not have a program presence.
In addition, DHS implemented the Electronic System for Travel Authorization (ESTA) to meet a statutory requirement intended to enhance Visa Waiver Program security and took steps to minimize the burden on travelers to the United States added by the new requirement. However, DHS had not fully evaluated security risks related to the small percentage of Visa Waiver Program travelers without verified ESTA approval. GAO has made recommendations in prior reports that, among other things, call for DHS to strengthen management of overstay enforcement efforts, mechanisms for collecting data from foreign nationals departing the United States, and planning for addressing certain Visa Security and Visa Waiver programs' risks. DHS generally concurred with these recommendations and has actions planned or underway to address them. |
“Reliable data is a necessary ingredient for credible policy and its implementation.” So stated the U.S. Commission on Immigration Reform in 1994 (p. xxxi). But the Commission found that throughout its own inquiry, inadequate data made it difficult to assess the impact of immigration policy and of immigration itself on American society. Dissatisfaction with information on immigration has also surfaced on Capitol Hill. In spring 1996, the Immigration and Naturalization Service (INS) issued a press release with the headline: “U.S. Legal Immigration Down 10.4 Percent in 1995.” The headline is based on a 1994-95 reduction in the number of green cards authorized, and the reduction appears to have been caused mainly by a logjam in INS’ processing of green-card applications. Subsequently, INS and most of the experts testifying at a congressional hearing reported that, in general terms, legal immigration was increasing. At that hearing, it also became apparent that some policymakers and reporters had been confused about whether immigration was increasing or decreasing. Recent dissatisfaction with immigration statistics comes roughly a decade after a thorough review of immigration statistics (Levine et al., 1985) summed up the situation as a history of neglect, and it persists despite some efforts since then to provide better information. It has been a decade during which levels of legal immigration increased while patterns of immigration shifted further away from the European dominance of the early and mid-20th century to heavier flows of Asian and Latin American immigrants. At the same time, illegal immigration emerged as a major concern, and the INS budget increased dramatically, mainly because of more intensive efforts to curtail illegal immigration. (During the 1990s, INS’ budget quadrupled; it is expected to reach $4 billion in fiscal year 1999—up from less than $1 billion in 1992.)
In recent years, the public—particularly California voters—entered the debate about the value and cost of immigration. And a series of major bills—affecting legal immigration, the transition from illegal to legal status, and the public benefits for which immigrants in various statuses are eligible—were introduced and debated in Congress. Debates concerning immigration continue. For example, this year Californians passed a hotly contested proposition to end bilingual education. Also, the computer industry asked Congress to increase the number of temporary visas for high-tech workers. Thus, there is an increased need for valid, reliable, and clear policy-relevant information. Congressman Ed Bryant of Tennessee called for INS to answer direct questions, such as: “How many people—in total, including every category . . . enter the United States each year?” The Commission on Immigration Reform (1994) also issued a call for new methods to be developed to meet some of the difficult challenges in immigration statistics, such as estimating the flow of illegal immigrants into the United States. The resident foreign-born population is defined here as all persons who were born abroad (to parents who were not U.S. citizens) and who now either (1) are in a permanent legal status (naturalized citizen, legal permanent resident, refugee, person granted asylum) or (2) if in a temporary legal status or here illegally, remain in this country for over a year. The requirement for remaining more than a year is based on the U.N. definition. Four basic demographic concepts (categories of statistics) are crucial to understanding information on the foreign-born population: • The inflow—or in this report, flow—refers to the movement of foreign-born persons into the United States and, as explained below, their transition into specific legal statuses. 
• The size of the foreign-born population in the United States (sometimes referred to as “stock”) is the total number of foreign-born persons residing here at any given time, including those who have naturalized. • Net change in the size of the foreign-born population over a specific period of time can be calculated using a demographic balancing equation, accounting for flow, deaths, and emigration (see Bogue et al., 1993; Pollard et al., 1981). The foreign-born population increases when the flow of new residents into the country exceeds the number who emigrate or die. • Emigration refers to persons moving out of the United States to take up residence in a foreign country. In this report, we are concerned with the emigration of only foreign-born persons—not the emigration of persons born in the United States. The foreign-born population may be subdivided into groups, such as those defined by the various legal immigration statuses: legal permanent residents, refugees and asylees, those legally permitted to reside here on a temporary basis, illegal immigrants, and naturalized citizens. Virtually all laws and policies on immigration differentiate foreign-born persons according to their legal status. Thus, from a policy perspective, legal status is critical information. Consistent with this view, in 1994, the U.S. Commission on Immigration Reform, commenting on the need to improve estimates of the costs and benefits of legal and illegal immigration, stated that confusion results from grouping together illegal immigrants, legal immigrants, and refugees. In considering the process of immigration flow, we recognized that transitions from one immigration status to another must be considered. Persons who enter the United States in one immigration status often adjust or change to a different status, and statistics have been reported on this process. Such transitions are a form of “flow” (i.e., flow to a particular status). 
This means that flow can indicate not only new entries into the United States, but also transitions or adjustments to a different legal category or status. We believe that it is important to recognize—and to clearly distinguish between—these two, very different types of flow. Most of the available federal statistical information on the foreign-born is provided by two agencies: the Immigration and Naturalization Service and the Bureau of the Census. INS provides information relevant to flow. INS and Census each provide some information on the size of the resident foreign-born population and net change. Census also provides an estimate of emigration (i.e., the estimated number of foreign-born residents who leave the United States to live in another country). Some additional information is maintained by the Departments of State, Health and Human Services, and Labor. The objectives of this report are (1) to identify policy-related information needs for immigration flow and other key demographic concepts that are relevant to migration; (2) to identify federal statistics on the flow of immigrants (and information gaps) and to determine what is known about the quality of existing statistics on flow; (3) to identify federal statistics relevant to other key demographic categories and to determine what is known about their quality; and (4) to identify strategies for improving immigration statistics. The scope of this report is limited to current federal statistics that provide basic demographic, statistical information on the resident foreign-born population. Current federal statistics are defined here as those published by a federal agency for fiscal or calendar year 1996—the most recent year for which statistics were generally available during the time we collected data from federal agencies (from Nov. 1997 to Apr. 1998). 
With respect to policy-related information needs, our focus is on congressional information needs—that is, the statistical data that can inform congressional debates on immigration issues. In assessing the quality of federal statistics on the foreign-born, we limited our work to determining what is known about their quality, including what can be determined from logical comparisons and analysis. In exploring new strategies, we initially limited our scope to the development of ideas; in one instance, we were subsequently able to conduct a preliminary, qualitative test of a new method for estimating legal status in a census or survey. To identify policy-related information needs, we reviewed recent congressional debates, bills, and laws concerning immigration to determine the kinds of basic demographic, statistical information on the foreign-born needed by congressional policymakers. We also reviewed basic texts on demography, which identify key concepts, and consulted with immigration experts. (App. I lists the immigration experts we consulted.) In reviewing federal agency literature, material from relevant hearings, and laws requiring information on the foreign-born, we found that such information has not been gathered or reported according to a common framework or typology. As noted earlier, there have been instances of confusion in interpreting these kinds of information. Therefore, to identify policy-related information needs concerning the legally and illegally resident foreign-born population, we proceeded through a two-step process, which included • developing a basic typology or set of demographic categories (i.e., a systematic framework defining various types of policy-relevant demographic, statistical information on the foreign-born population and their interrelationships) and • examining, in a general sense, whether these types of information were, in fact, needed by Congress and interested members of the general public. 
We developed our information typology (set of demographic categories) in consultation with immigration experts. We then examined the need for these types of information by reviewing laws requiring demographic, statistical information on the foreign-born, past congressional requests for information, and recurrent congressional activities. We discussed our typology with staff at the Immigration and Naturalization Service. Using our typology, we identified relevant federal statistics and gaps. Briefly, we reviewed literature published by federal agencies and followed up with officials and staff at INS, the Bureau of the Census, and other agencies. We then evaluated the quality of the relevant statistics, considering technical adequacy and timeliness as well as the adequacy with which the information was reported. To guide our work, we developed checklists for statistical quality, based on a review of literature (including federal agency standards, published empirical assessments of statistical quality, and evaluation and statistical texts) and discussions with agency staff. We used these checklists to ensure comprehensiveness in interviewing federal agency staff, experts, and users about statistical quality and in reviewing relevant literature. We then developed quality ratings to describe published statistics in each demographic category as problem-free or as limited by conceptual problems and confused reporting, as overcounts or undercounts, or as uncertain or unevaluated statistics. If no published federal statistic could be identified for a demographic category, the descriptive rating consists of the notation that a gap exists. To identify strategies to improve federal statistics on immigration, we (1) logically analyzed the problems we had identified, (2) talked with agency staff and experts about possible approaches, (3) reviewed literature, and (4) developed our own new strategy for collecting relevant data. 
In particular, based on previous research on survey methods for asking sensitive questions and on demographic methods for estimating illegal immigrants, we devised a new method for interviewing foreign-born respondents and collecting data on their immigration status while protecting privacy—the three-card method. We pretested the three-card method in interviews with foreign-born Hispanics in farmwork settings, at a legal clinic for immigration problems, and at a city “drop-in” center. (These interviews were conducted by members of our staff who are fluent in Spanish.) We then conducted a preliminary test of the acceptability of this method to interviewers and respondents by contracting for 81 interviews with foreign-born farmworkers as a supplement to the National Agricultural Workers Survey (NAWS), debriefing the interviewers who administered the questionnaire, and examining the results for signs of respondent comfort with the series of questions. We conducted our audit work in accordance with generally accepted government auditing standards between June 1997 and June 1998. Preliminary work on the three-card method was conducted earlier. We did not conduct an audit of how the INS and Census data were initially gathered and processed by the agencies. We requested comments on a draft of this report from INS, Census, and the Departments of State, Labor, and Health and Human Services. On July 13, 1998, Census provided written comments, and on July 15, INS provided oral comments at a meeting attended by INS officials including the Director of the Statistics Branch. INS’ and Census’ comments are discussed in relevant sections of chapter 5 and appendix III. INS, as well as State and Labor, provided some technical comments and suggestions for clarification, which we incorporated as appropriate. Health and Human Services reviewed a draft and said that it had no comments. 
Chapter 2 of this report presents our typology of policy-relevant statistical information on the foreign-born population (i.e., the set of demographic categories) and links this typology to policy-related information needs. Chapter 3 identifies and assesses current federal statistical information on immigration flow. Chapter 4 identifies and assesses corresponding information on other key demographic concepts—size of the foreign-born population, emigration, and change in the size of the foreign-born population. Chapter 5 discusses strategies for improvement and makes recommendations to the Commissioner of INS and the Director of the Bureau of the Census. Appendixes provide more detailed information of concern to technical readers. Appendix I lists the immigration experts we consulted. Appendix II briefly reviews available information on the demographic characteristics of foreign-born residents. Appendix III provides information on the three-card method for collecting survey data on legal status and our preliminary test of its acceptability to both respondents and interviewers. Appendix IV reprints the comments from the Bureau of the Census. A basic typology of policy-relevant statistical information on the resident foreign-born population can be defined by combining two dimensions: one consisting of four demographic concepts that are relevant to migration (flow, size of the foreign-born population, net change in size, and emigration) and the other consisting of legal statuses (legal permanent residents, refugees and asylees, persons permitted to reside here on a temporary basis, illegal immigrants, and naturalized citizens). We developed the typology (or set of demographic categories) as a tool for sorting and defining different types of statistical information on the foreign-born population. 
Reviewing specific policy-relevant information needs with reference to our typology, we found that • Congress has passed laws requiring, either generally or specifically, much of the information included in the typology and, in some instances, has indicated that the information is needed to improve decision-making; • Congressional committees have requested some of the information; and • Virtually all of the information is directly or indirectly relevant to various congressional activities (e.g., information on immigration flow is relevant to establishing or changing numerical limits for certain classes of immigrants and temporary visas). To build a policy-relevant framework for types of demographic, statistical information on the foreign-born, we crossed the two dimensions—demographic concept and legal status of the foreign-born, as shown in table 2.1. In doing so, we defined flow with three columns to distinguish new arrivals, transitions to a new status, and total flow to specific legal statuses (e.g., all new LPRs). We defined each of the other demographic concepts—the size of the foreign-born population, net change in size, and emigration—with an individual column. The rows of the table represent major legal statuses. In total, we identified 33 discrete categories, each of which specifies a distinct type of information. What is critical for policy analysts and for users of information is how accurately measures of immigration reflect the actual patterns of immigration (Kraly and Warren, 1992). The typology in table 2.1 represents a step in the direction of measuring actual patterns and reducing confusion. This is because the typology (or set of demographic categories) can be used as a tool for sorting existing statistics and thus determining where gaps exist. It can also help in the interpretation of information by clarifying concepts such as the distinction between two types of flow. 
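The typology’s role as a gap-finder can be sketched in code. The following is a simplified illustration only: the concept and status labels are stand-ins rather than the exact cells of table 2.1 (which defines 33 discrete categories, with some cells merged or excluded), and the two filled-in entries—one actual figure from chapter 3 and one placeholder—are included just to show the mechanics of sorting statistics into cells and surfacing the empty ones.

```python
# Sketch: a concept-by-status typology used to file statistics and surface gaps.
# Labels are simplified stand-ins; the report's table 2.1 defines 33 categories.

concepts = ["new-arrival flow", "status-transition flow", "total flow",
            "population size", "net change", "emigration"]
statuses = ["legal permanent residents", "refugees and asylees",
            "temporary residents", "illegal immigrants", "naturalized citizens"]

# Start with every cell empty, then file in whatever published statistics exist.
typology = {(c, s): None for c in concepts for s in statuses}

# Illustrative entries only (the first is an actual FY1996 figure from ch. 3;
# the second is a placeholder, not a specific published statistic):
typology[("total flow", "legal permanent residents")] = 915_900
typology[("population size", "naturalized citizens")] = "CPS estimate"

gaps = [cell for cell, stat in typology.items() if stat is None]
print(f"{len(gaps)} of {len(typology)} cells lack a published statistic")
```

In this simplified grid, filing in the available statistics immediately shows which concept-status combinations have no published federal figure—the same use to which chapters 3 and 4 put table 2.1.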
The typology also makes plain the interrelationship of different kinds of statistical information and helps clarify which statistics are directly comparable and which are not. A review of major immigration laws indicated that they often include requirements for federal agencies to report information on flow, the size of the foreign-born population, net change in size, and emigration. Notably, the Immigration Reform and Control Act (IRCA) of 1986 requires triennial reports to Congress, which are to describe the number of persons who are legally admitted or paroled into the United States within a specified interval as well as those who illegally enter or overstay temporary visas during that interval. This corresponds to the demographic concept of immigration flow. Note that with respect to illegal immigrants, it is important to distinguish two different types of flow: those who “enter without inspection” (EWIs) and those who overstay temporary visas. The distinction is important from a policy perspective because, for example, stronger border controls would not address the overstay issue. IRCA also requires information on a variety of impacts of immigration, including, for example, the impact on demographics and population size as well as the impact on social services. We note that the impact on population size would logically involve new entries, but not transitions between legal statuses. The impact on social services would occur as a result of both new entries and transitions of legal status—that is, both types of flow—because the foreign-born resident’s specific legal status determines whether or not he or she is eligible for specific benefits. (It is important to note that although information on both types of flow combined is relevant for certain policy purposes, data on combined-flow to a legal status should be treated with caution. 
This is because it is not always clear to what extent each type of flow is represented; e.g., a change in a combined-flow statistic might reflect a change in the number of new entries or a change in the number of transitions to the status in question—or some combination of the two.) With respect to the other relevant demographic concepts—the size of the foreign-born population, net change in size, and emigration—the Immigration Act of 1990 requires information on the alien population of the United States as well as rates of emigration and an analysis of trends. Other laws, while not directly requiring statistical information, nevertheless include mandates that imply such a need. For example, the Illegal Immigration Reform and Immigrant Responsibility Act of 1996 mandates an evaluation of the effort to deter illegal entry into the United States. Such an evaluation would require a variety of statistical information, such as trends in the size of the illegal population (see GAO/GGD-98-21). There are other indications of the kinds of information that Congress has wanted in recent years. For example, the House Judiciary Report accompanying IRCA indicated that the requirement for the triennial reports is intended to enable Congress to review and study immigration and refugee programs and to consider possible changes to them with the benefit of reliable and detailed data. Congressional committees have requested that we provide statistical estimates, such as projections of legal immigration, to help in decisions regarding numerical limits. Another example would be the congressional request for estimates of the number of certain nonimmigrant workers who transitioned to green-card status (see GAO/PEMD-92-17). 
Among the ongoing or recurring congressional activities for which the types of information shown in table 2.1 might be useful are • periodic revisions of numerical limits for LPRs, annual setting of levels of refugee inflow, periodic resetting of limits for certain temporary visas, and annual prioritization of funding and special programs intended to reduce illegal immigration; • periodic redefinitions of (1) the conditions under which illegal immigrants and others can (or cannot) adjust to LPR status and (2) ceilings on the number of asylees who may transition or adjust to LPR status; and • periodic revisions of public benefits available to persons in different immigration statuses, which can in turn influence personal decisions about changing one’s immigration status. In addition, congressional committees have indicated that they wanted information to address issues such as the impact of foreign workers on the U.S. economy and on the working conditions of Americans (see, e.g., GAO/PEMD-92-17 and GAO/HEHS-98-20). There has also been some interest in the trends in the numbers of naturalized citizens, because they have the right to bring in certain relatives. INS’ annual Statistical Yearbook includes several statistics on immigration flow—particularly, statistics on LPR flow (persons with new green-card status), refugees and asylees, and naturalized citizens. But various quality problems limit the utility of these data for policy purposes. • Conceptual problems make a key trend difficult to interpret and valid comparisons of certain reported data difficult to make; confused reporting compounds the conceptual problems. • Administrative data undercount persons granted asylum as well as those attaining naturalized citizenship. • Data gaps occur for key statistics, such as the number of foreign-born persons who take up residence here each year. The relevant statistics and descriptive quality ratings (together with the reasons for our ratings) are presented in table 3.1. 
The INS Yearbook includes several statistics that seemingly fit our typology for information on immigration flow. As shown in table 3.1, these include, for example, figures for the number of new LPRs who entered the United States in fiscal year 1996 (421,405), the number of persons already here who attained new green-card status (494,495), and the total number of new LPRs (915,900). Similar kinds of data are reported for refugees and asylees and for naturalized citizens (see rows B and E of table 3.1). The INS Yearbook also discusses the sources of these data and some of their limitations. Few, if any, relevant data on flow are reported outside the Yearbook. INS’ administrative count of the number of new LPRs combines the flow of (1) new LPR entries to the United States and (2) transitions (or adjustments) to LPR status. But there is a conceptual problem; namely, this statistic represents two different measures, each of which can vary independently. This makes results—particularly for trends—difficult to interpret. The majority of new LPRs are in category 2, transitions to LPR status. That is, they are not new to the United States; as indicated in tables in the INS Yearbook, they have already been living here for years—typically either illegally or as long-term temporary residents. Various factors can raise or lower the number of green cards authorized for such persons, independently of trends in new entries. Two instances show how this can—and has—happened: • The late 1980s amnesty (through IRCA) for illegal immigrants who had lived here for more than 5 years created a sudden major upswing in the trend line for new LPRs because a large group of illegal residents became eligible to apply for green-card status. This increase was unrelated to any change in the number of persons entering the United States. 
• A recent change in law (1994) allowed illegal immigrants living in the United States who qualified for green cards to transition to LPR status without leaving the United States—thus shifting the processing of thousands of cases from the Department of State to INS. Because INS could not immediately handle the additional workload, there was a logjam, or slowdown, in issuing the cards to persons already living here. The logical effect of a slowdown is a decrease in the number of cards authorized (i.e., a downturn in the trend line), independent of any change in the number of persons newly taking up residence in the United States. Subsequently, as INS’ capacity to handle the new workload improves, a speed-up in processing would increase the number of cards authorized for persons already living here, creating an upswing in the combined-flow trend line. INS statistical staff told us that annual trends in the number of new LPRs do not convey a meaningful indication of any demographic concept. They also said it is unclear how to disentangle the effects of processing logjams and catch-ups. Despite these problems, INS has repeatedly highlighted annual trends in “immigrants admitted” (what we term the combined-flow LPR statistic in table 3.1). The 1995 and 1996 INS Yearbooks lead off their introductions with the first highlights of current findings, as follows: • “720,461 persons were granted legal permanent residence status . . . a decrease of nearly 84,000 from the year before” (INS 1995 Yearbook, p. 11). • “915,900 persons were granted legal permanent resident status . . . an increase of more than 195,000 over the year before” (INS 1996 Yearbook, p. 11). In each case, the same page of the Yearbook interprets these trends as either a “decline in immigration to the United States” (1995) or a “rise in immigration to the United States” (1996).
The introduction itself does not mention the recent processing problems or that any of the new immigrants counted were already living here. Readers who turn to the body of the Yearbook will find caveats. However, although tables in the INS Yearbook indicate that the majority of new LPRs were already living in the United States, the Yearbook text indicates only that some were already here. “The majority of immigrants [LPRs] enter the United States as immediate relatives of U.S. citizens or through the preference system, consisting of family-sponsored and employment-based immigrants. These categories combined accounted for 78 percent of all admissions in 1996.” (p. 18, emphasis added.) A nonexpert might infer that “enter the United States” means exactly that, not realizing that the majority of new LPRs were already here. (In other words, some readers might not realize that the statement quoted above is supposed to refer to persons entering LPR status—regardless of whether they are already living here, as the majority are.) Another conceptual problem arises because the Yearbook presents statistics relevant to flow for legal immigration—but an estimate for net change in the illegal population. That is, the 1996 Yearbook reports that nearly 1 million persons achieved green-card status in fiscal year 1996 and that the population of illegals is increasing, on average, by 275,000 per year. These two figures are not comparable, however, because flow is a very different concept than net change. Briefly, the difference in concepts is as follows: • The flow of illegal immigrants refers to new illegal EWIs and overstays who resided here for more than a year. (As shown in table 3.1, these two types of flow can be described separately—and a combined number can be provided.) 
• Net change in the size of the illegal population is calculated, mathematically, as the difference between (1) the flow of illegal residents and (2) legalizations and other exits (emigration, death) from the entire illegal population, as shown in figure 3.1. • Net change may be a positive or negative number. When positive, net change measures the extent to which the flow exceeds legalizations, emigration, and deaths—on the part of the entire population of illegals who were already living here. When legalizations, emigration, and deaths outnumber the entries, net change is negative. Legal flow cannot be validly compared to net change in the illegal population. The fact that the INS Yearbook reports statistics relevant to flow for legal immigration but net change for the illegal population is compounded by the fact that the Yearbook does not discuss demographic concepts. The Yearbook does not clarify the meaning of immigration flow or net change in population size—or the distinction between the two. Reporting flow for legals and net change for illegals, without clearly distinguishing between flow and net change, could lead to misinterpretations; that is, it might invite invalid comparisons. If a comparison of legal and illegal immigration is to be made, the same demographic concept should be used for data on legals and illegals; for example, legal flow should be compared to illegal flow. This is important because numerically, the difference (illegal flow versus net change in illegals) could be great. Large numbers of illegal immigrants transition to green-card status (over 120,000 of those authorized to receive green cards in fiscal year 1996 admit to having entered the United States as EWIs), and the emigration of illegal immigrants may also be large. Thus, the flow of illegal immigrants might—in some years—be considerably larger than the reported net change in the size of the illegal population. 
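The balancing relationship just described can be written out explicitly. In the sketch below, only the 275,000 net-change figure is from the Yearbook; the inflow, legalization, emigration, and death figures are hypothetical, chosen solely to illustrate how a large flow can coexist with a much smaller net change.

```python
# Sketch of the balancing relationship described above:
#   net change = inflow - (legalizations + emigration + deaths)
# Only the 275,000 net-change figure is from the Yearbook; all other
# figures below are hypothetical, chosen for illustration only.

def net_change(inflow, legalizations, emigration, deaths):
    """Demographic balancing equation for the illegal population."""
    return inflow - (legalizations + emigration + deaths)

# Hypothetical annual figures for the illegal population:
inflow = 700_000         # new EWIs + visa overstays remaining over a year
legalizations = 150_000  # e.g., transitions to LPR status
emigration = 250_000
deaths = 25_000

result = net_change(inflow, legalizations, emigration, deaths)
print(result)  # a positive net change, far smaller than the 700,000 flow
```

Under these illustrative assumptions, a flow of 700,000 yields a net change of only 275,000—which is why comparing legal *flow* to illegal *net change*, as the Yearbook’s figures invite readers to do, can mislead.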
We found that INS Yearbook tallies of persons granted asylum are limited to cases processed through INS’ RAPS (Refugee, Asylum and Parole System) data system. Both RAPS and the Yearbook omit asylees whose cases were approved by the Executive Office of Immigration Review (EOIR) rather than INS. Both also omit asylees who enter from abroad after their paperwork is approved by INS and is then processed abroad by either the Department of State or INS—that is, trailing relatives of persons granted asylum (family members “following to join” a principal asylee). Also omitted were other trailing relatives whose cases were processed in the United States by INS (460 during the first four months of fiscal year 1996). Table 3.2 summarizes the count of asylees published in the INS Yearbook and the additional counts that we were able to identify by talking with various staff at the Department of State, EOIR, and INS and by requesting tabulations from various data systems. INS staff told us that in addition to the 460 cases noted above, they estimate that approximately 1,000 more were processed in the United States during fiscal year 1996; however, the number of trailing relatives approved and processed overseas by INS during fiscal year 1996 could not be estimated. In sum, the count of asylees published in the INS Yearbook should be increased—probably by more than 50 percent (i.e., from 18,556 to 28,764 or perhaps to an even higher figure). The undercount is not fully described in the Yearbook, apparently because the administrative processes are complex and it is difficult to identify all cases in which asylum was granted. By contrast, however, we found no biases in the Department of State counts of refugees reported in the INS Yearbook. The number of naturalizations reported by INS (row E of table 3.1) is an undercount of new foreign-born U.S. citizens because the tally excludes most minor children.
That is, as explained in the INS Yearbook, minor children automatically receive citizenship (by derivation) when their parents naturalize. A separate form is not required for these children, and they are not listed on their parents’ forms. INS counts persons listed on forms; a complete count cannot be obtained without revising the existing form. (We also note that trends in naturalization are affected by processing speed-ups and slowdowns similar to those discussed above for LPRs. As of July 1998, the logjam for naturalization applications was estimated to be between 1.6 and 2 million unprocessed applications.) Transitions to LPR status may be undercounted. INS statistical staff told us that in 1996, transitions to LPR status apparently continued to be deflated (to some extent). That is, even after the change in law allowing illegal U.S. residents to transition to green-card status without leaving and reentering, some continued to exit and reenter—preferring the travel to paying the fee required for transitioning without leaving. Although this number may be relatively small, it represents a subtraction from the transitions column and an addition to the new entries column—thus distorting what is reported about patterns of flow to some extent. There is a gap in flow statistics for residents who are admitted with temporary visas (i.e., nonimmigrant visas, such as student visas, temporary work visas, and so forth). The gap occurs because the relevant INS data system does not distinguish newly admitted persons from readmissions of the same person. For example, a foreign student living here for 4 years but visiting his parents briefly every 6 months would be represented as eight short stays—for up to eight individuals—rather than one long stay for a single person. Thus, although durations of stay have been calculated (INS, 1996; Lowell, 1996), they do not correspond to the flow of long-term residents who were admitted with legal temporary status. 
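The student example illustrates a counting problem that is easy to state precisely. The sketch below uses hypothetical records (the report notes that the actual INS data system cannot make this distinction) to show the difference between counting admission events and counting distinct persons.

```python
# Sketch: why admission counts overstate the flow of distinct new residents.
# Records are hypothetical; the report notes INS's system cannot distinguish
# a readmission of the same person from a newly admitted person.

admissions = [
    # (person_id, admission month) -- one student readmitted every 6 months
    ("student-1", "1993-01"), ("student-1", "1993-07"),
    ("student-1", "1994-01"), ("student-1", "1994-07"),
    ("student-1", "1995-01"), ("student-1", "1995-07"),
    ("student-1", "1996-01"), ("student-1", "1996-07"),
    ("worker-2", "1996-03"),
]

event_count = len(admissions)                       # what the data system sees
person_count = len({pid for pid, _ in admissions})  # flow of distinct persons

print(event_count, person_count)  # 9 admission events, but only 2 persons
```

A data system that records only events, as here, reports the student’s 4-year stay as eight separate admissions; computing the true flow requires person-level matching of entries and exits, which is exactly what the problems discussed below have prevented.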
Moreover, problems at INS or with INS contractors (including computer-processing problems and lack of agreement on what constitutes a valid match) have, since 1992, prevented calculations involving matched entries and exits. And with respect to this and other data systems, INS statistical staff told us that there had been no recent audits or evaluations to test the level of error in data processing and data archiving. Such work might reveal, for example, whether double-counting or lost cases occurred. Department of State information on issuances of temporary visas cannot fill this gap for two reasons. First, in some cases, a visa allows multiple entries and exits, whereas other visas allow only a single entry so that a new visa is required for each entry and exit. There is no direct correspondence, then, between the number of visas issued and the number of new residents with visas. Second, a single person can, if qualified, be issued multiple visas in different categories—for example, a temporary work visa and a tourist visa could be issued to the same person on the same day—and it would be very difficult to cross-reference records for statistical purposes. We believe this gap is important because the number of long-term residents living here with temporary visas may be quite large. One analyst (Woodrow, 1998) puts the flow of nonimmigrant residents into the United States at approximately 1 million, perhaps more, in 1990. Under alternative assumptions, possible figures for fiscal year 1996 might range from very roughly 500,000 to 1 million, but the true dimensions are unknown. From the perspective of charting the overall impact of immigration on American society, the most important statistic for which there is a gap may be the total number of new entries who take up residence in the United States each year (total for col. 1 in table 3.1). 
To fill this gap would require dealing with the gaps and biases in the various estimates for new entries in the various legal status categories: LPRs; refugees and asylees; foreign students, temporary workers, and others here legally for a nonpermanent stay; and illegal residents. Indeed, approximate figures developed for lower and upper bounds could differ by as much as one million persons—leaving policymakers still without a proper information base. The Bureau of the Census provides decennial census and intercensal survey data on the size of the foreign-born population and change in size, and it estimates emigration. Although there are no directly relevant administrative records, INS provides some additional information through “composite estimates.” Overall, information on the size of the foreign-born population, change in size, and emigration is limited, as summarized in table 4.1. There are two main reasons for this: • Decennial census and survey estimates apply only to total foreign-born and naturalized citizens (rows E and total in table 4.1). For specific legal statuses, there are gaps and uncertain estimates (rows A through D in table 4.1). Aside from a question on U.S. citizenship, the census and intercensal surveys do not ask about legal status. (One reason is that questions on the respondent’s legal status are very sensitive and might result in biased answers or affect responses to other questions.) INS efforts to fill data gaps without additional data collection efforts have resulted in uncertain estimates. • The data provided by Census for total foreign-born and naturalized U.S. citizens have not been rigorously evaluated. The unevaluated estimates are at least somewhat uncertain because of the questions about adequate coverage of the foreign-born population and other quality issues that have been raised by a number of analysts. Moreover, problems have cropped up in estimating emigration—and also in reliably quantifying net change. 
We also found that the information in table 4.1 cannot be accessed by referring to just one or two publications. Rather, policymakers and other information consumers must first identify and access a variety of sources (including the Internet), then piece results together—with no guide to their comparability. The Alien Address Report Program (an annual registration system maintained by INS) was discontinued in 1981. Until its demise, that system was a source of administrative data on the population of legally resident aliens (i.e., noncitizens). Since all resident aliens were required to register, the number living here would be represented by the number registering—provided that all legal aliens complied. The registration system was discontinued partly for budgetary reasons, but also because not all aliens reported, and the value of the information was unclear. Likewise, administrative systems to record emigration by aliens and U.S. citizens, begun early in this century, were discontinued (in the 1950s) partly because they were believed to underestimate permanent departures. After the Alien Address Report Program was discontinued, the decennial census represented, until recently, the only remaining regularly scheduled collection of data on the size of the foreign-born population. Occasional supplements to the Current Population Survey (CPS) included questions on nativity and citizenship. Then, starting in 1994, such questions were added to the CPS on a regular basis, providing information on the foreign-born population in intercensal years. Neither the census nor the CPS asks about the legal status of noncitizens—or whether they are, in fact, here illegally. There are good reasons for this: such questions fall under the heading of “threatening” survey questions (Bradburn and Sudman, 1979); many respondents might not answer these questions truthfully; and others might avoid participating altogether if they hear that such questions will be asked. 
In addition, the Bureau of the Census is concerned about privacy invasion issues. In an effort to fill data gaps, INS developed “composite estimates” for the number of illegal residents and for the number of legal permanent residents. Although these estimates represent a step forward (because they provide some information that would otherwise not be available), they are necessarily uncertain. That is, it is difficult to determine whether the figures might be underestimates or overestimates and to judge what the magnitude of misestimation might be. INS’ composite estimate of the current number of illegals residing here is based mainly on the following three calculations—each of which is characterized by uncertainty: • First, INS calculates an estimate of illegal “overstays”—persons who entered legally on a temporary basis and failed to depart. These estimates are uncertain for several reasons: INS’ data system does not track many legal entries by Mexicans and Canadians (Department of Justice, 1997); so if such persons overstay, they would not be counted. Although the system records other persons’ legal entries and departures, a substantial portion of the departure data is missing each year, and the assumptions INS uses to differentiate missing departure data from actual overstays are controversial. Moreover, INS made its current (1996) estimate by projecting old overstay estimates forward (Department of Justice, 1997). (Data collected after 1992 are deemed not usable because of computer processing problems and lack of agreement on what constitutes a valid match.) • Second, INS calculates the total number of Mexican illegals by comparing administrative data on Mexican legal immigrants to CPS data on total Mexican foreign-born. 
Here, uncertainty derives from not knowing survey underrepresentation of illegals and from questions about the estimate of emigration (see INS Yearbook). • Third, INS estimates the number of non-Mexican residents who “entered without inspection” (i.e., EWIs from other countries around the world) based on a variety of data, including data from the late 1980s amnesty (that IRCA provided) as well as more recent data on trends in apprehensions. Translating such data into an estimate of the number of current residents necessarily involves assumptions and uncertainty. Turning to the size of the LPR population, this INS estimate is based on an indirect method that includes subtracting the estimated number of illegals (just discussed) from the number of foreign-born aliens (i.e., noncitizens) estimated in the CPS. Thus, the uncertainties regarding the estimate of illegals are necessarily carried over to the estimate of the number of foreign-born residents with green cards. With respect to the INS estimate of net change in the size of the illegal population from year to year (275,000), we note that this estimate is derived by comparing composite estimates for two points in time (October 1992 and October 1996) and dividing the total change into equal amounts of change for each year. Hence, this estimate is marked by the uncertainty of the composite estimates of the size of the illegal population. It also reflects a general level of change rather than depicting current trends. Valuable as the census and CPS data are—or can be—for estimates of the size of the foreign-born population and other demographic concepts, various analysts have raised questions about quality. The Bureau of the Census believes that the foreign-born are less likely to be enumerated than the native-born, but a rigorous evaluation has not been conducted. 
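The two indirect calculations described above, the residual LPR estimate and the annualized net-change figure, amount to simple arithmetic. The sketch below illustrates them; the function names and the 1992 and 1996 population levels are illustrative assumptions chosen only so the annual change reproduces the 275,000 figure cited in the text, not actual INS data.

```python
# Sketch of the two indirect INS calculations described above.
# Population levels are illustrative assumptions, not INS figures.

def lpr_estimate(cps_noncitizen_foreign_born, illegal_estimate):
    """Residual method: LPR population = CPS noncitizen foreign-born
    minus the composite estimate of illegal residents."""
    return cps_noncitizen_foreign_born - illegal_estimate

def annualized_net_change(pop_start, pop_end, years):
    """Total change between two point-in-time estimates, split into
    equal amounts of change for each year."""
    return (pop_end - pop_start) / years

# Hypothetical levels of 3.9 million (Oct 1992) and 5.0 million
# (Oct 1996) reproduce the 275,000-per-year figure cited above.
print(annualized_net_change(3.9e6, 5.0e6, 4))  # 275000.0

# Hypothetical CPS count of 12 million noncitizen foreign-born
print(lpr_estimate(12e6, 5.0e6))  # 7000000.0
```

As the text notes, any error in the illegal-population estimate passes directly through both calculations.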
The Bureau of the Census has carefully evaluated the quality of 1990 census data in the Post Enumeration Survey (PES), but the PES does not distinguish foreign-born residents from native-born. A wide variety of factors have been hypothesized—or identified in indirect analyses and qualitative studies—as contributing to underrepresentation of the foreign-born (see table 4.2). However, it is also possible that reverse errors could occur (Jasso and Rosenzweig, 1987; Schmidley and Robinson, 1998), and in the absence of a rigorous evaluation targeting the foreign-born, the level of net underrepresentation is unknown. For the 1990 census as a whole, however, a rigorous evaluation indicated that undercounting was the more important factor. (For a discussion of the net undercount in the 1990 census and the gross levels of overcounting and undercounting that occurred, see GAO/GGD-91-113.) Underrepresentation is thought to be concentrated in certain groups of the foreign-born, such as newcomers and illegals, who currently constitute perhaps one-fifth of all foreign-born residents. Unofficial estimates by Census staff put the undercount of illegal immigrants at about 33 percent in the 1980 census; Passel (1986) has suggested a range between 33 and 50 percent. For the 1990 census, various analyses put the figure at roughly 20-30 percent (Woodrow, 1991; Van Hook and Bean, 1997; Woodrow-Lafield, 1995). Post Enumeration Survey results are used to adjust the Current Population Survey for misrepresentation of specific groups defined by age, sex, race or Hispanic origin, and state of residence. However, it is not known whether the PES adjustments sufficiently improve representation of the foreign-born. If there were no differences between the coverage of foreign-born and native-born persons of the same age, sex, race, and so forth, the PES adjustments should produce accurate CPS estimates of the foreign-born. 
But as delineated in table 4.1, foreign-born persons may be less likely to be found or identified as residents than native-born persons are, and it is possible that some foreign-born persons may also falsely claim U.S. birth. Thus, despite the PES adjustments, the CPS data could underrepresent the foreign-born. A different issue could contribute to added underrepresentation of the foreign-born in the CPS: survey nonresponse. As in any survey, some sampled CPS households do not respond to the CPS. CPS response rates are calculated (and adjustments to correct for nonresponse are applied) to geographically large areas—on average, about five areas per state. If foreign-born residents are as likely to respond to the CPS as native-born persons who reside in the same area, there is no problem. But this may not be the case; possible reasons for lower response rates among certain groups of foreign-born include interviewer problems communicating with non-Hispanic immigrants; possible distrust of government or strangers among certain groups (those illegally here, asylees from repressive countries); and for some groups of new immigrants, less familiarity with polling. Thus, higher rates of nonresponse among the foreign-born may contribute to underrepresentation. (An analysis of nonresponse in subareas where foreign-born are concentrated would settle the issue.) Turning to estimates of the number of naturalized citizens, the main concern is not with undercoverage (because citizens are likely to be counted) but rather with the potential for overrepresentation because of false claims of citizenship. This issue is currently being disputed in the literature (Passel et al., 1998; Schmidley and Robinson, 1998). We also note that the CPS, which reinterviews respondents over a 16-month period, has thus far asked the citizenship question only at the first interview. Consequently, the CPS estimates omit some of the most recent naturalized citizens. 
Currently, this is important because of large numbers of naturalizations—over 1 million persons were naturalized in fiscal year 1996. The Census Bureau is now considering asking the citizenship question in every CPS interview. Census staff have stated that “The CPS nativity data provide a reliable basis for tracking change in the size of the total foreign-born population at the national level” (Schmidley and Robinson, 1998, p. 17). But various Census staff told us that year-to-year change in the size of the total foreign-born population could—and alternatively that it could not—be reliably measured by CPS data. Current estimates of year-to-year net change in the size of the foreign-born population (see table 4.1, total row) seem imprecise; indeed, the confidence intervals are so broad that the estimates might be deemed too imprecise for policy-making purposes. For example, as shown in table 4.1, the 90-percent confidence interval for 1996-97 net change in the size of the foreign-born population ranges from an increase of under a half million to an increase of about 2 million. In other words, the foreign-born population (24 million) may have increased by as little as 2 percent or as much as 8 percent—in a single year. At our request, Census staff prepared an “annual averages” estimate for the 1996 to 1997 change. The annual averages estimate is, again, an increase of 1.2 million; the confidence interval (800,000 to 1.7 million) is smaller, but still seems imprecise. It is also somewhat troubling that the trends in year-to-year change appear to be volatile—going from near zero in one year to an increase of over a million the next. It is not known whether some degree of real change occurred, whether an artifact caused the result, or whether the very large difference in estimates is simply from sampling error. 
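The imprecision described above follows directly from the cited figures. The short sketch below (the function name is an assumption) converts the 90-percent confidence interval for one year's net change into percent growth of the roughly 24 million foreign-born base, reproducing the 2-to-8-percent range noted in the text.

```python
def ci_as_percent_growth(base_population, ci_low, ci_high):
    """Express a confidence interval for one year's net change
    as percent growth of the base population."""
    return 100 * ci_low / base_population, 100 * ci_high / base_population

# Figures cited above: ~24 million foreign-born; a 90-percent
# interval of roughly 0.5 million to 2 million for 1996-97 change.
low, high = ci_as_percent_growth(24e6, 0.5e6, 2e6)
print(f"{low:.0f}% to {high:.0f}%")  # 2% to 8%
```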
In using 1980 and 1990 census data to estimate emigration of foreign-born persons, analysts at the Bureau of the Census encountered inconsistent results—apparently because of coverage problems. Bureau of the Census analysts attempted to estimate emigration by tracking arrival cohorts—for example, Mexicans who came to the United States to live during the 1970s—across the 1980 and 1990 censuses. The logic was that by observing the extent to which the size of a cohort dwindled between 1980 and 1990 (while accounting for deaths), one could infer the level of emigration. The approach is logical and reflects basic procedures of demographic analysis. But no dwindling was apparent for Mexico and several other countries; instead, cohorts that arrived in the 1970s appeared to grow between 1980 and 1990, thus yielding negative estimates of emigration—a logical impossibility. Census staff determined that data for certain countries were unusable, and emigration rates were calculated only for residents from countries with usable data (such as Spain). These rates were then extrapolated to countries with unusable data (such as Mexico). The result was an estimate of 195,000 emigrants each year. But a number of uncertainties are involved; notably, emigration rates may differ for legals and illegals, and if so, extrapolation from countries such as Spain (with mostly legal entries) to countries like Mexico (with large numbers of illegal entries) would be inappropriate. Census staff have recently determined that—in hindsight—the 195,000 is best interpreted as an estimate of emigration on the part of legal foreign-born residents only. One possible problem with this interpretation is that in calculating the 195,000, the Bureau extrapolated emigration rates to all foreign-born residents—a group that includes some illegals. 
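The cohort-tracking logic described above can be sketched as a residual calculation; the function name, survival rate, and cohort counts below are illustrative assumptions, not the Bureau's actual figures.

```python
def cohort_emigration(count_1980, count_1990, ten_year_survival):
    """Residual emigration estimate: expected 1990 cohort size (after
    accounting for mortality) minus the observed 1990 count. A negative
    result, an apparently growing cohort, signals coverage problems
    rather than real emigration."""
    expected_1990 = count_1980 * ten_year_survival
    return expected_1990 - count_1990

# Usable case: the cohort dwindles beyond mortality, so the
# residual is a positive emigration estimate.
print(round(cohort_emigration(100_000, 80_000, 0.95)))   # 15000
# Problem case (as for Mexico): the cohort appears to grow,
# yielding a negative estimate, which is a logical impossibility.
print(round(cohort_emigration(100_000, 110_000, 0.95)))  # -15000
```

The negative result in the second call is exactly the anomaly that led Census staff to restrict the calculation to countries with usable data.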
A policymaker or interested member of the general public would have to access four disparate sources to obtain the estimates shown in table 4.1: (1) the 1996 INS Statistical Yearbook, (2) the INS Web page, (3) an issue of Current Population Reports, and (4) a Bureau of the Census “working paper” on trends and methodological issues. In no case does any one publication refer to all the others, and we could find no central source pointing the interested person to all four. In some cases, less widely distributed publications are needed to understand the methodological bases of the figures. The lack of a central publication relating the results found in one source to those found in another means that readers must gauge the relative quality and comparability of the various estimates, and their interrelationship, for themselves. In some cases, reporting has not provided the complete information that readers need to judge the quality or stability of the estimates. For example, the Bureau of the Census working paper, entitled “How Well Does the Current Population Survey Measure the Foreign-born Population in the United States,” reports that the nonresponse rate for the CPS is about 6.5 percent, but does not let readers know whether the level of response for communities or areas dominated by foreign-born residents is roughly the same as for other areas. To cite another example, the description of the estimates of illegal overstays in the INS 1996 Statistical Yearbook fails to inform readers that data on overstays have not been available since 1992 and that the 1996 estimates of illegals were achieved by projections of data from earlier years. INS has some initiatives underway to fill data gaps, including an attempt to develop a measure of illegal flow into the United States. In addition, we have identified strategies to improve census and survey data on the size of the foreign-born population. 
These include (1) eliminating data gaps by collecting survey information on immigration status through a less sensitive form of questioning—the three-card method—and (2) achieving greater certainty for estimates of total foreign-born and naturalized citizens through evaluative analyses, and where needed, corrective adjustments. (One response option on the survey card—Box C—reads “Otra categoria que no se encuentra en A o B (especifique),” that is, “Some other category not found in A or B (specify).”) Assuming that the categories are mutually exclusive and exhaustive, it is possible to obtain an estimate of illegals. That is, extending the hypothetical examples above, we would estimate that 75 percent of the foreign-born are in the major legal statuses (35% + 30% + 5% + 5% = 75%). Suppose also that 1 percent picked Box C (some other category). Subtracting these hypothetical estimates from 100 percent yields 100% - 75% - 1% = 24%. In this hypothetical example, an estimated 24 percent did not claim to be in any legal status, and the implication is that 24 percent of the foreign-born population are illegal immigrants. For informed decisions on immigration issues, policymakers need information on immigration flow, by legal status. Separate information is needed on the two different types of flow—new entries into the United States and transitions to new legal statuses—because, for example, Congress sets levels of funding for programs to deter illegal immigrants from coming into the United States and also defines conditions for allowing illegals to transition to legal status. Policymakers also need information on the size of the foreign-born population—again by legal status. And information on emigration helps to gauge the meaning of statistics on immigration flow and on population sizes; that is, it balances information on entries with information on exits, and it indicates the amount of turnover in the resident population. 
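The residual calculation in the hypothetical three-card example above can be expressed in a few lines; the function name is an assumption, and the shares are the hypothetical percentages from the text, not survey results.

```python
def residual_illegal_share(legal_status_shares, other_share):
    """Three-card residual: percent of the foreign-born claiming no
    major legal status and not picking the 'other' box."""
    return 100 - sum(legal_status_shares) - other_share

# Hypothetical shares from the text: 35% + 30% + 5% + 5% across the
# major legal statuses, plus 1% picking Box C (some other category).
print(residual_illegal_share([35, 30, 5, 5], 1))  # 24
```

The estimate is unbiased only under the stated assumption that the categories are mutually exclusive and exhaustive.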
INS records that are maintained for administrative purposes describe the number of new legal permanent residents (green-card holders), new refugees and asylees, and new naturalized citizens. As reported in the Yearbook, however, these statistics are limited by (1) conceptual problems and confused reporting, (2) undercounts, and (3) information gaps. • Annual trends in the number of green cards issued—a potentially key indicator of legal immigration flow—are difficult to interpret because of conceptual problems, and the way they are reported in the Yearbook can confuse readers. Similarly, invalid comparisons of legal flow to data on illegals may occur—again because of conceptual problems exacerbated by confused reporting. • The number of new asylees is an undercount, because the Yearbook tally omits certain categories of persons, such as those who are granted asylum on appeal. The number of persons who newly attained citizenship is also an undercount. • Federal statistics are not available for some categories of immigration flow, such as the number of long-term temporary residents who come to the United States each year. Perhaps most importantly, there is no estimate of the total number of foreign-born persons who take up residence in the United States each year. Turning to relevant demographic concepts other than flow, statistics are reported in a more scattered fashion; indeed, a variety of INS and Bureau of the Census publications, including the INS Web page, must be accessed. The Bureau of the Census provides information on the size of the resident foreign-born population, annual net change in size, and emigration. However, decennial census and survey data on the foreign-born have not been evaluated with respect to coverage, misreporting of nativity, and nonresponse. Moreover, there are no separate data for legal permanent residents, illegal immigrants, or most other statuses. 
(Neither the decennial census nor surveys that target the general population ask questions about foreign-born respondents’ legal status. This is so, in part, because such questions are very sensitive and might result in problems, such as distorted answers to the legal-status question or to other items on the questionnaire.) The inability to differentiate between key subgroups of the foreign-born population is important from a policy perspective because virtually all laws on immigration are based on specific legal statuses. INS has made efforts to fill gaps for some legal statuses by using the limited data that are available and creating assumption-based models. The resulting estimates are necessarily uncertain because assumptions and judgments are substituted for data. We identified or developed strategies that might improve immigration statistics. Specifically, we devised a new method for collecting survey data on the legal status of foreign-born respondents. The “three-card method” asks questions that are less sensitive than a direct question requiring the respondent to state his or her specific legal status. It ensures absolute privacy of response and requires no unusual interview procedures. Yet this method allows statistically unbiased survey estimates for all major legal statuses. A preliminary qualitative test of the new method indicated that no one refused to answer the questions. The test population consisted of farmworkers, and although the test was not designed to make statistical estimates, farmworkers’ answers were consistent with an interview population that contains a high proportion of illegal immigrants. Thus, the new method appears to show promise and to merit further testing and development. We also identified strategies for evaluating survey data on the foreign- born. For example, if a household does not respond to a survey, it is not known whether the residents are foreign-born. 
Nevertheless, levels of nonresponse can be compared across communities or areas that are known to differ in terms of nativity (based on decennial census data or on the nativity of those who did participate in the survey). To help correct undercounts, eliminate conceptual problems, and where possible, fill gaps for information on immigration flow, we recommend that the Commissioner of INS (1) evaluate and, where feasible, improve data on flow and (2) utilize an effective information typology (either the one put forward in table 2.1 or an alternative designed by INS) to clearly distinguish different demographic concepts and to determine which statistics can fairly be compared to others. To eliminate confused reporting of data and estimates concerning immigration flow, we recommend that the Commissioner of INS more clearly report information about trends in legal immigration flow and about the difference between the concepts of flow and net change in the INS Yearbook—or develop a new reporting format that communicates effectively to policymakers and interested members of the general public. To reduce the uncertainty associated with statistical estimates of relevant demographic concepts other than immigration flow, fill information gaps for specific legal statuses, and address fragmented reporting, we recommend that the Commissioner of INS and the Director of the Bureau of the Census together • devise a plan of joint research for evaluating the quality of census and survey data on the foreign-born; • further develop, test, and evaluate the three-card method that we devised for surveying the foreign-born about their legal status; and • either publish a joint report or closely coordinate reports that present information on population size, net change, and emigration. INS indicated that it is currently working to improve the clarity of statistical reporting in the Yearbook and that it finds the typology very useful. 
INS also indicated that attempts to fill certain information gaps may be limited by inherent difficulties and cost considerations. In response to this concern, we added the phrase “where feasible” to the relevant recommendation. With respect to our recommendation concerning the three-card method, INS made two comments: • First, INS suggested that because it is not an expert in survey methodology, its appropriate role would be limited to providing support and consultation to Census in that agency’s efforts to develop, evaluate, and test the new method. We believe that, as stated above, the recommendation for joint INS-Census work allows latitude for INS and Census to determine their appropriate roles. • Second, INS indicated that it would need an independent evaluation of the three-card method before committing funds to the method’s development. We agree that INS’ obtaining an independent evaluation of the method before proceeding with further development would be prudent. The Bureau of the Census provided written comments, which raised no objections to our findings on data gaps and the quality of federal statistics on immigration. The Census Bureau also did not object to our recommendation that Census improve reporting and further evaluate existing data on the foreign-born. However, the Census Bureau stated a concern about its involvement in a survey designed to obtain information on the legal status of the foreign-born. Specifically, Census is concerned that even with the privacy protections of the three-card method, such data collection might compromise the trust and cooperation of the public. Our recommendation is only that the Census Bureau be involved in the development, testing, and evaluation of the new method—not necessarily in any resulting survey. We believe that Census would bring essential expertise to designing and overseeing this work. Testing—even large-scale testing—need not involve data collection by the Census Bureau. 
We have not revised our recommendation, but in the interest of clarity, we modified appendix III to indicate that contractors or other federal agencies might be used for actual data collection involving the three-card method.

Pursuant to a congressional request, GAO identified: (1) policy-related information needs for immigration statistics; (2) federal statistics (and information gaps) on the full range of demographic concepts relevant to immigration policy decisions, including what is known about the quality of those statistics; and (3) strategies for improving statistics. GAO noted that: (1) Congress periodically makes decisions about numerous immigration policies; (2) thus, informed decisionmaking by congressional committees and members of Congress, as well as interested members of the general public, requires information on immigration flow, by legal status; (3) Congress also decides on the eligibility of the foreign-born for government benefits and services--with different benefits typically allowed or restricted for different categories of the foreign-born population; (4) GAO identified 33 discrete categories of demographic information that could be relevant to congressional decisionmaking; (5) information on immigration flow is reported in annual Immigration and Naturalization Service (INS) Statistical Yearbooks; (6) statistics on demographic categories other than flow are reported in a more scattered fashion; indeed, a variety of INS and Bureau of the Census publications, including the INS Web page, must be accessed in order to retrieve basic information; (7) INS records that are maintained for administrative purposes are the basis for most federal statistics on flow; (8) these statistics describe the number of new legal permanent residents, new refugees and asylees, and new naturalized citizens; (9) as reported in the INS Yearbook, however, these statistics are limited by conceptual problems and confused reporting, undercounts, and information gaps; (10) 
the number of new asylees--persons granted asylum--and the number of persons granted citizenship are undercounted in the Yearbook tallies because the data omit certain groups of persons; (11) statistics for other demographic categories are not available; (12) while Census provides some information on the size of the resident foreign-born population, annual net change in size, and emigration, Census has not quantitatively evaluated these data with respect to coverage, accuracy of reported place of birth, or nonresponse rates; (13) there are no separate Census data on legal status because none of the surveys ask questions about legal status; (14) INS has made efforts to fill information gaps for some legal statuses by using the limited data that are available and creating assumption-based models; (15) GAO attempted to identify existing strategies or develop new ones to improve immigration statistics; (16) GAO devised a new method for collecting survey data on the legal status of foreign-born respondents; and (17) GAO also identified strategies for evaluating survey data on the foreign-born.
The CFO Act of 1990 requires DOD and other agencies covered by the act to improve their financial management and reporting operations. One of its specific requirements is that each agency CFO develop an integrated agency accounting and financial management system, including financial reporting and internal controls. Such systems are required to comply with applicable principles and standards and provide for complete, reliable, consistent, and timely information needed to manage agency operations. Beginning with fiscal year 1991, the CFO Act required agencies, including the Navy, to prepare financial statements for their trust and revolving funds, and for their commercial activities. The CFO Act also established a pilot program under which the Army and Air Force, along with eight other federal agencies or components, were to test whether agencywide audited financial statements would yield additional benefits. The Congress concluded that agencywide financial statements contribute to cost-effective improvements in government operations. Accordingly, the Government Management Reform Act of 1994 made the CFO Act’s requirements for annual audited financial statements permanent and expanded them to include virtually the entire executive branch. Under this legislative mandate, DOD is to annually prepare and have audited DOD-wide and component financial statements beginning with fiscal year 1996. The Office of Management and Budget (OMB) has designated Navy and the other military services as “components” that will be required to prepare financial statements and have them audited. Because the Navy was not one of the pilot agencies, fiscal year 1996 was the first year for which it was required to prepare agencywide financial statements for its general funds. 
In October 1990, the Federal Accounting Standards Advisory Board (FASAB) was established by the Secretary of the Treasury, the Director of OMB, and the Comptroller General to consider and recommend accounting standards to address the financial and budgetary information needs of the Congress, executive agencies, and other users of federal financial information. Using a due process and consensus-building approach, the nine-member Board, which, since its formation, has included a member from DOD, recommends accounting standards for the federal government. Once FASAB recommends accounting standards, the Secretary of the Treasury, the Director of OMB, and the Comptroller General decide whether to adopt the recommended standards. If they are adopted, the standards are published as Statements of Federal Financial Accounting Standards (SFFAS) by OMB and by GAO. In addition, the Federal Financial Management Improvement Act of 1996, as well as the Federal Managers’ Financial Integrity Act, requires federal agencies to implement and maintain financial management systems that will permit the preparation of financial statements that substantially comply with applicable federal accounting standards. For fiscal year 1996, the Navy prepared two separate sets of statements: one for its operations financed with general funds and another for operations financed using funds provided through the Defense Business Operations Fund (DBOF). The Defense Finance and Accounting Service-Cleveland Center supported the Navy in preparing the fiscal year 1996 financial statements for activities financed by general funds and DBOF. The Navy’s general fund financial statements encompassed those operations financed through 24 general fund accounts. These general funds included moneys the Congress appropriated to the Navy to pay for related authorized transactions for periods of 1 year, multiple years, or on a “no-year” basis. 
The Navy’s DBOF business activities are financed primarily through transfers from the Navy’s Operations and Maintenance appropriations, based on the costs of goods and services to be provided. The Navy has historically operated many supply and industrial facilities using a working capital fund concept. In fiscal year 1996, the Navy’s business activities comprised the largest segment of DOD’s support operations financed through DBOF. The DOD Inspector General delegated responsibility for auditing Navy’s fiscal year 1996 financial statements to the Naval Audit Service. By agreement with the DOD Inspector General, the Naval Audit Service’s fiscal year 1996 audit encompassed two separate efforts, both limited to the Navy’s Statement of Financial Position and related footnotes. The audit resulted in one set of reports focused on the Navy’s financial statement reporting for its operations financed using general funds and one overall report summarizing the results of its review of the Navy’s DBOF-financed operations. The set of general fund reports included an overall auditor’s opinion report, an overall report on internal controls and compliance with laws and regulations, and eight other more detailed supporting reports. Appendix I shows the status of Navy entities’ financial statement audits in fiscal year 1996. Appendix II provides a complete listing of the Naval Audit Service reports issued as a result of its fiscal year 1996 financial statement audit efforts. 
The objectives of this report were to (1) analyze the extent to which financial deficiencies detailed in the auditors’ reports may adversely impact the ability of Navy and DOD managers and congressional officials to make informed programmatic and budgetary decisions, (2) provide examples of other issues of interest to budget and program decisionmakers that can be identified by reviewing the financial statements, and (3) describe the additional financial data that, if complete and accurate, could be used to support future decision-making when the Navy implements accounting standards that are effective beginning with fiscal years 1997 and 1998. To accomplish these objectives, we obtained and analyzed the Naval Audit Service’s opinion report and other supporting reports resulting from its examination of the Navy’s fiscal year 1996 financial statements to identify data deficiencies and determine their actual or potential impact on Navy programmatic or budgetary decision-making. To do this, we compared the Naval Audit Service’s audit results with the findings and related open recommendations in our previous reports that discuss the implications of Navy’s financial deficiencies. We also obtained additional details on the Naval Audit Service’s findings through discussions with cognizant Naval Audit personnel, and we discussed the status of our previous findings and recommendations with cognizant Navy and DFAS personnel. Further, we independently reviewed Navy’s financial statements to identify other issues of interest to budget and program decisionmakers, particularly those areas that may indicate the need for future budget resources or that may provide the opportunity to reduce resource requirements. Finally, we analyzed recently adopted federal accounting standards to identify areas where Navy program and budget managers will have additional useful information available to support decision-making, if the standards are effectively implemented as required. 
Our work was conducted from December 1997 through February 1998 in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from the Secretary of Defense or his designee. On March 9, 1998, the Principal Deputy Under Secretary of Defense (Comptroller) provided us with written comments, which are discussed in the “Agency Comments and Our Evaluation” section and are reprinted in appendix III. To an even greater extent than the other military services, the Navy has been plagued for years by troublesome financial management problems involving billions of dollars. For example, our 1989 report on the results of our examination of Navy’s fiscal year 1986 financial reporting detailed numerous problems, such as understating the value of Navy’s assets by $58 billion, that we attributed to carelessness and the failure to perform required rudimentary supervisory reviews and reconciliations. Seven years later, we found that such problems persisted. In our report on the Navy’s fiscal year 1994 financial reporting, we reported that the Navy had not taken advantage of the 5 years that had passed since the enactment of the CFO Act or the experiences of its counterparts, the Army and the Air Force, in preparing financial statements. Our report identified a minimum of $225 billion of errors in the $506 billion in assets, $7 billion in liabilities, and $87 billion in operating expenses reported to the Department of the Treasury in the Navy’s fiscal year 1994 consolidated financial reports. Consequently, we concluded that the Navy and DFAS had to play “catch up” if they were to successfully prepare reliable financial statements on the Navy’s operations. Most recently, the Naval Audit Service’s April 1997 report on the results of its audit of the Navy’s fiscal year 1996 financial reporting disclosed that errors, misstatements, and internal control weaknesses continued. 
A number of the financial data and control deficiencies disclosed in the Naval Audit Service’s reports not only adversely affect the reliability and usefulness of the Navy’s financial reporting but also have significant programmatic or budgetary implications. Our analysis of the auditors’ reports, along with additional examples from our own audit work, is provided in the following sections. The Naval Audit Service report on the results of its financial audit of the Navy’s fiscal year 1996 financial statements disclosed numerous problems with inventory data reported by the Navy, including the following. “The Department of the Navy did not report an estimated $7.8 billion in Operating Materials and Supplies items aboard ships or with Marine Corps activities on the FY 1996 Statement of Financial Position.” We previously reported that DOD has spent billions of dollars on inventory that is not needed to support war reserve or current operating requirements and burdened itself with managing and storing the unneeded inventory. The financial reporting error disclosed by the Naval Audit Service has implications for the budget process because the inventory data used both for the financial statements and as the starting point for the Navy’s process to develop budget requests for additional inventory are incomplete. A Stratification Report is used to prepare data on the quantity and value of the Navy’s inventories, such as operating materials and supplies, included in the Navy’s financial statements. It is also used as the starting point to forecast budget requirements for inventories that will be needed in supply warehouses. To determine Navy-wide inventory requirements, responsible managers must also have accurate, reliable information on the quantities of inventories on ships, including any quantities in excess of needs. 
However, the auditors found that information on $7.8 billion in inventories, including those on board ships, was not included in the Navy’s year-end financial statements. This lack of Navy-wide visibility over inventories substantially increased the risk that Navy may have requested funds to obtain additional unnecessary inventories because responsible managers did not receive information that excess inventories were already on hand in other locations. This happened in the past, as discussed in our report on financial audit work we performed to help the Navy prepare for the fiscal year 1996 audit. We found that for fiscal year 1994, the Navy’s inventory item managers did not have adequate visibility over $5.7 billion in operating materials and supplies on board ships and at 17 redistribution sites. Approximately $883 million of these inventories were excess to current operating allowances or needs. For the first half of fiscal year 1995, inventory item managers had ordered or purchased items for some locations that had been identified as excess at other locations and thus were already available. As a result, we identified unnecessary spending of at least $27 million. Further, a review of inventory item managers’ forecasted spending plans for the second half of fiscal year 1995 and fiscal years 1996 and 1997 found that planned purchases of items already available in excess at other locations could result in the Navy incurring approximately $38 million of unnecessary costs. Our recent discussions with Navy officials confirmed that as of December 1997, the process used to accumulate inventory status information still did not provide inventory managers complete information on operating material and supplies inventories, particularly information on the quantities of Navy operating and supply inventories on ships. As a result, the Navy’s budget requests for inventory may continue to not accurately reflect its needs. 
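The ordering problem the auditors describe lends itself to a simple illustration. The sketch below uses hypothetical item quantities and location names (not actual Navy data) to show how an item manager with full visibility would net a purchase request against excess stock already reported at other locations before buying new inventory.

```python
# Hypothetical sketch (illustrative quantities, not Navy data):
# net a purchase request against excess stock already reported at
# other locations before buying new inventory.
def net_requirement(requested_qty, excess_by_location):
    """Return (transfers from excess locations, quantity still to purchase)."""
    remaining = requested_qty
    transfers = {}
    for location, excess in excess_by_location.items():
        if remaining == 0:
            break
        take = min(excess, remaining)
        if take:
            transfers[location] = take
            remaining -= take
    return transfers, remaining

# A request for 100 units when 70 excess units exist elsewhere:
# 70 units are transferred and only 30 need to be purchased.
transfers, to_buy = net_requirement(100, {"ship_A": 40, "depot_B": 30})
print(transfers, to_buy)  # {'ship_A': 40, 'depot_B': 30} 30
```

Without the visibility step, the manager in this example would buy all 100 units while 70 sat unused elsewhere, which is the pattern behind the unnecessary spending discussed above.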
The Naval Audit Service’s fiscal year 1996 audit report stated the following. “The Department of the Navy could not effectively account for the balance in the Fund Balance with Treasury because Defense Finance and Accounting Service - Cleveland Center had not developed an adequate accounting system to do so. Consequently, the Department of the Navy cannot provide reasonable assurance that: (1) the $64.8 billion account balance reported on the FY 1996 Statement of Financial Position presents fairly its financial position, or (2) transactions that could cause Antideficiency Act violations would be detected as required by Department of Defense guidance. Defense Finance and Accounting Service principally used Department of the Treasury data in reporting the Fund Balance with Treasury because the data was considered more reliable than the data provided by the Navy’s accounting systems. Department of Defense guidance requires that the Fund Balance with Treasury be supported by records of the entity.” This situation is similar to an individual not being able to reconcile his or her checkbook register to the monthly statement received from the bank. Just as with an individual’s checkbook, reconciliations are necessary to ensure that any differences are identified, the cause researched, and appropriate corrective action taken. Such reconciliations allow the individual to identify not only clerical errors but potential fraudulent misuse of his or her account. For example, blank checks can be stolen and forged and the amounts on otherwise legitimate checks can be altered. The potential consequences of the lack of regular reconciliations are increased dramatically for the Navy given that the agency reported $63 billion in fiscal year 1996 general fund expenditures and also has had continuing problems in properly recording billions of dollars of transactions. 
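The checkbook analogy can be made concrete with a minimal reconciliation sketch. The transaction IDs and amounts below are hypothetical, not actual Navy or Treasury figures; the point is only the mechanism: compare the entity’s own records against Treasury’s and surface anything that appears in one set but not the other, or in both with different amounts.

```python
# Hypothetical sketch (illustrative transaction IDs and amounts):
# a checkbook-style reconciliation of an entity's own disbursement
# records against the Treasury's records for the same account.
def reconcile(agency_records, treasury_records):
    """Each record set maps a transaction ID to a dollar amount."""
    only_agency = {k: v for k, v in agency_records.items()
                   if k not in treasury_records}
    only_treasury = {k: v for k, v in treasury_records.items()
                     if k not in agency_records}
    amount_differs = {k: (agency_records[k], treasury_records[k])
                      for k in agency_records.keys() & treasury_records.keys()
                      if agency_records[k] != treasury_records[k]}
    return only_agency, only_treasury, amount_differs

agency = {"T1": 500, "T2": 1200}              # T3 was never recorded
treasury = {"T1": 500, "T2": 1250, "T3": 75}  # T2 amounts disagree
print(reconcile(agency, treasury))
# ({}, {'T3': 75}, {'T2': (1200, 1250)})
```

Each flagged item is the starting point for the research and corrective action the report describes; without the reconciliation, both the unrecorded transaction and the amount discrepancy would go undetected.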
The lack of complete records for all disbursements and regular reconciliations can also result in the Navy spending more funds than it has available. Federal agencies are required to record obligations as legal liabilities are incurred and make payments from the associated appropriations within the limitations established by the Congress. To the extent that the Navy does not properly record all its disbursements, its ability to ensure that it will have enough funding available to pay for its expenses will continue to be adversely affected. This is similar to an individual not properly maintaining his or her checkbook register by neglecting to record checks written and, at the end of the month, finding that the account is now overdrawn. As noted by the auditors, the lack of controls over the Fund Balance with Treasury may result in Antideficiency Act violations. In addition, in our March 1996 report, we disclosed that problems in keeping records on Navy’s disbursements resulted in understating by at least $4 billion the federal government’s overall budget deficit reported as of June 30, 1995. In the current environment, such errors could make the difference between the federal government reporting a budget deficit or surplus. The extensive problems identified in the Navy’s disbursement process also resulted in erroneous and duplicate payments to vendors, as stated in the auditors’ report. “Defense Finance and Accounting Service Operating Locations processed 110 duplicate or erroneous vendor payments for the Department of the Navy. 
Of these, 62, valued at $2.5 million, had not been previously identified for collection....The improper payments were the result of input errors, failure to conduct reviews, ambiguous reports, and improper processing of invoices....The $2.5 million in duplicate or erroneous payments we identified and the Operating Locations collected represent funds that can be put to better use.” The auditors’ findings were based on a limited judgmental sample of about 400 payments out of a universe of about 1.2 million payments Navy made during fiscal year 1996. DOD officials informed us that subsequent investigation showed that not all of the $2.5 million represented duplicate or erroneous payments that could be put to better use. However, the Naval Audit Service has not yet validated these results. Nonetheless, the control weaknesses identified, along with our previous work on DOD’s long-standing problems with overpayments to contractors and vendors, suggest that significant additional, undetected erroneous payments likely exist. Most recently, we reported that for fiscal years 1994 through 1996, contractors returned checks to DFAS totaling about $1 billion a year. These related to payments from the Navy, the other military services, and other Defense agencies. For the first 7 months of fiscal year 1997, DFAS’s Columbus Center received checks returned by contractors totaling about $559 million. DOD’s reliance on contractors to identify these overpayments substantially increases the risk that it is incurring unnecessary and erroneous costs. Because of our continuing concerns with control breakdowns in the contract payment area across the department, we have continued to monitor this area as one of the high-risk federal areas most vulnerable to waste, fraud, abuse, and mismanagement. 
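As a rough illustration of one control the auditors found missing, the sketch below (hypothetical vendor and invoice data, not drawn from the audit sample) flags candidate duplicate payments by grouping on vendor, invoice number, and amount. In practice, flagged matches would still require manual review before any collection action, as the DOD follow-up on the $2.5 million figure illustrates.

```python
# Hypothetical sketch (illustrative vendor and invoice data): flag
# candidate duplicate payments by grouping on the full payment key.
from collections import defaultdict

def flag_duplicates(payments):
    """payments: list of (vendor, invoice_no, amount) tuples.
    Returns groups of indices that share the same payment key."""
    seen = defaultdict(list)
    for i, payment in enumerate(payments):
        seen[payment].append(i)
    return [indices for indices in seen.values() if len(indices) > 1]

payments = [
    ("Acme Supply", "INV-001", 2500.00),
    ("Acme Supply", "INV-001", 2500.00),  # the same invoice paid twice
    ("Baker Parts", "INV-777", 130.50),
]
print(flag_duplicates(payments))  # [[0, 1]]
```

An exact-match key like this catches only the simplest duplicates; the input errors and invoice-processing failures cited by the auditors would call for fuzzier matching and review of the underlying documents.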
By establishing DBOF in 1991, the Department of Defense intended to focus management attention on the total costs of its businesslike support organizations to help manage these costs more effectively. DBOF was modeled after businesslike operations in that it was to maintain a buyer-seller relationship with its military customers, primarily the Navy and the other military services. DBOF-funded operations were to operate on a break-even basis by recovering the current costs incurred in conducting its operations, primarily from operations and maintenance funding provided by its customers. The Naval Audit Service reported a number of serious financial deficiencies in its fiscal year 1996 review of Navy’s DBOF activities. “[I]nternal controls were not adequate to detect or prevent errors. For example, inventory records were inaccurate; fixed assets were not capitalized or depreciated properly; depreciation on fixed assets at closing activities was not included on financial statements; payables were not always processed accurately or timely; accruals were inaccurate because of lack of reconciliations; liabilities were inaccurate because of untimely processing and bookkeeping errors; and Military Sealift Command financial accounting information was inaccurate due to inadequate general ledger and subsidiary ledger controls and accounting records.” The following examples of data deficiencies, when considered along with the Naval Audit Service’s overall assessment of material weaknesses in the Navy’s DBOF operations, have an adverse effect on the Navy’s ability to reliably determine DBOF’s net operating results. These financial deficiencies adversely affect not only the Navy’s DBOF financial reporting but also its ability to achieve the goal of operating on a break-even basis. Reliable information on the DBOF’s net operating results is a key factor in setting the prices DBOF charges its customers. 
As a result of the problems pointed out by the Naval auditors, neither DOD nor congressional officials can be certain (1) of actual DBOF operating results and (2) whether the prices DBOF charges its customers are reasonable for the goods and services provided. Our recent reporting demonstrates the Navy’s continuing problems in achieving the goal of operating its businesslike activities on a break-even basis. For example, in March 1997, we reported that DBOF management’s inability to stem continuing losses occurred as a result of, among other factors, inaccurate accounting information concerning the Fund’s overhead costs. More recently, in an October 1997 report, we determined that because one of the Navy’s DBOF business areas did not require its customers to pay for all storage services provided—as is the common practice in most businesslike operations—customers had no incentive to either relocate or dispose of unneeded ammunition and thereby reduce their costs. To the extent that the Navy’s DBOF operations incur losses, future appropriations may be required to cover those losses. DOD officials informed us that they used these financial statements and related audit report findings in their efforts to reduce costs and streamline the Navy’s ordnance business area. Specific examples of problems identified by Naval Audit Service auditors in its fiscal year 1996 financial review of the Navy DBOF included the following. A sample comparison of inventory records and on-hand stock revealed that quantities actually in storage differed from inventory records about 22 percent of the time. The auditors reported that management took action to correct the data deficiencies it reported and that action was underway to correct the systemic causes for the discrepancies identified. In discussing the possible implications of its findings, the Navy auditors reported that “Inaccurate inventory records distort financial records and financial reports used by senior managers. 
This, in turn, can result in decisions to buy wrong quantities, which could cause excesses or critical shortages of material.” Depreciation expenses associated with fixed assets at one location were understated by a net amount of about $5 million. This occurred primarily because of a misinterpretation of guidance on reporting depreciation expenses incurred during the year on assets that were to be transferred from that location before the end of the fiscal year. While it did not quantify the extent of depreciation expense understatements, the Naval Audit Service also reported that additional reviews revealed that at least eight other locations also misinterpreted the guidance. In reporting on the implications of this deficiency, the Naval Audit Service stated, “Failure to report depreciation at closing activities understates current year costs and prior year losses that could be eligible for recoupment from Operation and Maintenance, Navy funds . . . . Ultimately, costs that are not recouped will have a direct effect on the cash position of the Department of the Navy Defense Business Operations Fund.” This means that to the extent that the Navy was undercharged as a result of the depreciation understatement, the Navy would have more Operation and Maintenance funds available than it should. The Navy’s DBOF maintained over 2,300 flatracks (containers used to transport Army cargo on Navy ships) solely for the benefit of the Army but did not recover the related estimated costs. The auditors reported that the costs to maintain these flatracks “should have been funded by Operation and Maintenance, Army funds. As a result of the failure to collect reimbursement, the Department of Navy used Operation and Maintenance, Navy funds to support the Army requirements. 
The funds used were estimated to be $640,000 for Fiscal Year 1997, and taking corrective action could result in the Department of the Navy putting $4.1 million to better use over a 6-year period.” Although this situation did not affect the federal government’s overall financial position, this means that the Navy augmented Army budgetary resources by paying for a service that should have been paid with Army funds. The Navy’s DBOF accounting records included at least $5.8 million in invalid “Other Non-Federal (Governmental) Liabilities.” The auditors reported that “Invalid liabilities cause funds to be unnecessarily set aside either to pay invoices already paid or to plan for costs not yet incurred. Therefore, this $5,793,496 represents potential funds that can be put to better use.” This means that the Navy’s operation and maintenance appropriation requirements are less than previously recognized because the Navy will not be required to pay these “invalid liabilities.” Despite the shortcomings in the Navy’s financial statements, we were able to identify several financial issues that may be of interest to budget and program managers. Specifically, even with the acknowledged deficiencies in the Navy’s financial data, some areas raise questions about whether future budget resources may be needed or whether there may be opportunities to reduce resource requirements. The following are examples of footnote disclosures and the kind of information that can be gleaned from them. Figure 1 provides excerpts from the note intended to explain how the accounts receivable balance presented on the Statement of Financial Position was calculated. Accounts receivable, which represents amounts owed the Navy, is significant to program managers and budget officials. If the amount is overstated, the Navy may not receive amounts that it intended to use to support its operations and may therefore need to obtain additional funding. 
If the amount is understated, the Navy may lack the visibility necessary to ensure that it is taking appropriate action to collect all amounts due it. For example, the table shows a 14.5 percent allowance for appropriation 1453 (military personnel). This means that nearly 15 percent of the funds Navy personnel owed the Navy were not likely to be collected. In some cases, better and more timely collection of these types of receivables may result in the recovery of amounts that could be used to reduce the Navy’s request for funds to support its military personnel or provide funds to meet other critical resource needs. The note also refers to negative governmental non-entity receivables of $26.7 million. A negative receivable is an unusual disclosure, indicating that the Navy does not know the source of almost $27 million it collected. These funds cannot be used until the source of the collection is determined. If these collections are owed the Navy, recording them improperly and not taking timely action to collect these amounts may have resulted in requests for budgetary resources when these collections could have been used to meet those requirements. Figure 2 shows excerpts from the note that provides information on over $4 billion of cancelled appropriations that the Navy reopened in fiscal year 1996. The note does not clearly indicate how much or for what purpose the cancelled accounts were used. The Congress has long-standing concerns with agencies’ use of funds after their expiration. In 1990, the Congress determined DOD was expending funds from expired accounts without sufficient assurance that authority for such expenditures existed or in ways that the Congress did not intend. To end these abuses, the Congress enacted account closing provisions in the fiscal year 1991 National Defense Authorization Act. The act closes appropriations 5 years after the expiration of their availability for obligation. 
Once closed, the appropriations are not available for obligation or expenditure for any purpose. In a series of decisions, the Comptroller General has stated, however, that agencies may adjust their accounting records for closed appropriations to record transactions that occurred but were not recorded before closure and to correct obvious clerical mistakes within a reasonable period of time after closure. For example, if an agency discovers, after an appropriation closes, that it had failed to record a disbursement that it had properly made from an appropriation before closure, the agency is expected to adjust its accounting records to reflect that disbursement. Further details would be necessary to assess the implications of the Navy’s note regarding the “reopening” of $4 billion in cancelled appropriations. This information may be related to the Navy’s continuing problems in accounting for its disbursements and may indicate a weakening in the mechanism put in place by the Congress to ensure control over cancelled appropriations. Navy’s fiscal year 1996 Statement of Financial Position includes about $61 billion in “Unexpended Appropriations.” Note 1R of the financial statements defines unexpended appropriations as “amounts of authority which are unobligated and have not been rescinded or withdrawn and amounts obligated but for which neither legal liabilities for payments have been incurred nor actual payments made.” Note 20, as shown in figure 3, disclosed that at the end of fiscal year 1996, Navy had an unobligated balance available of about $13 billion and about $45 billion in undelivered orders, which represent amounts obligated but not expensed. These amounts, along with the $3 billion in unavailable unobligated appropriations included in the note, tie back to the $61 billion reported in the financial statements. 
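The tie-back described in note 20 can be checked arithmetically. The sketch below uses the rounded amounts reported above (in billions of dollars) to confirm that the footnote components sum to the unexpended appropriations line on the Statement of Financial Position.

```python
# Rounded amounts disclosed in note 20, in billions of dollars.
components = {
    "unobligated balance available": 13,
    "undelivered orders (obligated but not expensed)": 45,
    "unavailable unobligated appropriations": 3,
}
reported_unexpended_appropriations = 61  # Statement of Financial Position line

# The footnote components should tie back to the reported line.
total = sum(components.values())
assert total == reported_unexpended_appropriations
print(total)  # 61
```

A tie-back like this is the most basic check a statement preparer or auditor performs; a failure would indicate that either the footnote detail or the statement line is misstated.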
This type of information, along with other required disclosures, could serve as a key indicator of how well the Navy is managing the funds provided by the Congress. A portion of the amounts identified as unexpended appropriations relate to funding provided through procurement or other appropriations that are available for obligation for more than 1 year to fund Navy activities. However, this information, along with other required disclosures, can be used to monitor the Navy’s long-standing problems in fully utilizing its resources. For example, OMB requires that agencies disclose the amount of unexpended cancelled appropriations in the note on contingent liabilities. Although the Navy’s fiscal year 1996 financial statement reporting did not include this information, the Navy’s year-end reports to the Treasury state that the Navy cancelled $1.8 billion and $1.5 billion in unexpended appropriations for fiscal years 1996 and 1997, respectively. Also, the Naval Audit Service has issued several reports that highlighted the Navy’s ongoing problems in promptly deobligating unneeded funds that could be better utilized for critical Navy mission needs. In addition, beginning in fiscal year 1998, the Navy will be required to prepare a Statement of Budgetary Resources, which will provide decisionmakers with added information on the status of the Navy’s use of its resources. Although Navy officials represented their fiscal year 1996 financial statements—the first-ever attempt to prepare comprehensive financial statements for the Navy—to be based on the best information available, the usefulness of Navy’s financial statement disclosures is limited at best due to the previously discussed problems with accuracy, reliability, and completeness. The footnotes to the Navy’s financial statements, which should serve as an excellent source of relevant, detailed information on its operations, are lacking in detail and present abnormal information. 
For example, the statements included a number of footnotes that provided only summary charts or tables or grossly abnormal balances, such as large negative balances in what would normally be expected to be accounts with positive balances, without any accompanying detail or explanation. In addition, because fiscal year 1996 was a first-year effort, the Navy’s general fund financial statements do not offer the benefit of comparative data on the prior year, which can provide useful analysis on trends and changes from year to year. As the Navy and DFAS improve on their first-year efforts to develop reliable financial statements for the Navy, and when the problems identified in the auditors’ reports are corrected, knowledgeable users of the Navy’s financial statements will be better able to identify key issues that may be of interest to budget and program managers. Recently adopted federal accounting standards are intended to enhance federal financial statements by requiring that government agencies show the complete financial results of their operations and provide relevant information on agencies’ true financial status. In addition to the new requirement for the Statement of Budgetary Resources previously mentioned, two other recently adopted accounting standards are particularly significant in terms of the additional information that could be made available to Navy budget and program managers in the future, if the standards are implemented effectively. Specifically, the standards call for reporting on the Navy’s costs associated with (1) the disposal of various types of assets, including environmental clean-up costs, and (2) deferred maintenance. Issued in December 1995 and effective beginning with fiscal year 1997, Statement of Federal Financial Accounting Standard (SFFAS) No. 5, Accounting for Liabilities of the Federal Government, requires the recognition of a liability for any probable and measurable future outflow of resources arising from past transactions. 
The statement defines probable as that which is likely to occur based on current facts and circumstances. It also states that a future outflow is measurable if it can be reasonably estimated. Because disposal costs are both probable and measurable, they are to be reported under SFFAS No. 5. The Congress has recognized the importance of accumulating and considering disposal cost information. In the National Defense Authorization Act for Fiscal Year 1995, the Congress required DOD to develop life-cycle environmental costs, including demilitarization and disposal costs, for major defense acquisition programs. This means that the Navy is required to estimate and report, as part of the information presented in its financial statements, the estimated cost to dispose of its major weapon systems and the cost to clean up the environmental hazards found on its land and facilities. In our recent report on DOD’s efforts to implement the new reporting requirements as they relate to the disposal of nuclear submarines and ships, we stated that this reported liability could be made more meaningful to decisionmakers if it was presented by approximate time periods when the disposals are expected to occur. Such information could provide important context for congressional and other budget decisionmakers on the total liability by showing the annual impact of disposals that have already occurred or are expected to occur during the budget period. Furthermore, if the time periods used to present these data were consistent with the timing of when funding was being requested for disposal costs as reflected in budget justification documents, such as DOD’s Future Years Defense Program, this type of disclosure would provide a link between budgetary and accounting information, one of the key objectives of the CFO Act. In addition, SFFAS No. 
6, Accounting for Property, Plant, and Equipment, issued November 30, 1995, and effective beginning with fiscal year 1998, requires recognition of deferred maintenance amounts by major class of asset along with disclosure of the method used to measure the extent of deferred maintenance needed for each asset class. In our recent report on DOD’s efforts to implement this standard as it relates to Navy ships, we stated that accurate reporting of deferred maintenance is important for key decisionmakers such as the Congress, DOD, and Navy managers and can be an important performance indicator of mission asset condition, which is a key readiness factor. While the existence of deferred maintenance may indicate a need for additional resources for maintenance, such resources may already be available within the current funding of the military services. As Navy and DFAS move to put in place the systems and procedures required to comply with these new accounting standards, they will not only be better able to prepare a more useful set of Navy financial statements but also to better support more informed programmatic and budgetary decision-making in these areas. Currently, the Navy is unable to produce accurate financial information needed to support either its financial statements or operations and budgetary decision-making. However, through the impetus provided by the CFO Act, it has an opportunity to better integrate financial information into budget and operational management decisions. To seize this opportunity, the Navy and DFAS must establish a greater linkage between financial statement preparation and reporting processes, and resource allocation and oversight decisions. However, such a linkage will yield the benefits envisioned by the CFO Act only if the Navy’s financial information is dramatically improved to the point where it is generated by a systematic process and its accuracy can be verified. 
Auditable financial statements produced by this type of disciplined process provide the Congress and managers with assurance that the information being used to support the statements is accurate and can therefore be used with confidence for day-to-day decision-making. In this context, efforts to produce auditable financial statements on an annual basis should be viewed not as an end in itself but as the capstone of a vigorous financial management program supported by effective information systems that produce accurate, complete, and timely information for decisionmakers throughout the year. Achieving the far-reaching financial management goals established by the CFO Act, particularly in light of the serious and widespread nature of the Navy’s long-standing financial problems, will only be possible with the sustained, demonstrated commitment of top leaders in DOD, the Navy, and DFAS. In commenting on a draft of this report, DOD stated that it is firmly committed to providing taxpayers and the Congress with accurate financial statements that can pass rigorous audit tests. DOD also said that for some time it has acknowledged that significant improvements are required in its financial management systems and reporting, and that many of the problems found during the audits of the Navy’s fiscal year 1996 financial statements remain. It also stated that financial management is a high priority in DOD and that it is working to improve the basic financial procedures and systems used to collect, categorize, and report financial transactions. DOD expressed concern with what it termed the report’s implication that the Navy’s budget is overstated or could be reduced because its financial statements omitted a line, excluded a footnote, or were otherwise deficient. DOD stated that such an implication is grossly misleading and undermines the rigorous planning, programming, and budgeting process within both DOD and the Navy. 
In addition, DOD maintained that the report leaves the erroneous impression that there have been no significant improvements in the Navy’s financial operations since our review of the Navy’s fiscal year 1986 financial reports. Furthermore, DOD stated that the report makes broad assertions that deficiencies in the Navy’s financial statements adversely impact the ability to make informed programmatic and budgetary decisions. In this regard, DOD contended that the report did not acknowledge that many of the deficiencies cited, including those from audit reports, are reviewed as part of the Navy’s day-to-day management and internal budget review processes, and again by the Office of the Secretary of Defense. We disagree that our report implies that the Navy’s budget is overstated or could be reduced merely because data were omitted from the Navy’s financial statements or because the statements were deficient in some other way. Our report focuses on deficiencies in the management systems and processes that are used to support not only the Navy’s financial statement preparation, but its budgetary and program decision-making. As a result, the deficiencies discussed in our report focus on those errors or omissions in the Navy’s financial reporting that also raise serious questions about whether decisionmakers had sufficiently reliable information available to make informed budgetary resource allocation decisions. With respect to DOD’s assertion that our report provides a misleading impression that there have been no significant improvements in Navy’s financial operations, our finding that the Navy has been plagued with troublesome financial management problems for many years is warranted. We have not seen the level of expected improvement in the years that have passed since our report on the Navy’s fiscal year 1986 financial reporting. 
While we are encouraged with DOD’s stated high priority commitment to reforming its financial operations, significant errors, omissions, and misstatements remain uncorrected, as evidenced by the extent and nature of the deficiencies pointed out in auditors’ reports on their examination of the Navy’s fiscal year 1996 financial statements. Efforts to reform DOD’s financial operations, however well-intentioned, have not as yet resulted in the level of improvements needed to put in place a disciplined financial operation that will not only yield accurate, reliable information for the Navy’s financial statements, but also support its program and budget decision-making. It is for this reason that DOD financial management is on our list of high-risk government programs. Lastly, we are encouraged that the Navy auditors’ findings have been used and that the Navy has found them helpful in developing budget estimates. In addition, while the Navy’s planning, programming, and budgeting process was not the focus of the review requested for this report, we recognize that it has been in place for many years and is intended to provide a thorough review of all pertinent information, including the implications of auditors’ findings, in determining Navy budget estimates. However, the Navy should not be forced to rely on such alternative data development and validation procedures as a proxy for a systematic, disciplined financial management and reporting process. Such a process would provide accurate and reliable financial data to support the development of the Navy’s financial statements, as well as day-to-day program and budget decision-making. We are sending copies of this report to the Ranking Minority Member of the House Committee on the Budget, the Director of the Office of Management and Budget, the Secretary of Defense, the Secretary of the Navy, and the Director of the Defense Finance and Accounting Service. We will also send copies to other interested parties upon request. 
Please contact me at (202) 512-9095 if you or your staff have any questions concerning this report. Major contributors are listed in appendix IV.

Navy reporting entities reviewed (such as Depot Maintenance - Naval Shipyards) fell into one of three categories: financial statements prepared and opinion report issued; financial statements prepared and reviewed, but no opinion report issued; or financial statements prepared but not reviewed.

The following Naval Audit Service reports address the Navy's fiscal year 1996 financial statements:

Department of the Navy Fiscal Year 1996 Annual Financial Report: Report on Auditor's Opinion (Report No. 022-97, March 1, 1997).

Department of the Navy Fiscal Year 1996 Annual Financial Report: Report on Internal Controls and Compliance with Laws and Regulations (Report No. 029-97, April 15, 1997).

Department of the Navy Fiscal Year 1996 Annual Financial Report: Fund Balance with Treasury and Cash and Other Monetary Assets (Report No. 004-98, October 31, 1997).

Department of the Navy Fiscal Year 1996 Annual Financial Report: Property, Plant, and Equipment, Net (Report No. 051-97, September 25, 1997).

Department of the Navy Fiscal Year 1996 Annual Financial Report: Government Property Held by Contractors (Report No. 046-97, August 14, 1997).

Department of the Navy Fiscal Year 1996 Annual Financial Report: Ammunition and Ashore Inventory (Report No. 048-97, September 25, 1997).

Department of the Navy Fiscal Year 1996 Annual Financial Report: Advances and Prepayments, Non-Federal (Report No. 049-97, September 19, 1997).

Department of the Navy Fiscal Year 1996 Annual Financial Report: Accounts Receivable, Net (Report No. 045-97, August 12, 1997).

Department of the Navy Fiscal Year 1996 Annual Financial Report: Accounts Payable and Accrued Payroll and Benefits (Report No. 006-98, November 14, 1997).

Department of the Navy Fiscal Year 1996 Annual Financial Report: Department of Defense Issues (Report No. 015-98, December 19, 1997).

Fiscal Year 1996 Consolidating Financial Statements of the Department of the Navy Defense Business Operations Fund (Report No. 040-97, June 16, 1997).
The following are GAO’s comments on the Department of Defense’s letter dated March 9, 1998. 1. See the “Agency Comments and Our Evaluation” section of this report. 2. Our analysis of the Naval Audit Service reports considered the Under Secretary of Defense (Comptroller) and Defense Finance and Accounting Service comments that were included in the reports. 3. As stated in the report, the Navy, like all other federal entities, has been required to prepare and submit a prescribed set of financial information to the Treasury since 1950. In addition, the federal financial accounting standards to which DOD refers were, for the most part, not required or implemented in the fiscal year 1996 statements. We refer to these standards only in the report’s discussion of financial data that will be available when DOD fully implements these provisions. 4. The report was revised to indicate that the checks returned to DFAS applied not only to the Navy, but also to the other military services and Defense agencies. 5. To ensure proper payment, financial management personnel are dependent upon obtaining accurate and complete contract information. To the extent that the financial systems do not contain accurate and complete information from feeder systems or the feeder systems provide erroneous information on, for example, contract modifications, overpayments can occur. 6. As discussed in our August 1996 report, we disagree that operating materials and supplies held on board ships are considered to be in the hands of end users. These items should be reported on the Navy’s financial statements as operating materials and supplies. In addition, we agree that decisions on inventory purchases are not based on amounts reported in the Navy’s financial statements (or, as in the case of the $7.8 billion in operating materials and supplies, amounts excluded from the statements). 
However, as discussed in our report, the Navy auditors and we have found deficiencies in the management systems and processes which are used not only to support the inventory values included in the Navy’s financial statements, but also to support the Navy’s budgetary and program decision-making concerning needed inventories. As a result, the deficiencies discussed in our report concern not just errors or omissions in the Navy’s financial reporting, but also raise questions about whether decisionmakers had sufficiently reliable information available on which to make informed budgetary resource allocation decisions. 7. Undistributed collections and disbursements represent amounts reflected in Treasury’s records but not recorded by the Navy. The Navy then recorded these amounts in its department-level accounting records without having corroborating support in the form of transaction detail needed to verify that these amounts accurately represent Navy activities. As a result, the Navy does not know whether its records are accurate. 8. While DOD has efforts underway that are intended to match disbursements against valid obligations before payment, this is not currently required for all payments. Consequently, until DOD can establish controls to ensure that all disbursements can be related to a valid obligation at the time of payment, DOD cannot rely on its obligation records for funds control purposes and will continue to lack assurance that it will have sufficient funding available to pay its expenses. 9. DOD’s comment concerning an adequate accounting system at the DFAS Cleveland Center relates to a quote from a Naval Audit Service report and has no impact on the point being made in our report. 10. We disagree that simply recording obligations ensures that fund balances are not exceeded. DOD, under law, must maintain accurate and reliable obligation and disbursement records. The Antideficiency Act prohibits not only overobligations but overexpenditures as well. 
Obligated balances forecast expenditures and, in that regard, offer some measure of funds control by, in effect, “setting aside” funds for these projected amounts. However, even if all obligations have been recorded, actual expenditures can be more (or less), making it necessary to adjust obligated amounts when payment occurs. By not matching payments to obligations at the time of disbursement, the Navy has undermined this control feature. 11. The report was revised to omit reference to the specific Antideficiency Act violations previously reported by the Navy. 12. The report was revised to indicate that DOD officials stated that the entire $2.5 million discussed in the Naval Audit Service report may not represent erroneous or duplicate payments. 13. After an appropriation cancels, Public Law 101-510 permits agencies to liquidate obligations that had been properly charged to the appropriation during its period of availability. However, the liquidation must be from current funds available for the same purpose, and the agency may not charge expenditures against such accounts in excess of the lesser of 1 percent of that appropriation or the unexpended balance of the cancelled appropriation. To track compliance with these limitations, agencies need to maintain in their records for the cancelled appropriation memorandum account entries to track transaction amounts. We do not agree that maintaining memorandum account balances requires the reopening of cancelled accounts, as implied by DOD’s comments. Public Law 101-510 prohibits agencies from using cancelled appropriations for any purpose whatsoever. As indicated in our report, reopening cancelled accounts provides an opportunity for an agency to inappropriately charge current disbursements against reopened cancelled appropriations, thereby weakening the controls the Congress established in Public Law 101-510. 14. 
While information on the status of the Navy's use of its resources is currently available, it has not been audited. Only when this information is compiled through a disciplined process that can withstand the rigors of a financial audit test will congressional and Navy decisionmakers have assurance that this information is accurate and reliable. 15. We agree that OMB is responsible for providing minimum guidance for all agencies to follow in preparing their financial statements. However, it remains the responsibility of each agency to expand on these minimum requirements, as appropriate, so that its financial statements (1) provide sufficiently detailed information on the unique circumstances and operations of that agency and (2) are most relevant and informative for oversight officials and other users. 16. While the Navy was required to record a liability for certain environmental cleanup costs based on existing accounting standards at the date of the financial statements, this report addresses audited information that will be available upon full implementation of the federal financial accounting standards. As a result, the report was revised to delete reference to a Naval Audit Service finding concerning reporting a projected environmental cleanup cost liability. William Cordrey, Senior Auditor
Pursuant to a congressional request, GAO reported on the programmatic and budgetary implications of the financial data deficiencies enumerated by auditors' examination of the Department of the Navy's fiscal year 1996 financial statements. GAO noted that: (1) the extent and nature of the Navy's financial deficiencies identified by auditors, including those that relate to supporting management systems, increase the risk of waste, fraud, and misappropriation of Navy funds and can drain resources needed for defense mission priorities; (2) critical weaknesses identified include the following: (a) information on $7.8 billion in inventories on-board ships was not included in Navy's year-end financial statements; (b) failure to follow prescribed procedures for controlling Navy's cash account with Treasury contributes to continuing disbursement accounting problems; (c) until duplicate and erroneous vendor payments were identified and collected as a result of financial audit, the Navy not only paid too much for goods and services but, more importantly, was unable to use these funds to meet other critical programmatic needs; and (d) breakdowns in the controls relied on to prevent or detect material financial errors mean that the Navy cannot tell if its business-type support operations are operating on a break-even basis as intended; (3) although the Navy's 1996 financial statements--its first effort to prepare comprehensive financial statements--did not include all required information and were not verifiable, they still provided data GAO could use to identify several financial issues that may be of interest to budget and program managers; (4) for example, footnote disclosures on the Navy's accounts
receivable and unexpended appropriations raise questions about whether future budget resources may be needed or whether there may be opportunities to reduce resource requirements; (5) when the findings presented in the auditors' reports are corrected, the financial statements themselves and related notes can become an excellent source of information on the financial condition and operations of the Navy; and (6) also, if properly implemented, new accounting standards that require information such as data on asset disposal costs and deferred maintenance will provide the Navy and the Defense Finance and Accounting Service with an opportunity to improve the extent and usefulness of information that is currently available to support program decision-making and accountability in these areas.
Chattanooga is located in VA's Mid South Healthcare Network, which comprises Tennessee and portions of nine other states. For CARES purposes, the Mid South Network designated a 75-county area as a health care delivery market—referred to as the Central Market. In fiscal year 2001, 78,656 enrolled veterans resided in this market. As figure 1 shows, Chattanooga, Tennessee, is located in the southeastern part of the Central Market, which serves veterans residing in the central portion of Tennessee, as well as veterans in southern Kentucky and northern Georgia. Within this market, VA currently operates hospitals located in Murfreesboro and Nashville, Tennessee, and six community-based clinics (including one located in Chattanooga). Although VA does not operate a hospital in the Chattanooga area, a broad range of non-VA medical services and providers is available locally, including 16 hospitals. Of the 5 hospitals located in the city itself, the largest is the Erlanger Medical Center—a tertiary care referral center and the region's only Level One trauma center. In addition, there is a wide variety of specialty care, such as cardiology and rheumatology, provided by non-VA physicians in the Chattanooga area. Imaging, diagnostic, and laboratory services, such as endoscopy, colonoscopy, or nuclear medicine scanning, are also available. The range of inpatient medicine and surgery services available at Chattanooga-area hospitals is comparable to services provided at VA hospitals in Nashville and Murfreesboro, according to VA Mid South Network officials. For purposes of our study, we defined the Chattanooga area as Hamilton County, which includes the City of Chattanooga, and 17 surrounding counties. In fiscal year 2001, 21 percent (16,379 enrolled veterans) of all enrolled veterans in the Central Market resided in this area. Figure 2 highlights the 18-county Chattanooga area.
As figure 3 shows, VA estimates that the veteran population in the Chattanooga area will decline by about 25,600 veterans from fiscal year 2001 through fiscal year 2022—a decrease of almost 27 percent. During that same period, however, VA projects that Chattanooga-area veterans enrolled in VA’s health care system will rise by about 5,000—an increase of more than 30 percent. Moreover, within the Central Market, VA expects the enrolled veterans’ workload for inpatient hospital and outpatient primary and specialty care to double through fiscal year 2022, in large part, as a result of the projected growth in the Chattanooga-area enrolled population as well as the aging of that population. For example, 43 percent of the 16,379 enrolled veterans were 65 years of age or older as of September 2001. Almost all Chattanooga-area veterans faced travel times that exceeded VA’s travel time guidelines for accessing inpatient hospital care. Also, about half faced travel times that exceeded VA’s guideline for outpatient primary care. In addition, appointment waiting times for initial outpatient primary care and specialty care consultations exceeded VA’s guidelines, although VA officials recently have taken several steps to shorten appointment waiting times. Almost all (99 percent) of the 16,379 Chattanooga-area enrolled veterans, as of September 2001, faced travel times that exceeded VA guidelines for travel to the nearest VA hospitals in Murfreesboro and Nashville. Almost two-thirds of Chattanooga-area veterans whose travel times exceeded VA guidelines lived in five urban counties to which the 60-minute guideline applies—Hamilton and Bradley counties in Tennessee and Catoosa, Walker, and Whitfield counties in Georgia. The rest (36 percent) lived in rural counties to which the 90-minute guideline applies. As figure 4 shows, Chattanooga is about 120 minutes by car from Murfreesboro, the nearest VA hospital. 
Therefore, those veterans residing in the five urban counties faced travel times to Murfreesboro or Nashville that were double VA's 60-minute urban travel guideline; veterans living in most of the 13 rural counties also faced travel times well beyond VA's 90-minute rural guideline. Moreover, VA provided over 95 percent of its inpatient hospital workload for Chattanooga-area veterans at VA hospitals in Murfreesboro and Nashville during fiscal year 2002, with less than 5 percent provided by non-VA hospitals in Chattanooga. During that fiscal year, Chattanooga-area veterans had a total of 685 admissions that resulted in a total workload of 7,213 bed days of care. Of these admissions, 580 (6,895 bed days of care) were to the VA hospitals in Murfreesboro or Nashville; the remaining 105 admissions (318 bed days of care) were to Chattanooga hospitals, primarily the Erlanger Medical Center. Local admissions were few, in part, because Mid South Network officials imposed restrictions on the VA Chattanooga clinic's referral practices. For example, when purchasing care on a fee-for-service basis, providers were to refer veterans to local hospitals only when care was not available at VA hospitals in Murfreesboro or Nashville or the veterans' medical conditions precluded travel to those sites. Also, in implementing a contract with the Erlanger Medical Center, network officials instructed VA clinic providers to limit referrals to Erlanger to only veterans with less severe medical conditions, such as those who did not require surgery or hospital stays longer than 5 days. Network officials stated that restrictions were not related to the availability of local care, in that the array of services available at Chattanooga-area hospitals was comparable to services provided at VA hospitals in Murfreesboro and Nashville.
Rather, they said that such restrictions were necessary to manage resources effectively, as well as to ensure the patient workload needed to support medical education activities at VA's Murfreesboro hospital. We estimate that during fiscal year 2002, these referral restrictions applied to 246 admission decisions that were recommended by Chattanooga clinic providers. Of these admissions, almost 60 percent were to VA hospitals in Murfreesboro or Nashville rather than non-VA hospitals in Chattanooga and were generally consistent with the restrictions imposed by the Mid South Network. The remaining 40 percent (101 admissions) were to non-VA hospitals in Chattanooga, with about two-thirds financed on a fee-for-service basis and the rest through the VA-Erlanger contract. In fiscal year 2001, more than half (about 8,400) of the 16,379 Chattanooga-area enrolled veterans faced travel times that exceeded VA's 30-minute travel guideline for accessing care at VA's nearest primary care clinic. The remaining 8,000 Chattanooga-area enrolled veterans lived within 30 minutes of VA community-based clinics in Chattanooga, Tullahoma, or Knoxville. Although VA also operates outpatient primary care clinics in its hospitals in Murfreesboro and Nashville, these clinics are all considerably more than 30 minutes' travel time from Chattanooga-area veterans' residences. Of the 8,400 enrolled veterans who faced travel times to a VA primary care clinic that were longer than 30 minutes, about 3,375 (40 percent) were in four counties, each of which had from 775 to 884 such enrolled veterans. The remaining 5,030 enrolled veterans were in 14 other Chattanooga-area counties, each of which had from 117 to 608 enrolled veterans who faced travel times that exceeded VA's guideline. As figure 5 shows, 4 counties had fewer than 250 such veterans.
Of 1,858 Chattanooga-area veterans awaiting initial visits with Chattanooga clinic outpatient primary care providers during fiscal year 2002, fewer than 7 percent (126) received appointments within VA’s appointment waiting time guideline of 30 days or less from the time of the request. Chattanooga clinic officials explained that these scheduling delays were exacerbated by increased requests for outpatient primary care initial appointments—averaging 50 per week. In response, Chattanooga clinic officials have taken a variety of actions to expedite the scheduling of initial outpatient primary care appointments. For example, they have increased the number of providers and necessary support personnel and extended the clinic’s hours of operation to include Saturdays and evenings. Also, they made arrangements for a provider at VA’s Tullahoma, Tennessee, clinic to see some Chattanooga-area enrolled veterans for initial outpatient primary care appointments, with subsequent outpatient primary care appointments scheduled with Chattanooga clinic providers. As a result of these efforts, waiting times for many Chattanooga-area veterans were shorter than they otherwise would have been, although they continued to exceed VA’s 30-day guideline. For example, in the first quarter of fiscal year 2002, 99 percent of veterans seeking initial primary care appointments waited longer than 6 months; by the fourth quarter of fiscal year 2002, 66 percent waited 6 months or longer. Moreover, Chattanooga clinic officials told us that appointments for enrolled veterans seeking initial outpatient primary care visits, as of July 2003, were generally scheduled within 60 days—a significant improvement but still twice as long as VA’s 30-day appointment waiting time guideline. Clinic officials said that given the challenges involved in hiring providers and support staff at the clinic and the increasing workload, further waiting time reductions will be difficult to achieve. 
Waiting times for outpatient specialty care appointments that exceed VA’s 30-day guideline have been a long-standing problem for Chattanooga-area veterans. For example, using data from VA’s 1999 IG report on Chattanooga veterans’ care, we found that for veterans served at the Chattanooga clinic, only 9 percent of 353 sampled outpatient specialty consultation requests were scheduled within 30 days. Moreover, 45 percent of Chattanooga-area veterans seeking outpatient specialty care appointments waited more than 60 days, including 16 percent who waited longer than 90 days. Similarly, our analysis of 468 requests for outpatient specialty care appointments made by Chattanooga clinic providers during October 2002 found long waiting times. For example, 21 percent of these specialty care appointments took more than 90 days to be scheduled, compared to 16 percent in 1999, based on data from the IG report. However, a slightly higher percentage of the October 2002 requests for appointments were scheduled within 30 days—13 percent compared to 9 percent, based on the IG’s data. However, during fiscal year 2003, VA officials took several steps—such as expanded use of non-VA specialists in the Chattanooga area—that they said significantly shortened the long waiting times that enrolled veterans previously experienced to obtain outpatient specialty care appointments. Chattanooga clinic officials informed us that as of July 2003, providers’ requests for outpatient specialty care appointments—with the exception of dermatology, neurology, and urology appointments—were generally scheduled within VA’s 30-day waiting time guideline. Chattanooga clinic officials attributed the fiscal year 2003 reduction in the time necessary to obtain an outpatient specialty care appointment primarily to the expanded use of local specialists on a fee-for-service basis. 
Other steps that VA officials took to reduce the time necessary to obtain outpatient specialty care appointments included increased use of telemedicine—a system that allows patients and providers physically located in a specially equipped Chattanooga clinic exam room to consult with VA specialists in Murfreesboro and Nashville without actually traveling to those locations. Also, support staff in the Chattanooga clinic was increased, including the addition of an administrator to coordinate the scheduling of local fee-basis specialty care. To emphasize the importance of VA's 30-day appointment waiting time guideline to clinic staff and the flexibility of obtaining care locally, the clinic manager said that when one provider could not schedule an appointment within 30 days, the manager contacted other local providers to determine who could meet the time frame, so that VA's waiting time guideline could be met as often as possible. VA's draft CARES plan includes a proposal to shorten Chattanooga-area veterans' travel times by purchasing inpatient care from non-VA hospitals in Chattanooga. However, it also proposes to shift inpatient workload from VA's Murfreesboro hospital to VA's Nashville hospital, which would lengthen travel times for Chattanooga-area veterans who are unable to receive care locally and who would have otherwise been served at the Murfreesboro hospital. Regarding outpatient care, the draft CARES plan calls for a range of actions, including opening new community-based clinics, that could shorten both travel and appointment waiting times for initial outpatient primary care and specialty care appointments. Under the draft CARES plan, travel times for inpatient care would decrease for some veterans and increase for others.
The plan proposes increased purchasing of inpatient medicine and surgery from non-VA hospitals in Chattanooga, as well as shifting inpatient surgery and medicine workload not necessary to support the needs of long-term psychiatry and nursing home patients in the Murfreesboro facility to its hospital in Nashville. The plan, however, does not describe the extent to which these changes could affect veterans in the 18-county Chattanooga area. To assess the potential impact of the proposed changes, we compared VA’s workload data for Chattanooga-area veterans during fiscal year 2002 and Mid South Network officials’ estimates of Chattanooga-area veterans’ workload to be provided in Murfreesboro, Nashville, and non-VA hospitals as a result of the proposed workload shifts. During fiscal year 2002, about 5 percent of Chattanooga-area veterans’ workload was purchased locally and 95 percent was provided in VA hospitals in Murfreesboro and Nashville. The draft national CARES plan does not quantify the extent to which VA plans to contract locally for the inpatient medicine and surgery workload in Chattanooga. Based on our analysis of workload projections contained in the plan’s supporting documents, we estimate that local purchases would amount to 29 percent of the inpatient medicine and surgery workload from the 18 Chattanooga-area counties, compared to 5 percent that VA purchased in fiscal year 2002—a fivefold increase. While this represents a significant improvement, it nonetheless means that over 70 percent of the inpatient medicine and surgery workload generated by Chattanooga-area veterans would continue to be served at the VA hospitals in Murfreesboro or Nashville. Furthermore, three-quarters of all local purchases are expected to benefit enrolled veterans in Hamilton and Bradley counties, primarily because these two counties have the largest enrolled populations. 
Mid South Network officials told us that, as in the past, the inpatient workload to be purchased from non-VA hospitals in Chattanooga would be based on the severity of veterans’ medical conditions. Chattanooga-area veterans with less severe conditions would be served in Chattanooga; those with more severe conditions would continue to travel to Nashville to receive inpatient care. However, VA expects to place fewer restrictions on local purchases of hospital care than under the VA-Erlanger contract. For example, under the draft CARES plan, inpatient surgeries would be performed locally; during fiscal year 2002, all such surgeries were routinely referred to VA hospitals in Murfreesboro or Nashville. Also, we estimate that shifting inpatient workload from the VA hospital in Murfreesboro to Nashville would result in lengthened travel times for Chattanooga-area veterans who do not have care purchased locally and who otherwise would have been served at the Murfreesboro hospital. We estimate that 14 percent of the Chattanooga-area veterans’ workload would be affected by the shift, given that an estimated 54 percent of the total workload would be handled in Nashville, compared to 40 percent in fiscal year 2002. Affected veterans would experience diminished access to inpatient care, in that their travel times, which already exceed VA’s travel time guidelines, would be about 20 minutes longer than the travel times they would experience if care were provided in Murfreesboro.
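The share arithmetic behind these estimates can be made explicit with a short sketch. The percentages are those reported above; the total workload figure is purely hypothetical, included only to show how the 14 percent estimate falls out of the change in Nashville’s share of the workload.

```python
# Sketch of the workload-share arithmetic described above.
# Shares are from the report; the workload total is hypothetical.
total_workload = 10_000  # hypothetical units of inpatient workload

local_share_2002 = 0.05      # purchased locally in fiscal year 2002
local_share_plan = 0.29      # estimated under the draft CARES plan
nashville_share_2002 = 0.40  # served in Nashville in fiscal year 2002
nashville_share_plan = 0.54  # estimated under the draft CARES plan

# Workload affected by the Murfreesboro-to-Nashville shift is the
# difference between the two Nashville shares.
affected_share = nashville_share_plan - nashville_share_2002
print(f"share affected by the shift: {affected_share:.0%}")
print(f"hypothetical workload affected: {affected_share * total_workload:.0f}")
print(f"growth in local purchasing: {local_share_plan / local_share_2002:.1f}x")
```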
The proposed clinics, to be located in McMinn, Roane, and Warren counties in Tennessee and Whitfield County in Georgia, would reduce travel times for about 2,700 (one-third) of those enrolled veterans so that they would be within the 30-minute guideline. The remaining 5,700 enrolled veterans would continue to face travel times longer than VA’s 30-minute guideline. Figure 6 shows the distribution by county of those Chattanooga-area enrolled veterans who, as of September 2001, would have lived more than 30 minutes from a VA primary care clinic had the four proposed clinics been operational in that year. The draft CARES plan does not provide a target date for opening the Chattanooga-area clinics because VA did not classify them as the highest national priorities and, as such, did not include them on the list of clinics to be opened by the end of fiscal year 2010. To be classified as a highest priority, a proposed clinic would have to bring more than 7,000 enrolled veterans who do not meet access guidelines within those guidelines. The four proposed clinics are significantly smaller in that they are expected to provide 30-minute access for a total of about 2,700 additional Chattanooga-area enrolled veterans. If the four new community-based clinics are opened, Mid South Network officials expect them to shift a portion of the outpatient primary and specialty care workload away from the Chattanooga clinic. Redistributing workload in this way would likely benefit many veterans whose outpatient primary and specialty care appointment waiting times exceed VA’s guidelines. Moreover, these new clinics would be expected to complement other actions that could enhance outpatient primary and specialty care access, including reduced appointment waiting times for Chattanooga-area veterans. For example, the draft CARES plan proposes to expand capacity at existing community-based clinics and increase the use of telemedicine and purchases of specialty outpatient services from non-VA providers.
The plan does not provide specifics or time frames for what, where, or when such actions would occur. We recognize that, in making nationwide CARES decisions, the Secretary of Veterans Affairs will need to make trade-offs regarding the costs and benefits of alternatives for better aligning VA’s capital assets and services. As part of this process, the Secretary will need to decide whether additional improvements to access, beyond those in the draft national CARES plan, are warranted in the Chattanooga area. Although the draft CARES plan proposes actions that could enhance Chattanooga-area veterans’ access to VA health care, the majority of Chattanooga-area veterans are expected to continue to face travel times for inpatient medicine and surgery services that far exceed VA’s inpatient travel guidelines, even if VA purchases an estimated 29 percent of inpatient workload from non-VA, Chattanooga-area providers as the draft CARES plan proposes. Moreover, access to hospital care for some Chattanooga-area veterans could actually worsen because the proposed transfer of inpatient workload from VA’s Murfreesboro hospital to its Nashville hospital would require some veterans previously served in Murfreesboro to drive farther for inpatient care, affecting an estimated 14 percent of Chattanooga-area veterans’ workload. Given that the non-VA hospitals in Chattanooga can provide an array of inpatient medicine and surgery services comparable to VA’s hospitals in Murfreesboro and Nashville, it seems possible that VA could purchase more than 29 percent of Chattanooga-area veterans’ inpatient workload locally. In addition, even though the draft CARES plan proposes opening four community-based clinics, these clinics would likely not be opened before fiscal year 2011.
Although they would enhance outpatient access for 2,700 Chattanooga-area veterans, about 5,700 enrolled veterans would continue to face travel times for outpatient primary care that exceed VA’s guideline because existing and proposed clinics are more than 30 minutes from where they live. We recommend that, as part of his deliberations concerning whether additional access improvements for Chattanooga-area veterans beyond those contained in the draft CARES plan are warranted, the Secretary of Veterans Affairs explore alternatives such as purchasing inpatient care locally for a larger proportion of Chattanooga-area veterans’ workload, particularly focusing on those veterans who may experience longer travel times as a result of the proposed shift of inpatient workload from Murfreesboro to Nashville; expediting the opening of the four proposed community-based clinics; and providing primary care locally for more of those veterans whose access will remain outside VA’s travel guideline, despite the opening of the four clinics. In written comments on a draft of this report, VA’s Under Secretary for Health thanked us for our recommendations and stated that he will provide them to the Secretary for consideration during his review of the CARES Commission’s report and ask that he consider them in the final CARES decision-making process. VA also provided technical comments that we included, where appropriate, to clarify or expand our discussion. We are sending copies of this report to the Secretary of Veterans Affairs and other interested parties. In addition, this report will be available at no charge on GAO’s Web site at http://www.gao.gov. We will also make copies available to others upon request. If you or your staff have any questions about this report, call me at (202) 512-7101. Other GAO staff who contributed to this report are listed in appendix II.
Our objectives were to (1) assess how Chattanooga-area veterans’ access to inpatient hospital and outpatient primary and specialty care compared to the Department of Veterans Affairs’ (VA) established travel time and appointment waiting time guidelines and (2) determine how VA’s draft Capital Asset Realignment for Enhanced Services (CARES) plan could affect Chattanooga-area veterans’ access to such care. For purposes of our work, Chattanooga-area veterans comprise those residing in 18 counties—Hamilton County, which includes the city of Chattanooga, and 17 surrounding counties; the 18 counties are all closer (as measured by travel time) to the VA clinic and non-VA hospitals in Chattanooga than to VA hospitals and clinics in Murfreesboro and Nashville. We obtained information from and interviewed officials at VA’s Mid South Network and its Chattanooga clinic; VA headquarters, including the CARES National Program Office; the Erlanger Medical Center in Chattanooga, Tennessee; and the VA Inspector General’s Office of Healthcare Inspections. Regarding travel times, we examined how Chattanooga-area veterans’ access to VA health care compared to VA guidelines by using a model developed by the Department of Energy to calculate the time needed for enrolled veterans to travel from their residences to the nearest VA hospitals and clinics. This model takes into account key variables affecting travel times, including speed limits attainable on different types of roads, such as rural roads or interstate highways. We evaluated its methodology and assumptions and found them to be sufficiently accurate for our purposes. We used VA’s CARES databases for demographic and workload information for the 16,379 veterans from those 18 counties who were enrolled in VA’s health care system as of fiscal year 2001.
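The travel-time calculation can be illustrated with a simplified sketch. The route segments, distances, and speed limits below are hypothetical, and the actual Department of Energy model accounts for more variables; the sketch only shows how road-type speeds combine into a door-to-door estimate that can then be compared with VA’s 30-minute outpatient guideline.

```python
# Simplified, hypothetical sketch of a driving-time estimate in the
# spirit of the Department of Energy model described above.
# Each segment is (distance in miles, attainable speed in mph).
route_to_nearest_clinic = [
    (3.0, 30),   # city streets
    (12.0, 45),  # rural road
    (20.0, 65),  # interstate highway
]

def travel_minutes(segments):
    """Total driving time in minutes across all road segments."""
    return sum(miles / mph * 60 for miles, mph in segments)

minutes = travel_minutes(route_to_nearest_clinic)
print(f"estimated travel time: {minutes:.0f} minutes")
print(f"within 30-minute guideline: {minutes <= 30}")
```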
We compared these results with the inpatient and outpatient primary care travel time guidelines that VA used in its CARES planning to determine the percentage of enrollees, by county, who lived within the inpatient and outpatient access guidelines. We did not analyze travel times for outpatient specialty care because VA did not have guidelines for such care. In addition, we determined Chattanooga-area veterans’ access to inpatient care at non-VA Chattanooga hospitals by obtaining inpatient admissions data and other information from officials of the Mid South Network; the VA Chattanooga clinic; the Erlanger Medical Center in Chattanooga; and VA’s network data service centers in Atlanta, Georgia; Chicago, Illinois; Tuscaloosa, Alabama; and Durham, North Carolina. We used VA’s Computerized Patient Record System to extract data from 60 of 580 medical records to compile a generalizable profile of all fiscal year 2002 admissions of Chattanooga-area veterans to VA hospitals in Murfreesboro and Nashville. To evaluate information contained in the VA-Erlanger inpatient contract, we reviewed contract documents and conducted interviews with VA’s clinic staff and network officials, including those in the network’s business office, as well as legal and other officials from the Erlanger Medical Center. Regarding waiting times, we interviewed Mid South Network and Chattanooga clinic staff and analyzed workload data compiled by clinic staff. For example, we analyzed the clinic’s fiscal year 2002 waiting lists to identify the number of veterans who enrolled for primary care and the number of days they waited for their first appointment with a primary care provider. We compared these results to VA’s 30-day appointment waiting time guideline. In addition, using automated medical records and clinic data, we collected information on Chattanooga clinic providers’ requests for specialty consultations.
We used this information to determine the number of days needed to obtain an appointment with a specialist. In May 2003, we reviewed all such requests made by clinic providers in October 2002, selecting this time frame to ensure that VA staff had sufficient time to schedule the requested appointments by the time we conducted our review. We then analyzed the results from this review and compared these results to VA’s 30-day waiting time guidelines and also to the waiting times reported by VA’s Inspector General in his office’s 1999 performance review of the Chattanooga clinic. To determine how VA’s draft CARES plan could affect Chattanooga-area veterans’ access to VA inpatient health care services, we examined the draft national CARES plan; the Mid South Network’s CARES planning documents; and workload data produced by VA’s CARES Program Office, the Mid South Network office, and the Chattanooga clinic. We also held discussions with VA officials. To evaluate the effects of the CARES proposal to shift inpatient workload from VA’s Murfreesboro hospital to Nashville and non-VA hospitals in Chattanooga, we analyzed Mid South Network data for Chattanooga-area veterans’ inpatient workload at those locations during fiscal year 2002 and estimated the workload that would be served at those locations if the CARES proposal were implemented. In addition, we used the Department of Energy driving time model to analyze the extent to which access would change if VA opened the additional primary care clinics proposed in the draft national CARES plan. Also, we analyzed the reliability of key databases to ensure that there were no material errors or inconsistencies. For example, we used information obtained through our medical record review to cross-check inpatient workload data regarding admissions to Murfreesboro and Nashville during fiscal year 2002 and found those data to be sufficiently reliable.
Also, we compared outpatient specialty consultation information with appointment scheduling information contained in VA’s computerized record system. Lastly, we compared CARES demographic data on Chattanooga-area veterans with data in VA’s national enrollment data file for fiscal year 2002.

Lisa Gardner, Julian Klazkin, John Mingus, Daniel Montinez, Keith Steck, and Paul Reynolds made major contributions to this report.

VA Health Care: Framework for Analyzing Capital Asset Realignment for Enhanced Services Decisions. GAO-03-1103. Washington, D.C.: August 18, 2003.
Department of Veterans Affairs: Key Management Challenges in Health and Disability Programs. GAO-03-756T. Washington, D.C.: May 8, 2003.
VA Health Care: Improved Planning Needed for Management of Excess Real Property. GAO-03-326. Washington, D.C.: January 29, 2003.
High-Risk Series: Federal Real Property. GAO-03-122. Washington, D.C.: January 2003.
Major Management Challenges and Program Risks: Department of Veterans Affairs. GAO-03-110. Washington, D.C.: January 2003.
VA Health Care: More National Action Needed to Reduce Waiting Times, but Some Clinics Have Made Progress. GAO-01-953. Washington, D.C.: August 31, 2001.
VA Health Care: Community-Based Clinics Improve Primary Care Access. GAO-01-678T. Washington, D.C.: May 2, 2001.
Veterans’ Health Care: VA Needs Better Data on Extent and Causes of Waiting Times. GAO/HEHS-00-90. Washington, D.C.: May 31, 2000.
VA Health Care: VA Is Struggling to Address Asset Realignment Challenges. GAO/T-HEHS-00-88. Washington, D.C.: April 5, 2000.
VA Health Care: Improvements Needed in Capital Asset Planning and Budgeting. GAO/HEHS-99-145. Washington, D.C.: August 13, 1999.
VA Health Care: Challenges Facing VA in Developing an Asset Realignment Process. GAO/T-HEHS-99-173. Washington, D.C.: July 22, 1999.
Veterans’ Affairs: Progress and Challenges in Transforming Health Care. GAO/T-HEHS-99-109. Washington, D.C.: April 15, 1999.
VA Health Care: Capital Asset Planning and Budgeting Need Improvement. GAO/T-HEHS-99-83. Washington, D.C.: March 10, 1999.
Executive Guide: Leading Practices in Capital Decision-Making. GAO/AIMD-99-32. Washington, D.C.: December 1998.
VA Health Care: Status of Efforts to Improve Efficiency and Access. GAO/HEHS-98-48. Washington, D.C.: February 6, 1998.

Veterans residing in Chattanooga, Tennessee, have had difficulty accessing Department of Veterans Affairs (VA) health care. In response, VA has acted to reduce travel times to medical facilities and waiting times for appointments with primary and specialty care physicians. Recently, VA released a draft national plan for restructuring its health care system as part of a planning initiative known as Capital Asset Realignment for Enhanced Services (CARES). GAO was asked to assess Chattanooga-area veterans’ access to inpatient hospital and outpatient primary and specialty care against VA’s guidelines for travel times and appointment waiting times and to determine how the draft CARES plan would affect Chattanooga-area veterans’ access to such care. Almost all (99 percent) of the 16,379 enrolled veterans in the 18-county Chattanooga area, as of September 2001, faced travel times that exceeded VA’s guidelines for accessing inpatient hospital care. During fiscal year 2002, only a few Chattanooga-area veterans were admitted to non-VA hospitals in Chattanooga—constituting about 5 percent of inpatient workload. In addition, over half (8,400) of Chattanooga-area enrolled veterans faced travel times that exceeded VA’s 30-minute guideline for outpatient primary care. Also, waiting times for scheduling initial outpatient primary and specialty care appointments frequently exceeded VA’s 30-day guideline. VA’s draft CARES plan would shorten travel times for some Chattanooga-area veterans but lengthen travel times for others.
Under the plan, the amount of inpatient care VA purchases from non-VA hospitals in Chattanooga would increase from 5 percent to 29 percent, thereby reducing those veterans’ travel times to within VA’s guidelines. The plan also proposes to shift some inpatient workload from VA’s Murfreesboro hospital to its Nashville hospital. As a result, an estimated 54 percent of inpatient workload for Chattanooga-area enrolled veterans would be provided in Nashville compared to 40 percent in fiscal year 2002, thereby lengthening some veterans’ travel times by about 20 minutes. The plan also proposes opening four new community-based clinics, which would bring about 2,700 more Chattanooga-area enrolled veterans within VA’s 30-minute travel guideline for primary care, leaving about 5,700 enrolled veterans with travel times for such care that exceed VA’s guideline. These clinics likely would not open before fiscal year 2011, given priorities specified in the plan.
In the Medicare Part D program, drug plan sponsors compete to deliver prescription drug benefits and attract enrollees. Sponsors offer, through their PDPs, one or more benefit packages that differ in their levels of premiums, deductibles, cost sharing, and coverage in “the gap”—the period when beneficiaries would otherwise pay all of the costs of their drugs. Sponsors must offer plans with standard prescription drug coverage established under the MMA or actuarially equivalent prescription drug coverage as approved by CMS, or may opt to offer plans with supplemental prescription drug coverage. In 2008, only about 10 percent of PDPs—5 of 47—offered the defined standard coverage. Each plan must cover a set of drugs—generally known as a formulary—that meets certain criteria. Beyond the minimum formulary requirements, sponsors have discretion in designing their formularies and may exclude particular drugs from coverage, thus contributing to variation in formularies across PDPs. For drugs included on a plan formulary, sponsors may assign drugs to tiers that correspond to different levels of cost sharing. In general, sponsors encourage the use of generic medications by putting them on a cost-sharing tier that imposes the lowest out-of-pocket costs on beneficiaries. PDP sponsors are required to implement drug utilization management programs to reduce costs when medically appropriate. As part of these programs, sponsors may apply various utilization management restrictions to specific drugs on their formularies. Utilization management restrictions may include (1) prior authorization, which requires the beneficiary to obtain the sponsor’s approval before a drug is covered for that individual; (2) quantity limits, which restrict the dosage or number of units of a drug provided within a certain period of time; and (3) step therapy, which requires that a beneficiary try lower-cost drugs before a sponsor will cover a more costly drug.
From year to year, sponsors can make changes in their plans’ benefit structure, including premiums, cost-sharing levels, formularies, and utilization management restrictions. Prior to the AEP, plan sponsors must submit bids to CMS for approval in order to implement significant changes to benefit structures that will go into effect the next benefit year. As part of this bid negotiation process, CMS reviews formularies and the design of the plan and benefits offered, including removals of drugs from formularies, use of utilization management restrictions, and the application of copayments for drugs. Sponsors have limited opportunities to change benefit structures in ways that may adversely impact enrollees, including removing particular drugs from formularies or moving drugs to higher cost-sharing tiers. Changes to plan benefit structures can have significant effects on a beneficiary’s out-of-pocket costs and access to particular drugs. For example, a 2008 analysis conducted by Avalere Health—a health care research and consulting firm—showed that average monthly premiums in the 10 most popular Medicare PDPs increased by 16 percent from 2007 to 2008. The analysis also noted that some of the most popular plans raised their premiums by more than 50 percent. In addition to changes in premiums, PDPs made significant changes from 2007 to 2008 in cost sharing and utilization management of formulary drugs, according to a study sponsored by the Kaiser Family Foundation. For example, the study found that the average copayment for a 30-day supply of nonpreferred name-brand drugs increased 13 percent, from $63.31 in 2007 to $71.31 in 2008. Additionally, it reported that PDPs increased their use of utilization management restrictions, such as step therapy and quantity limits, from 25 percent of a sample of the most commonly prescribed drugs in 2007 to 30 percent in 2008.
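The year-over-year changes quoted above are straightforward before-and-after percentage calculations; as a quick check, the copayment figures from the Kaiser Family Foundation study can be recomputed:

```python
# Recomputing the reported copayment increase from the cited figures.
copay_2007 = 63.31  # average 30-day copay, nonpreferred name-brand drugs, 2007
copay_2008 = 71.31  # same measure, 2008

pct_change = (copay_2008 - copay_2007) / copay_2007 * 100
print(f"copayment increase: {pct_change:.0f}%")  # rounds to the reported 13 percent
```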
The design of written materials for Medicare beneficiaries is particularly important in light of evidence that some older individuals have challenges reading and retaining written information. For example, studies have found that individuals ages 65 and older were less proficient than younger adults in locating information in documents and making health decisions based on what they read. Further, a 2003 national survey of adult literacy showed that 27 percent of Medicare beneficiaries were unable to understand information in short, simple texts. Previous GAO work highlights the importance of assuring that Part D materials, in particular, communicate clearly to this population. Our 2006 study noted that the reading levels of a sample of Part D materials exceeded the capacity of 40 percent of the seniors surveyed. Additionally, the Part D materials reviewed in this study did not include about half of 60 common elements of effective communication, such as language that is free of jargon and consists of familiar words in short sentences, sometimes referred to as “plain language.” As a result of these findings, we recommended that CMS ensure that its written materials describe the Part D benefit in a manner that is consistent with commonly recognized communications guidelines and is responsive to the reader’s needs. Since the mid-1990s, a group of federal employees from different agencies and specialties has promoted the use of plain language in government communications, particularly those that describe federal benefits and services for the public. Documents that conform to principles of plain language are organized with the reader in mind and:
include only the information the reader needs;
omit unnecessary words and use short sentences of 15 to 20 words;
avoid technical terms and use simple words and active voice;
facilitate comprehension using headings, tables, lists, and white space; and
incorporate customer feedback using surveys, focus groups, or protocol testing.
Currently, no formal plain language initiative is in place for federal executive agencies; however, some agencies have chosen to incorporate plain language principles in their documents. For example, officials from CMS’s Office of External Affairs indicated that they will conduct a plain language review of CMS documents if requested by agency staff. Other consumer research supported the use of information customized to individuals’ preferences and circumstances to reduce the risk that consumers dismiss the information as irrelevant. Under this approach, consumers are provided with only data that are most relevant to them, thus making it less likely they will be overwhelmed by the information communicated. One study noted the importance of providing streamlined information that helps individuals understand the consequences of their choices. CMS requires sponsors to mail all enrollees an ANOC that shows how various features of their drug plan—such as the premium, coverage, cost sharing, and formulary—will change for the next benefit year. In addition to the ANOC, sponsors are required to send beneficiaries other enrollment-related documents, such as the EOC, a Summary of Benefits (SB), and a comprehensive or abridged formulary. Table 1 describes selected information included in each of these documents, as prepared by sponsors in our study. For the 2008 AEP, sponsors could opt to mail the EOC along with the ANOC to beneficiaries for receipt by October 31, 2007, or separately mail the EOC to enrollees for receipt by January 31, 2008. If sponsors chose to mail EOCs separately, they were required to mail an SB along with the ANOC. For the 2009 AEP, CMS required plan sponsors to mail the ANOC, a formulary, and the EOC together for receipt by enrollees by October 31, 2008. In developing these materials for the 2008 AEP, sponsors could choose to adopt CMS’s model documents or create nonmodel documents that contained CMS’s required elements.
Sponsors are required to submit all AEP materials to CMS for review prior to mailing to enrollees. As a result, use of model versus nonmodel documents had implications for the amount of time for CMS review and the time frame for plan sponsors to mail materials. When sponsors used CMS model documents without modification, CMS reviewed the materials within 10 days; for documents considered nonmodel, CMS required a 45-day review period. For the 2009 AEP, CMS required plan sponsors to use a standardized ANOC-EOC with no modifications to the text permitted. PDP sponsors reported to CMS that nearly all enrollees received their ANOCs on time for the 2008 AEP, and most of our study sponsors used CMS’s model ANOC. Our analysis of CMS’s Readiness Checklist and information reported from certain sponsors indicated that 99.8 percent of the approximately 17.2 million PDP beneficiaries enrolled as of November 1, 2007, received ANOCs from their plan sponsors by the required October 31 deadline. Of the 35,630 beneficiaries who received ANOCs late, the majority (21,902) received them by November 15, 2007—the start of the AEP—and the remainder received them by December 10, 2007. Given the limited delays in distributing ANOCs, CMS did not take enforcement action against any of the sponsors reporting late mailings for the 2008 AEP. Six sponsors in our study adopted CMS’s model ANOC to inform beneficiaries of upcoming plan changes rather than develop their own ANOCs. Five of these six sponsors using the model reported doing so to qualify for a shorter CMS review period—10 days versus 45 days—and in some instances to help ensure that they could produce and send their ANOCs by the October 31 deadline. The remaining two study sponsors chose to develop nonmodel ANOCs for the 2008 AEP. Although their ANOCs contained all of CMS’s required information elements, we found that they were substantially different in format from the CMS template and from each other.
For example, these ANOCs contained sponsor-developed text rather than CMS’s model language. Also, information was presented in a different order and with different section headings than CMS’s model ANOC. These two sponsors told us that in electing to create nonmodel materials, such as the ANOC, they were able to highlight unique aspects of their plans. For example, one of the sponsors included information that one of its plans would begin covering the cost of administering Part D vaccines for the 2008 benefit year. Stakeholders have expressed various concerns regarding the readability of the ANOC, and we found that, prior to the 2008 and 2009 AEPs, CMS did not systematically evaluate its effectiveness in conveying plan changes to beneficiaries. Stakeholders in our study noted that, because CMS’s model ANOC was not beneficiary friendly, it was difficult for individuals to determine how changes would affect them personally. To help ensure that their enrollees understood the significance of plan changes, two sponsors in our study mailed supplemental information that showed changes in coverage and costs for the specific drugs the enrollee took in the past year. Although CMS used in-house experts when developing its model ANOC, the agency did not formally assess whether its 2008 model materials incorporated commonly recognized communication guidelines to effectively inform beneficiaries about plan changes. CMS officials recently reported that on October 1, 2008, they initiated an evaluation of their annual Medicare beneficiary materials—particularly the combined ANOC-EOC for the 2010 AEP—to assess their reading level, effectiveness, and length, among other factors. Such an evaluation is particularly important in light of changes that CMS made for the 2009 AEP, which raised further concerns among stakeholders. However, it is unclear whether the evaluation will consider potential benefits of alternative formats for communicating plan changes to beneficiaries.
Five sponsors and other stakeholders in our study noted that CMS’s model materials for the 2008 AEP were not sufficiently concise or beneficiary friendly and felt that the language in the model ANOC was at a reading level too high for some beneficiaries. One sponsor—concerned about the reading level of the documents and the complexity of the language used—noted that the model ANOC contained some sentences with more than 40 words. According to another sponsor, the inclusion of excess information that did not contribute to understanding the benefit made it more likely that beneficiaries would become overwhelmed and less likely to find the information they need. Another sponsor stated that beneficiaries generally do not read the ANOC because it is a confusing document that lacks more basic information on plan changes and simpler language. Additionally, one study sponsor pointed out that the CMS model materials were not reader friendly or concise; it preferred to create its own simplified materials that use plain language and easy-to-read graphics and layouts. Similarly, stakeholders that assisted individuals during the AEP told us that some beneficiaries found the ANOC overwhelming and had difficulty processing the large amounts of information provided. Health care researchers, advocates, and SHIP counselors we spoke with concluded that much of the information contained in the model ANOC was too general in nature or irrelevant to the reader, making it hard for beneficiaries to determine how changes would affect them personally. To help ensure that enrollees understood their plan changes, two sponsors in our study provided additional information to beneficiaries that showed how coverage of their specific drugs would change in the next benefit year. For the 2008 AEP, these sponsors reported supplementing their use of the CMS model ANOC with additional personalized mailings distributed prior to and during the AEP.
Using their pharmacy claims database, these sponsors reported producing documents that indicated whether the particular medications each beneficiary used in 2007 would continue to be covered in 2008. The documents also showed any changes in cost sharing and utilization management restrictions to be applied to these drugs in the new benefit year. These personalized mailings went to approximately 3.6 million of the 7.3 million PDP enrollees served by our study sponsors. Figure 1 shows a hypothetical example that we created of a sponsor’s mailing containing personalized information on changes for the next benefit year. One of the two sponsors reported including personalized drug information as a separate communication to beneficiaries shortly after the ANOC was sent. This sponsor sent these personalized mailings only to those beneficiaries whose drugs would undergo coverage changes for the upcoming year. Sponsor staff told us they created personalized mailings because they made substantial modifications to their formularies for 2008 and they wanted to make certain that beneficiaries were aware of these changes. Similarly, the other sponsor reported that it wanted to ensure that beneficiaries would not be surprised by drug benefit changes effective in January and thus felt that it was important to provide more information than required by CMS. This sponsor inserted personalized information on upcoming plan changes in its monthly Explanation of Benefits (EOB) sent to beneficiaries in October, November, and December 2007. The sponsor reported that it sought to communicate information in a succinct format to enable beneficiaries to focus only on the plan changes relevant to them, individually, without having to examine the longer plan formulary document. Additionally, this sponsor had conducted focus groups to identify the specific information elements, such as individual drug coverage and cost changes, beneficiaries look for when reviewing their plan notices. 
Sponsor staff noted that they chose to supplement the EOB with this information because industry research indicated that readership of the EOB is high—90 to 95 percent. Agency officials told us that they had not systematically evaluated the effectiveness of the agency’s ANOCs or incorporated beneficiary feedback or the principles of plain language. Instead, in developing the 2008 model, CMS officials said the agency relied on in-house expertise gained through the development of other Medicare materials. They reported making slight wording modifications to the draft 2008 ANOC based on comments from the agency’s Office of External Affairs, but said that plain language review was not part of the document clearance process. CMS officials said they recognize the need to incorporate feedback from beneficiaries and increase the use of commonly recognized elements of plain language in their documents. However, they cited a lack of adequate in-house resources to conduct consumer testing. Additionally, a CMS official cited an insufficient number of staff to meet additional requests for plain language review. After the release of standardized materials for the 2009 AEP, a CMS official reported that the agency had awarded a contract for an evaluation of its annual Medicare beneficiary materials—particularly the combined ANOC-EOC—which CMS required sponsors to send to beneficiaries in a single mailing for the 2009 AEP. The official also noted that this assessment began on October 1, 2008, and is to be completed by February 2009. According to the contract and CMS officials, this evaluation will redesign and standardize beneficiary materials including the ANOC-EOC to provide more understandable information to beneficiaries. CMS officials indicated that the redesigned ANOC-EOC should be completed for the 2010 AEP. This effort by CMS is particularly important in light of changes that CMS has made for the 2009 AEP, which raised further concerns among stakeholders. 
One such change requires that sponsors use only a standardized ANOC, rather than giving sponsors the option of creating their own. In adopting this standardized document, sponsors may insert plan-specific information where appropriate, but cannot otherwise modify the language. According to CMS, this standardization is consistent with the materials for the Federal Employees Health Benefits Program (FEHBP). Additionally, the agency expects the required use of a standardized ANOC to reduce misinformation on plan changes because this document was more likely to contain errors than other documents submitted to CMS for review. CMS also expects this change to speed its review of materials by eliminating the 45-day review of nonmodel materials. When sponsors use CMS standardized documents without modification, the documents are available for use 5 days after submission to CMS. While one sponsor and two beneficiary advocates supported greater standardization, six of the eight sponsors expressed concerns that the 2009 ANOC requirements will render notices to beneficiaries less effective in communicating plan changes. In formal written comments to CMS and in interviews on the proposed changes for the 2009 AEP, some study sponsors noted that the mandatory use of CMS’s standardized documents requires the use of language already considered to be at a reading level that is too high for some beneficiaries. Additionally, two sponsors told us that, in the past, they have conducted focus groups with their enrollees on the clarity of their materials. They pointed out that the requirement to use only standardized materials prevents them from developing an ANOC that incorporates enrollees’ feedback and provides information in a way that they believe would enhance beneficiaries’ comprehension of plan changes. Some of these concerns may be addressed in CMS’s forthcoming evaluation of the combined ANOC-EOC.
The contract includes the development of beneficiary materials that are at an appropriate reading level and that incorporate plain language principles. Additionally, a CMS official said that the evaluation will involve in-depth interviews with consumers in an effort to develop more effective beneficiary materials for PDP enrollees. The interviews will include special populations such as dual eligibles. Another change CMS implemented for the 2009 AEP was a requirement that the EOC be sent to beneficiaries with the ANOC in a single mailing. For the 2008 AEP, sponsors had the option to mail a short Summary of Benefits (SB) with the ANOC in October and follow this with the more detailed EOC in January. For the 2009 AEP, CMS requires all sponsors to mail to beneficiaries the ANOC and EOC together for receipt by October 31; the SB will be available to beneficiaries only upon request. According to CMS, this change gives beneficiaries comprehensive information on their current plan in advance of the AEP. Furthermore, CMS considers the EOC a resource document that beneficiaries will keep and refer to as appropriate. Sponsors, advocates, and SHIPs expressed concern that sending the ANOC and EOC as one mailing results in a lengthy document that could confuse some beneficiaries and deter others from reading the materials. Because CMS’s new set of required materials replaces the abbreviated SB (approximately 4 pages in length) with the more detailed EOC (which could be more than 100 pages in length), the size of sponsors’ mailings for the AEP could grow from about 51 pages in 2008 to about 86 pages in the 2009 AEP. One sponsor noted that beneficiaries want information about changes that affect them significantly, such as changes to their formulary or cost-sharing responsibility. Another sponsor cited the importance of highlighting changes to step therapy and prior authorization for beneficiaries.
These sponsors expressed concern that the long combined ANOC-EOC may not be useful for beneficiaries in understanding such changes. According to the contract and CMS officials, the evaluation of beneficiary materials, including the ANOC-EOC, requires the development of shorter documents. However, it is unclear whether the evaluation will include an examination of alternatives such as the use of specific information tailored to the individual—an approach favored by communications researchers. It is also unclear whether the evaluation will include an assessment of the decision to combine the ANOC and EOC mailings despite concerns expressed in comments to CMS on the draft 2009 call letter. Despite improved AEP enrollment procedures, one in seven of the approximately 1 million beneficiaries choosing to switch PDPs was not fully enrolled in a new plan by January 1. CMS and sponsors made modifications to the enrollment process that resulted in a median processing time of 5 days for applications submitted throughout the 2008 AEP. However, the statutorily mandated AEP schedule—November 15 to December 31, with coverage effective January 1—lacks sufficient time in which to fully process enrollment applications. As a result, stakeholders reported inaccurate charges, additional administrative burden, and inconveniences for beneficiaries following the 2008 AEP. CMS officials and sponsors in our study agreed that creating an interval for enrollment processing between the end of the AEP and the effective date of new coverage would reduce the risk of these challenges. For the 2008 AEP, CMS and sponsors implemented changes to expedite enrollment processing in order to avoid difficulties encountered the previous year.
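The timing problem can be seen with simple date arithmetic. The sketch below is illustrative only: it applies the 5-day median processing time reported above to an application received on the last day of the AEP; actual processing times varied by week and by sponsor, and this is not a model of CMS's actual workflow.

```python
from datetime import date, timedelta

AEP_END = date(2007, 12, 31)           # last day applications are accepted
COVERAGE_EFFECTIVE = date(2008, 1, 1)  # new coverage begins the next day
MEDIAN_PROCESSING_DAYS = 5             # overall median reported for the 2008 AEP

def estimated_completion(received: date, processing_days: int) -> date:
    """Estimate when enrollment processing finishes for an application."""
    return received + timedelta(days=processing_days)

# Even at the overall median, an application received on December 31
# cannot be fully processed before coverage takes effect on January 1.
done = estimated_completion(AEP_END, MEDIAN_PROCESSING_DAYS)
print(done)                       # 2008-01-05
print(done > COVERAGE_EFFECTIVE)  # True
```

With no interval between the end of the AEP and the coverage date, any nonzero processing time pushes late applications past January 1.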
To be fully enrolled in a new plan, a beneficiary must submit a complete application, eligibility must be verified by sponsors and CMS, billing codes must be assigned and disseminated, adjustments must be made to premium amounts and any relevant subsidies, and the beneficiary must receive documentation of new coverage, in the form of either a confirmation letter or a new membership card. The complete processing of an enrollment application requires data exchanges among not only CMS, sponsors, and pharmacies, but also the Social Security Administration (SSA), state Medicaid programs for certain beneficiaries, and various CMS contractors. The multiple data exchanges among partners that are necessary for processing a beneficiary’s change in plans occur sequentially over a period of time. A previous study by GAO on Part D complaints and grievances found that 63 percent of the complaints received by CMS over an 18-month period cited problems related to processing beneficiary enrollments and disenrollments. We noted that CMS received numerous complaints about late or missing membership cards, incorrect enrollments and disenrollments, inaccurate premiums, involuntary switching, and problems regarding cost-sharing amounts. To address concerns, CMS continually works with its partners through work groups, task forces, and coalitions to improve program quality and Part D processes. The changes CMS implemented for the 2008 AEP reduced the time needed for processing enrollments by at least 14 days, compared to the previous year.
CMS achieved this reduction by requiring sponsors to: transmit new enrollment information to CMS “as early as possible” or, at most, within 7 days instead of the 14 days permitted during the 2007 AEP; contact beneficiaries regarding missing information within 10 days of receipt versus 21 days as previously required; and provide beneficiaries with information on their enrollment in a new plan from CMS’s earliest notification, which is likely to be the weekly report, rather than waiting for CMS’s monthly report as previously permitted. In addition, CMS improved the process by requiring sponsors to assign billing codes to new enrollments earlier in the process. Pharmacies need these codes in order to identify and charge the appropriate plan and collect the correct copayment amounts from beneficiaries. Previously, sponsors assigned this information at various times and in separate transactions that sometimes followed CMS’s confirmation of a beneficiary’s Part D eligibility by as much as 14 days, delaying availability of billing codes at the pharmacy. In contrast, for the 2008 AEP, the agency required sponsors to submit the billing codes for each beneficiary simultaneously with the initial enrollment transaction. Since 2006, CMS has required sponsors to include this information on the acknowledgement and confirmation letters sent to beneficiaries informing them of the effective date of their new coverage. These letters could be used by pharmacists to verify coverage prior to the beneficiaries’ receipt of their new plan membership cards. Streamlining the assignment of the billing codes significantly decreased the time to complete enrollment processing and increased overall program efficiency. Four sponsors in our study also took steps to complete AEP enrollments and transmit the billing codes to their claims systems prior to receiving the weekly report from CMS confirming eligibility. 
Two sponsors went further to expedite enrollments by establishing processes to independently obtain information needed to complete an application or verify enrollees’ eligibility for Part D. Because these sponsors did not wait for CMS to confirm eligibility via its weekly or monthly reports, they were able to provide pharmacies with information regarding a beneficiary’s new coverage more quickly. Additional time-saving modifications to the enrollment process identified by CMS officials and sponsors in our study included: use of customer service representatives to complete telephone enrollments, conduct the preliminary eligibility check required by CMS, and ask the caller pertinent questions; more detailed beneficiary information provided by CMS to sponsors; more detailed information provided to pharmacies by CMS and sponsors during the query used to obtain billing information for beneficiaries without membership cards or other documentation of coverage; increased staffing during December to manage the volume of late-month enrollments; and automation of certain tasks to reduce processing time. Despite these improvements, we found that 15 percent of the approximately 1 million beneficiaries choosing to switch plans during the 2008 AEP were at risk of not having access to their new coverage on January 1, 2008. Reasons for these delayed enrollments included the volume of late-December applications combined with the processing time required to fully enroll a beneficiary in a new plan, and the AEP schedule that accepts enrollments through December 31 with coverage effective the next day. To prevent problems in January resulting from these delays, in 2007 CMS sponsored a media campaign and distributed guidance to its partners to encourage beneficiaries to submit their 2008 AEP applications by December 7, 2007. Despite this effort, CMS weekly report data revealed that 50 percent of the applications to change PDPs were received after December 10.
Furthermore, sponsors received 5 percent of enrollment applications for beneficiaries during the last 2 days of the AEP. Figure 2 shows the distribution of enrollment applications received during the 2008 AEP. CMS data also show that the time needed to fully process an enrollment application varied over the course of the AEP. Overall, the median processing time was 5 days. However, in week 1 of the AEP the median processing time was 20 days, and in week 5 the median was 3 days. Of the applications received after December 15—which accounted for nearly one-third of all 2008 AEP applications—40 percent were not processed until after January 1. On average, applications received in late December took 3 days longer to process than applications received earlier in the month. Not until January 11—2 weeks into the new coverage year—did CMS and its partners complete the processing of 97 percent of the applications of beneficiaries choosing to switch PDPs during the 2008 AEP. Figure 3 shows when enrollment processing for the 2008 AEP was completed. The capacity to process late-December enrollments also varied across sponsors. We found that larger sponsors were quicker in processing enrollments received after December 15. Thus, sponsors with more than 3 million PDP beneficiaries were able to process 81 percent of late-December applications prior to the effective date of new coverage. In contrast, sponsors with fewer than 100,000 PDP enrollees were able to process 48 percent of applications before the new year. Regardless of sponsor size, the AEP schedule provided insufficient time in which to fully process all enrollment applications. The inability of CMS and sponsors to complete all the steps in the enrollment process prior to the effective date of new coverage for beneficiaries choosing to switch plans created a number of challenges. One consequence is the heightened risk of inaccurate charges or payment amounts for beneficiaries, pharmacies, and sponsors.
Following the 2008 AEP, beneficiary advocates, SHIPs, and pharmacists reported that some individuals were charged the wrong copayment or deductible, especially those who had applied for a low-income subsidy—which must be authorized by SSA—to reduce their cost-sharing levels. Two pharmacy associations reported that if new coverage could not be verified, their members risked filling a prescription for which they would not be reimbursed, or the beneficiary would need to pay for a temporary supply of the medication while coverage issues were resolved. Pharmacy association representatives also reported delayed receipt of payment from sponsors related to prescriptions filled when a beneficiary’s enrollment status changed. In addition, stakeholders told us about significant payment inaccuracies that took time to resolve. Additional administrative burden was another consequence of the incomplete processing of all AEP enrollments prior to January 1, according to several stakeholders we interviewed. Until beneficiaries switching plans received documentation of their new coverage from the sponsor on a membership card or letter, pharmacies had to use alternative means to locate the updated enrollment and billing information in order to fill a prescription and submit a claim correctly. CMS guidance requires pharmacists to complete electronic inquiries and, if necessary, phone calls to try to identify the beneficiary’s correct enrollment status and billing codes if that information is not in the claims system. Pharmacy associations reported that this procedure was overly complex and required extensive staff time. Sponsors and pharmacy associations said that claims filed under a beneficiary’s old plan had to be reversed retroactively and charged to the correct plan, requiring additional time and resources.
Stakeholders we interviewed noted that beneficiaries lack a single point of contact to resolve enrollment issues promptly and might need to follow up with multiple sources including their plan, CMS, SSA, and the prescribing physician. CMS, sponsors, pharmacy associations, SHIPs, and beneficiary advocates recognize that the current AEP schedule—November 15 through December 31—is problematic. Some stakeholders we interviewed in our study said that creating an interval for enrollment processing between the end of the AEP and the effective date of coverage would help ensure that coverage for a beneficiary switching plans would be in place on January 1. Additional time for enrollment processing would also help beneficiaries receive their new membership information prior to the effective date of coverage. In addition, we recently reported that such an interval may address some challenges that result from premium withholdings from Medicare beneficiaries’ Social Security checks. Requiring a “quiet period” is standard practice in private health insurance and other federal health programs, allowing sponsors time to process applications and provide enrollees appropriate information about their new coverage prior to its effective date. For example, in Medicare Part B, open enrollment is followed by a 3-month processing interval that extends from March 31 to July 1. Similarly, in the FEHBP, enrollment is followed by an approximately 3-week processing interval that extends from the second Monday in December to January 1. More than half the sponsors interviewed supported the creation of an enrollment processing interval. CMS officials and two sponsors recommended ending enrollment in mid-December, while two other sponsors suggested ending the AEP on November 30. Stakeholders also noted that some of the difficulties associated with an AEP schedule that includes the end-of-the-year holidays could be avoided with an earlier end date.
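The length of an FEHBP-style processing interval can be worked out with a short calendar calculation. The sketch below is illustrative only (it is not drawn from the report's methodology): it finds the second Monday in December and measures the interval to January 1.

```python
from datetime import date, timedelta

def second_monday_of_december(year: int) -> date:
    """Return the second Monday in December of the given year."""
    first = date(year, 12, 1)
    days_to_monday = (7 - first.weekday()) % 7  # weekday(): Monday == 0
    return first + timedelta(days=days_to_monday + 7)

# FEHBP-style interval: second Monday in December through January 1.
start = second_monday_of_december(2007)
interval = date(2008, 1, 1) - start
print(start, interval.days)  # 2007-12-10 22  (about 3 weeks)
```

For 2007 this yields an interval of roughly 3 weeks, consistent with the FEHBP example above; the Medicare Part B interval (March 31 to July 1) is about 3 months.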
Several sponsors recommended an earlier start date to address concerns they have regarding CMS’s guidance prohibiting sponsors from processing applications submitted between the receipt of the ANOC (October 31) and the beginning of the AEP (November 15). Although CMS officials told us they expected beneficiaries to use this time to become informed about their choices, sponsors reported that this waiting period was inconvenient for those beneficiaries who were prepared to make an enrollment change. Similarly, SHIP counselors pointed out that they too must wait until November 15 to forward beneficiaries’ completed applications. SHIP counselors in one state reported that the need to double-check enrollment applications completed prior to November 15 limited the number of Part D beneficiaries they were able to assist during the AEP. However, stakeholders noted that starting the AEP earlier would have implications for the preceding AEP deadlines. Effective written communication about plan changes helps beneficiaries determine whether their current PDP will continue to meet their needs and may reduce the risk of surprises at the pharmacy when beneficiaries fill their prescriptions in the new benefit year. From a program perspective, beneficiaries must be sufficiently aware of plan changes in order to fully use their ability to switch plans to foster the competition that Congress intended in designing the Part D program. To this end, CMS designed a model ANOC to provide a consistent format for the information sent to beneficiaries about upcoming plan changes. Sponsors as well as advocates have voiced concerns that the model lacked the attributes—particularly simplicity and personalization—that researchers say are needed for beneficiaries to understand and act on the information provided. Although CMS told us that it recently initiated an evaluation of its annual notification materials, it is unclear whether alternative formats for the ANOC-EOC will be considered.
Based on our findings as well as research on the need to make information relevant to the reader, CMS’s evaluation should consider alternative models that incorporate a beneficiary’s personal drug information. Such an alternative may be more effective in highlighting key changes in drug costs and coverage for plan enrollees. Two sponsors in our study supplemented the 2008 ANOC by mailing additional information on specific drug coverage and cost changes to nearly 3.6 million enrollees, thus demonstrating the feasibility of providing such personalized information to their members. Although CMS and sponsors implemented improvements to better manage the 2008 AEP enrollment process, their efforts are hampered by a schedule that lacks a sufficient interval in which to complete processing beneficiary enrollment changes prior to the effective date of new coverage. Under the current mandated schedule, it is not possible to guarantee that beneficiaries choosing to switch plans late in December are fully enrolled in their new plans, with pharmacists having sufficient evidence of the new coverage, by January 1. In addition, sponsors and pharmacies reported excessive administrative complexity and diminished program efficiency as a result of the nearly 15 percent of enrollments still in process in January. Stakeholders agree that modifying the schedule and creating an interval between the end of the AEP and the effective date of coverage would minimize these challenges as well as mitigate issues related to low-income subsidies and premium withholding from beneficiaries’ Social Security checks. Establishing a processing interval would be consistent with the open enrollment periods in Medicare Part B, the FEHBP, and commercial insurance and would create a more streamlined program to better serve beneficiaries.
To improve the Part D enrollment process, Congress should consider authorizing the Secretary of HHS to amend the current AEP schedule to include a sufficient processing interval to fully enroll beneficiaries prior to the effective date of their new coverage. To ensure that beneficiaries are informed effectively of plan changes, we recommend that the Acting Administrator of CMS strengthen the agency’s evaluation of the ANOC-EOC by reviewing alternative formats that include personalized drug coverage and cost information. We provided CMS with a draft of this report for its review and comment. The agency provided written comments, which have been reprinted in appendix I. It also provided technical comments that we incorporated as appropriate. CMS concurred with our recommendation and agreed that beneficiaries need plan benefit information that is easy to read and understand. The agency noted that its process for developing the model ANOC and the EOC included a review of other programs’ model materials such as those used in the FEHBP, as well as restructuring the model to eliminate duplication and increase readability. CMS pointed out that, in preparation for the 2010 AEP, it has engaged a contractor to evaluate and improve the required notification materials sent to beneficiaries. CMS expects this evaluation to further address issues of length and readability. The contractor will obtain input from beneficiaries as part of its effort to redesign the ANOC for the 2010 AEP. Citing the near-perfect timeliness rate for plans’ 2008 AEP ANOC mailing, CMS asserted that beneficiaries had the information they needed to make informed decisions about their plan options. However, we remain concerned that despite this timeliness, the volume and complexity of documents exceeding 100 pages continue to pose challenges for some beneficiaries.
While CMS’s effort to evaluate the ANOC-EOC is an important, worthwhile step, such a review would benefit from a focus on streamlining, rather than maximizing, the amount of information that beneficiaries receive. Beyond the current evaluation, CMS should continue its efforts to improve the understandability of its AEP materials by testing more concise formats with beneficiaries. As we note in this report, a single-page, customized model that shows each beneficiary’s drug use and any upcoming changes in coverage and costs may more clearly communicate the essential information Part D beneficiaries need to be adequately informed. CMS acknowledged the challenges inherent in the current AEP schedule. The agency reiterated its strategies for ensuring that beneficiaries are able to access their new plan benefits while their enrollment is still being processed. For example, CMS highlighted its consistent efforts to encourage beneficiaries to submit their enrollment applications by early December. However, as we discuss in this report, one consequence of this approach is a reduction in the amount of time beneficiaries have to consider and enroll in an alternative plan that could better meet their needs. We are sending copies of this report to the Administrator of CMS, committees, and others. The report also is available at no charge on the GAO Web site at http://www.gao.gov/. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. In addition to the contact named above, Rosamond Katz, Assistant Director; Jennie Apter; Ramsey Asaly; Anne Hopewell; JoAnn Martinez-Shriver; Jessica Smith; and Hemi Tewarson made major contributions to this report.
Medicare Part D: Complaint Rates Are Declining, but Operational and Oversight Challenges Remain. GAO-08-719. Washington, D.C.: June 27, 2008.
Medicare Part D Low-Income Subsidy: SSA Continues to Approve Applicants, but Millions of Individuals Have Not Yet Applied. GAO-08-812T. Washington, D.C.: May 22, 2008.
Medicare Part D: Plan Sponsors’ Processing and CMS Monitoring of Drug Coverage Requests Could Be Improved. GAO-08-47. Washington, D.C.: January 22, 2008.
Medicare Part D Low-Income Subsidy: Additional Efforts Would Help Social Security Improve Outreach and Measure Program Effects. GAO-07-555. Washington, D.C.: May 31, 2007.
Retiree Health Benefits: Majority of Sponsors Continued to Offer Prescription Drug Coverage and Chose the Retiree Drug Subsidy. GAO-07-572. Washington, D.C.: May 31, 2007.
Medicare Part D: Challenges in Enrolling New Dual-Eligible Beneficiaries. GAO-07-272. Washington, D.C.: May 4, 2007.
Medicare Part D: Prescription Drug Plan Sponsor Call Center Responses Were Prompt, but Not Consistently Accurate and Complete. GAO-06-710. Washington, D.C.: June 30, 2006.

In Medicare Part D, enrollees in stand-alone prescription drug plans (PDPs) are allowed to switch plans during an annual coordinated election period (AEP) set under law from November 15 to December 31, with new coverage effective January 1. The Centers for Medicare & Medicaid Services (CMS) required that plan sponsors send an Annual Notice of Change (ANOC)—using either its model or a nonmodel format—before the 2008 AEP. Among other things, GAO examined: (1) stakeholders’ views of the model ANOC and CMS’s efforts to assure its effectiveness, and (2) how the scheduling of the AEP affects the enrollment process for beneficiaries switching PDPs. Among the largest PDP sponsors, we selected eight to interview along with other stakeholders involved in the AEP. We also obtained and analyzed data from CMS.
Sponsors, pharmacists, beneficiary advocates, and counselors GAO interviewed expressed concern that CMS's model ANOC for the 2008 AEP did not effectively communicate drug plan changes to enrollees. They noted that it contained language at a reading level too high for some beneficiaries as well as too much, often irrelevant, information. To help ensure their enrollees understood how plan changes would affect them personally, two study sponsors mailed additional information detailing specific changes in coverage and costs for drugs the beneficiary took in the past year. Despite GAO's previous recommendation that CMS ensure that its Part D materials meet communications guidelines, CMS's process for developing its model ANOC did not include a systematic evaluation of its effectiveness. However, CMS officials reported that they recently initiated an evaluation of their annual Medicare beneficiary materials for the 2010 AEP that will examine reading levels, effectiveness, and length, among other factors. Such an evaluation is important in light of changes CMS has made for the 2009 AEP, which have raised further concerns among stakeholders. It is unclear whether alternative formats for communicating plan changes to beneficiaries will be considered. Although CMS and plan sponsors made improvements to the enrollment process, CMS data showed that about 15 percent of beneficiaries who chose to switch plans in the 2008 AEP were not fully enrolled in their new plan by January 1. Modifications to the enrollment process for the 2008 AEP reduced the time needed to enroll beneficiaries in a new plan to a median of 5 days. However, the volume of applications submitted late in the AEP contributed to beneficiaries being at risk of not having access to their new coverage by January 1. In fact, among the beneficiaries who submitted applications after December 15, 40 percent were not completely processed until after the effective date of their new coverage. 
As a result, stakeholders reported that beneficiaries, pharmacies, and sponsors faced various operational challenges, including the risk of inaccurate charges and additional administrative burden. Some stakeholders we interviewed for our study said that creating an interval for enrollment processing between the end of the AEP and the effective date of coverage would help ensure that beneficiaries switching plans would have their coverage in place on January 1. |
A refugee is generally defined as a person who is outside his or her country and who is unable or unwilling to return because of persecution or a well-founded fear of persecution on account of race, religion, nationality, membership in a particular social group, or political opinion. The Refugee Act of 1980, which amended the Immigration and Nationality Act, provided a systematic and permanent procedure for admitting refugees to the United States and maintains comprehensive and uniform provisions to resettle refugees as quickly as possible and to encourage them to become self-sufficient. Several federal, state, and local government agencies coordinate with private organizations to implement the admission and resettlement process. Each year the President, after appropriate consultation with the Congress and certain Cabinet members, determines the maximum number of refugees the United States may admit for resettlement in a given year. The number actually resettled is typically below this maximum number and has varied over time—sometimes due to security concerns (see fig. 1). The federal government gives private, voluntary agencies responsibility to determine where refugees will live in the United States, with approval from the Department of State. Refugees are assigned first to a national voluntary agency and then the voluntary agency decides where the refugee will live. More specifically, the nine national voluntary agencies, which maintain a network of about 350 affiliates in communities throughout much of the United States, meet weekly to allocate individual refugees based on an annual evaluation of the communities’ capacity to serve refugees. See figure 2 for the number of refugees that arrived in each state during fiscal year 2011. Appendix III provides additional detail about the countries of origin for arrivals to the 20 states with the largest refugee populations.
In the last 10 years, refugees have come to the United States from an increasing number of countries, and the issues associated with these diverse populations have become more complex. For example, many refugees today arrive after having lived in refugee camps for years, and may have little formal education or work experience, or untreated medical or mental health conditions. In turn, receiving communities have needed to adjust their language capabilities and services in order to respond to the changing needs of these diverse refugee populations. Figure 3 shows the top 20 countries of origin for refugees arriving in the United States in fiscal year 2011. Three federal agencies are involved in the refugee resettlement process. The Department of Homeland Security (DHS) approves refugees for admission to the United States. State’s Bureau of Population, Refugees, and Migration (PRM) is responsible for processing refugees overseas. Once refugees are processed and arrive in the United States, PRM partially funds services to meet their immediate needs. PRM enters into cooperative agreements with national voluntary agencies under its Reception and Placement Program to provide funding that helps refugees settle into their respective communities during their initial 30 to 90 days and covers housing, food, clothing, and other necessities. Each local affiliate receives $1,850 per refugee to provide these services. Figure 4 illustrates the general path of refugee resettlement in the United States. Many refugees are then eligible to receive temporary resettlement assistance from the Office of Refugee Resettlement (ORR), located within HHS. In most states, ORR funds cash and medical assistance as well as social services to help refugees become economically self-sufficient. 
ORR provides these funds through grants to state refugee coordinators, who may be employed by a state agency or by a nonprofit organization depending on how a state’s program is set up. These grants provide funding for employment and other support services. States also receive funding from ORR to award discretionary grants—including school impact, services to older refugees, and targeted assistance grants—to communities that are particularly affected by large numbers of refugees or to serve specific refugee populations such as the elderly. See table 1 for a list of selected refugee assistance programs. Voluntary agencies consider a variety of factors when they propose the number of refugees to be resettled in each community (see table 2). Before preparing their annual proposals for PRM’s Reception and Placement Program for approval, national voluntary agencies ask local voluntary agency affiliates to assess their own capacity and that of other service providers in the wider community and propose the number of refugees that they will be able to resettle that year. In making these assessments, local voluntary agency affiliates typically consider both their own internal capacity and the capacity of the community, with different levels of emphasis on one or the other. For example, when determining how many refugees their community can accommodate, local affiliates in one community told us that they primarily consider their internal capacity—such as staffing levels, staff skills, long-term funding needs, the number of refugees they have served in the past, and success in meeting refugee employment goals in the previous year. Local affiliates in another community explained that they primarily consider community-based factors, such as housing availability and employment opportunities. 
To help make this process more consistent, Refugee Council USA, a coalition of the nine national voluntary agencies, developed guidance and a list of factors that local affiliates could use when evaluating community capacity. However, national voluntary agencies do not require their local affiliates to use the guidance. Moreover, national voluntary agencies may adjust the numbers proposed by local affiliates. Because refugees are generally placed in communities where national voluntary agency affiliates have been successful in resettling refugees, the same communities are often asked to absorb refugees year after year. One state refugee coordinator noted that local affiliate funding is based on the number of refugees they serve, so affiliates have an incentive to maintain or increase the number of refugees they resettle each year rather than allowing the number to decrease. Even though they are required to coordinate and consult with state and local governments about their resettlement activities, voluntary agencies have received only limited guidance from PRM on how to obtain input from these and other community stakeholders when assessing communities’ capacity. The Immigration and Nationality Act, as amended, states that it is the intent of Congress that local voluntary agency activities should be conducted in close cooperation and advance consultation with state and local governments, and the cooperative agreements that the Department of State enters into with national voluntary agencies require the agencies to conduct their reception and placement activities in this manner. Driven by concerns that voluntary agencies were not consulting sufficiently with state and local stakeholders when developing their proposals, PRM directed local voluntary agencies to do more to document consultations with state and local stakeholders regarding the communities’ capacity to serve refugees. 
However, PRM’s guidance on consultation with state and local governments does not provide detailed information regarding the agency’s expectations for the content of these discussions. While the guidance provides some examples of state and local stakeholders that the voluntary agencies could potentially consult, it does not state which stakeholders must be consulted. PRM officials said that they allow local voluntary agencies to decide whom to consult because the voluntary agencies know their communities best and because local circumstances vary. Most local voluntary agencies we visited have not taken steps to ensure that other relevant service providers are afforded the opportunity to provide input on the number and types of refugees that can be served. As a result, many local service providers experienced challenges in properly serving refugees. Most of the local voluntary agencies told us they generally consult with private stakeholders such as apartment landlords or potential employers prior to resettling refugees in an area. They also stated that they consult with some public entities, such as state refugee coordinators; however, most public entities such as public schools and health departments generally said that voluntary agencies notified them of the number of refugees expected to arrive in the coming year, but did not consult them regarding the number of refugees they could serve before proposals were submitted to PRM. Moreover, service providers in one community noted that because the local voluntary agencies did not consult them on the numbers and ethnicities of refugees they were planning to resettle, there were no interpreters or residents that spoke the language of some of the refugees who were resettled there, even though the providers could have served refugees that spoke other languages. 
Voluntary agencies may not consult with relevant stakeholders if they perceive them to be unaware of the resettlement process or if they believe that refugees do not use certain services. For example, local voluntary agency staff in one community said they did not consult with certain stakeholders because they believed those stakeholders were not well informed about the resettlement process and might unnecessarily object to the proposed number of refugees to be resettled. In fact, one local, elected official we spoke to was unaware that refugees were living in the community. Other elected officials noted that it was difficult to tell if or when refugees accessed services, even though school and health department officials in those same communities had frequent interactions with refugees and wanted opportunities to provide input. Although they bear much of the responsibility for providing services to refugees, some of the health care providers and schools that had not been consulted on, or even notified of, the number of refugees that were to be resettled sometimes felt unprepared to do so. For example, health care providers in two communities told us that they were not notified in advance that refugees would be arriving in their communities, and thus, had no time to set up screening procedures. They were also unaware of the specific needs and health challenges of the communities they were serving. In addition, in some instances when voluntary agencies were unable to adequately prepare the community as a whole for the new arrivals and provide refugees with the services they needed, some community members expressed opposition toward the refugees. For example, in Fort Wayne, Indiana, a few case studies show that the community, which had been receiving fewer than 500 refugees per year prior to 2007, experienced a rapid increase that more than tripled the number of refugees resettled there. 
The community, in turn, was forced on short notice to obtain new sources of funding and establish a new infrastructure in order to serve their new arrivals. This unplanned increase in refugees, combined with a growing unemployment rate, engendered frustrations that resulted in backlash from the community. Moreover, a number of other factors, including the high frequency of communicable diseases among certain populations, unmet needs for mental health services, overcrowding in homes, and cultural practices caused existing residents to become concerned or even hostile. Similarly, officials in Clarkston, Georgia, another community that was not initially consulted regarding the resettlement of thousands of refugees beginning in 1996, described the flight of long-time residents from the town in response to refugee resettlement and the perceived deterioration of the quality of schools. In a few of the communities we visited, after reaching a crisis point due to the influx of refugees, stakeholders took the initiative to develop formal processes for providing input to the local voluntary agencies on the number of refugees they could serve. For example, an influx of refugees in Fargo, North Dakota, in the 1990s overwhelmed local service providers. In response, those service providers and the local voluntary agency formed a Refugee Advisory Committee to provide a formal, community-based structure for finding solutions to challenges in resettling refugees. The committee includes representatives from the local voluntary agency, state and county social services departments, various city departments, school districts, as well as local health care providers, nonprofit organizations, and the assistant state refugee coordinator. The local voluntary agency solicits input from the committee annually on the number of refugees the community has the capacity to serve in the coming year and also meets quarterly to address other issues such as the needs of service providers. 
Committee members told us that the number of new refugees arriving in Fargo declined after the committee was developed. Committee members and voluntary agency officials said that their close communication allows them to better educate the community and better serve the refugees, and both believe the number being resettled is manageable. Similarly, in Boise, Idaho, city officials formed a roundtable group to develop a Refugee Resource Strategic Community Plan in 2009 to work with the local voluntary agencies, the state refugee coordinator’s office, and community organizations to identify strategies for successful resettlement of Boise’s refugees, in light of the most recent economic downturn. The group includes representatives from the state coordinator’s office, local voluntary agencies, various city departments, school district representatives, nonprofit organizations, as well as employers, health care providers, and other community stakeholders. The group meets quarterly to review progress on the objectives outlined in the strategic plan. The local voluntary agencies obtain input from the group members on the community’s capacity for serving refugees, but they do not discuss the specific number of refugees that will be proposed to the national voluntary agency and PRM for resettlement. Roundtable members told us that the local voluntary agencies have worked with their national offices to reduce the proposed number of refugees to resettle in Boise in 2011 based on community capacity. The state of Tennessee has passed legislation that creates formal processes for communication between voluntary agencies and local stakeholders. Specifically, the Refugee Absorptive Capacity Act, which was passed in 2011, requires the state refugee program office to enter into a letter of agreement with each voluntary agency in the state. 
The letter of agreement must contain a requirement that local stakeholders mutually consult and prepare a plan for the initial placement of refugees in a community as well as a plan for ongoing consultation. In addition, the state program office must ensure that local voluntary agencies consult upon request with local governments regarding refugee placement in advance of the refugees’ arrival. Communities can benefit socially and economically from refugee resettlement. In all of the communities we visited, stakeholders said that refugees enriched their cultural diversity. For example, local service providers in Fargo commented that refugees bring new perspectives and customs to a city with predominately Norwegian ancestry. Some city officials and business leaders we spoke with in several communities said that refugees help stimulate economic development by filling critical labor shortages as well as by starting small businesses and creating jobs. For instance, new refugee-owned businesses revitalized a neighborhood in Chicago after other businesses in the area closed. In addition, an official in Washington State told us that diverse resettlement communities with international populations attract investment from overseas businesses. According to ORR officials, refugees also bring economic benefits to communities by renting apartments, patronizing local businesses, and paying taxes, and the presence of refugees may increase the amount of federal funding that a community receives. In Boise, officials commented that the refugee students helped stabilize the public school population, which had been declining before the city established a refugee resettlement program. While refugees can benefit their communities, they can also stretch the resources of local service providers, such as school districts and health care systems. 
In several communities we visited, school district officials said that it takes more resources to serve refugee students than nonrefugee students, because they sometimes lack formal schooling or have experienced trauma, which can require additional supports, such as special training for school staff. In addition, newly arrived refugee students often have limited English proficiency, and hiring interpreters can be costly. Similarly, some health care providers expressed concerns about serving refugees, because they said that they are required to provide interpreter services to patients with limited English proficiency. One provider told us that their clinic spent more than $100,000 on interpreter services in the previous year, costs that were not reimbursed. In addition, in some communities we visited, school district officials and health care providers said that locating interpreters for certain languages can be difficult. ORR and PRM officials noted that these impacts are not unique to refugees and that serving immigrants may pose similar challenges. ORR offers discretionary grants to assist school districts that serve a large number of refugees, but we learned that district officials may be unaware of these grants or may decide that the effort involved in applying for them outweighs the potential benefits. For example, through its school impact grant, ORR funds activities for refugee students such as English as a Second Language instruction and after-school tutorials. However, school district officials in one community that was new to the refugee resettlement program said they had no information about where they could find assistance in serving refugee students. In another community, district officials were aware of the school impact grant, but said they did not apply for it because they found the application process to be burdensome and the funding level would have been insufficient to meet their needs. 
In addition to stretching school district resources, refugee students can also negatively affect district performance outcomes. School district performance is measured primarily by students’ test scores, including the scores of refugee students. School district officials in several communities said that even though refugee students often have limited English proficiency, they are evaluated against the same metrics as their native English-speaking peers, which can result in lower performance outcomes for the district. In one community, officials told us that the district had not demonstrated adequate yearly progress under the state standards in recent years, and they attributed this in part to the test scores of refugee students. Furthermore, refugees who exhaust federal refugee assistance benefits and are not self-sufficient can strain local safety nets. Refugees who are no longer eligible to receive cash and medical assistance from ORR after 8 months but are unemployed—or are working in low-wage jobs that do not provide sufficient income—may seek help from local service providers such as food pantries, organizations providing housing assistance, and even homeless shelters. If service providers are unprepared to serve these refugees in addition to their other clients, it can stretch their budgets and diminish the safety net resources available to others in the community. Table 3 lists the benefits and challenges of refugee resettlement identified by stakeholders in the communities we visited. Migration from one community to another after initial resettlement— referred to as secondary migration—can unexpectedly increase the refugee population in a community, and communities that attract large numbers of secondary migrants may not have adequate, timely funding to provide resettlement services to the migrants who need them. 
According to ORR, refugees relocate for a variety of reasons: better employment opportunities, the pull of an established ethnic community, more welfare benefits, better training opportunities, reunification with relatives, or a more congenial climate. Not all refugees who migrate choose to access resettlement services in their new communities, according to PRM officials. However, for those migrants who need resettlement services, federal funding does not necessarily follow them to their new communities, even though refugees continue to be eligible for some resettlement services for 5 years after arrival. According to ORR officials, refugees who relocate while they are receiving cash assistance, medical assistance, or refugee social services are eligible to continue receiving those services in their new communities for a limited time. However, ORR does not coordinate this continuation of service, and state refugee coordinators must communicate with one another to determine eligibility for each refugee who relocates. In addition, ORR provides grants to communities and states affected by secondary migration, but the annual cycle of these grants may not provide ORR the flexibility to respond in a timely manner. ORR uses secondary migration data submitted by states once a year, among other data, to inform refugee social services funding allocations for future fiscal years. According to ORR officials, these formula grants are awarded annually to states based on the number of refugee arrivals during the previous 2 years. As a result, a year may pass before states experiencing secondary migration receive increased funding. For example, Minnesota reported to ORR that 1,999 refugees migrated into the state during fiscal year 2010, but under ORR’s current formula funding process, the state would not have received increased funding until fiscal year 2011. 
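The funding lag described above can be sketched as a simple fiscal-year calculation. This is an illustrative model only, not ORR's actual allocation logic: the function name and the one-year cycle lag are assumptions standing in for the annual grant cycle (migration data submitted once a year, awards made for future fiscal years).

```python
# Illustrative sketch only -- not ORR's system. Models the lag between a
# state reporting secondary migration and the first fiscal year in which
# the annual formula-grant cycle could reflect that migration.

def first_funding_fy(report_fy: int, cycle_lag: int = 1) -> int:
    """Return the first fiscal year in which annually awarded formula
    grants could reflect migration data reported in `report_fy`.

    `cycle_lag` is an assumed one-year delay implied by the annual
    reporting and award cycle described in the report.
    """
    return report_fy + cycle_lag

# The Minnesota example: migration reported for fiscal year 2010 could
# not increase the state's funding before fiscal year 2011.
print(first_funding_fy(2010))  # 2011
```

The point of the sketch is only that any influx occurring just after a reporting deadline waits a full cycle before the formula can respond, which is the timeliness gap the report describes.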
In another example, social services funding did not keep pace with a large number of arrivals of both newly resettled refugees and secondary migrants in Detroit in fiscal year 2008. According to a report commissioned by ORR, after this rapid influx of arrivals, caseloads rose to 150 clients per caseworker in the employment and training program, and caseworkers were forced to devote a majority of their time to paperwork and case management, which limited their ability to provide job development and training services. Further, ORR will not adjust a state’s level of social services funding to account for secondary migration until it verifies that the refugees migrated to the state. According to one state refugee coordinator, ORR rejects the data states submit if the refugee’s information does not match the information in ORR’s database or if two or more states claim to have served the same refugee. ORR officials said that, while their process allows states to update missing data and correct formatting errors, it does not allow states to resubmit data that does not match the information in ORR’s database or that was submitted by two or more states. ORR offers supplemental, short-term funding to help communities affected by secondary migration. For example, the Supplemental Services for Recently Arrived Refugees grant is designed to help communities provide services to secondary migrants or newly arriving refugees when the communities are not sufficiently prepared in terms of linguistic or culturally appropriate services or do not have sufficient service capacity. However, this grant is only available to communities that will serve a minimum of 100 refugees annually, and the funding is for a fixed period of time. Communities must apply and be approved for the grant, and funding may not arrive until many months after the influx began. 
For example, in a draft report on secondary migration commissioned by ORR, the Spring Institute for Intercultural Learning found that one community did not receive supplemental funding until 14 months after secondary migrant refugees began arriving. Without comprehensive secondary migration data, ORR cannot target supplemental assistance to communities and refugees in a timely way. Currently, the data that PRM and ORR collect on secondary migration are limited and little is known about secondary migration patterns. PRM collects data from local voluntary agencies regarding the number of refugees who move away from a community within the first 90 days after arrival, but does not collect data on the estimated number of refugees who enter the community during the same time period. PRM officials said that they use these out-migration data to assess the success of refugee placement decisions. In contrast, ORR collects secondary migration data annually from each state, but does not collect community-level data. Specifically, ORR collects information on the number of refugees who move into and out of each state every year. However, ORR officials explained that they can only collect these data when secondary migrants access services. As a result, refugees who move into or out of a state but do not use refugee services in their new communities are not counted. Even so, these refugees access other community services and their communities may need additional assistance to meet their needs. Secondary migration can strain local resources significantly. For example, the draft report on secondary migration prepared for ORR by the Spring Institute for Intercultural Learning found that refugees who migrate to new communities can overwhelm local service providers, such as health departments, that are unprepared to serve them. In addition, a report prepared for ORR by General Dynamics Information Technology, Inc. 
found that, in one community, the influx of a large number of secondary migrants who lacked resources led to a homelessness crisis that stressed the capacity of both the shelter system and the other agencies serving refugees. Some communities that face challenges in serving additional refugees have requested restrictions or even temporary moratoriums on refugee resettlement. According to PRM, the cities of Detroit and Fort Wayne, Indiana, requested restrictions on refugee resettlement due to poor economic conditions. In response, PRM limited resettlement in Detroit and Fort Wayne to refugees who already have family there. Similarly, the mayor of Manchester, New Hampshire, asked in 2011 that PRM temporarily stop resettling refugees in the city because of a shortage of jobs and affordable housing. While PRM did not grant the requested moratorium, the agency reduced the number of refugees to be resettled there in fiscal year 2011 from 300 to about 200. PRM officials said that a moratorium on resettlement would not have made sense because nearly all of the refugees slated to be resettled in Manchester have family there and would likely relocate to Manchester eventually, even if they were initially settled in another location. Tennessee recently created a process by which communities could request a temporary moratorium on refugee resettlement for capacity reasons. The state’s Refugee Absorptive Capacity Act allows local governments to submit a request to the state refugee office for a 1-year moratorium on resettling additional refugees if they document that they lack the capacity to do so and if further resettlement would have an adverse impact on residents. The state refugee office may then forward this request to PRM. 
Passed in 2011, the law states that local governments should consider certain capacity factors—the capacity of service providers to meet existing needs of current residents, the availability of affordable housing, the capacity of the school district to meet the needs of refugee students, and the ability of the local economy to absorb new workers—before making such a request. According to PRM, to date, no community in Tennessee has submitted such a request. PRM conducts regular on-site monitoring of national voluntary agencies and about 350 local affiliates to ensure that the voluntary agencies deliver the services outlined in their cooperative agreements. Under the cooperative agreements, local voluntary agencies must provide certain services to refugees in the first 30 to 90 days after they arrive. PRM monitors national voluntary agencies annually and local affiliates once every 5 years, and requires national voluntary agencies to monitor their affiliates at least once every 3 years. During its local affiliate monitoring visits, PRM reviews case files and interviews staff. PRM officials also visit a small sample of refugees in their homes to ensure that the refugees received clean, safe housing and appropriate furniture. PRM also requires voluntary agencies to report certain outcome measures for each refugee they resettle. In recent years, PRM found most local affiliates generally compliant, and for those that were not, PRM made recommendations and required immediate corrective action. For fiscal years 2009 through 2011, according to PRM, it conducted 136 on-site monitoring visits. In over three-quarters of those visits, PRM determined that the local affiliate was compliant or mostly compliant. In about one-quarter of the cases, however, PRM determined that they were partially or mostly noncompliant (about 20 percent) or simply noncompliant (about 5 percent). 
PRM or national resettlement agencies can make return, on-site monitoring trips to assess the progress of affiliates when problems are identified. Furthermore, if the problems persist, national voluntary agencies can close an affiliate’s operation or PRM can decide not to allow placement of refugees at an affiliate. For fiscal year 2011, PRM determined that the most common recommendation made to local affiliates was that the local affiliate should document the reason core services could not be provided in the required time frames. (See table 4 for the top 10 recommendations made for fiscal year 2011.) Whereas PRM’s oversight focuses on services provided, ORR’s oversight focuses more on performance outcomes. In order to assess the performance of its programs that provide cash, medical assistance, and social services to refugees, ORR monitors employment outcomes and cash assistance terminations (see table 5). It uses a similar set of measures for its Matching Grant program. According to ORR, its focus on employment outcomes as a measure of effectiveness is based on the Immigration and Nationality Act, as amended, which requires ORR to help refugees attain economic self- sufficiency as soon as possible. ORR considers refugees self-sufficient if they earn enough income that enables the family to support itself without cash assistance—even if they receive other types of noncash public assistance, such as Supplemental Nutrition Assistance Program benefits or Medicaid. ORR conducts its on-site monitoring at the state level to ensure the program is able to collect and report accurate data and to ensure that the state is able to provide services to refugees. ORR’s on-site monitoring identifies deficiencies as well as best practices. ORR generally monitors state refugee coordinators onsite once every 3 years, as the state coordinator is responsible for administering and overseeing ORR’s major grants. During the on-site visit, ORR also monitors a sample of subgrantees. 
In monitoring reports from its most recent on-site monitoring in the states we visited, ORR identified a number of deficiencies including: failure to inform refugees that they were eligible for certain services for up to 5 years, failure to ensure that medical assistance was terminated at the end of the 8-month eligibility period, failure to ensure that translators were available when providing services to refugees, and missing documentation in case files. The monitoring reports contained ORR’s recommendations and noted when corrective action was required. ORR’s monitoring reports also identified program strengths and best practices that monitors observed while on site. For example, one ORR monitoring report noted that having a state refugee housing coordinator was a program strength, because this coordinator can locate affordable housing and research funding sources, which saves the caseworkers time and effort. In the same state, ORR found that having an employment specialist at a voluntary agency who can help refugees obtain job upgrades and pursue professional certificates was also a program strength. According to ORR officials, they supplement this on-site monitoring with desk monitoring, which may include reviews of case files, or reviews of information provided in periodic reports. Neither ORR nor PRM has formal mechanisms for collecting and sharing information gleaned during monitoring to improve services, such as solutions to common problems or promising practices. ORR and PRM officials identified some informal mechanisms for sharing such information with service providers, but relied mostly on service providers to network among themselves or share information during quarterly conference calls and annual consultations. ORR also relies on external technical assistance providers to disseminate best practices when training grantees and expects state refugee coordinators to share findings of monitoring reports with their local partners. 
However, monitoring reports are not publicly available, and, unless the state coordinators share this information, service providers may not be able to identify promising practices, track monitoring results, identify trends, and address common issues. As a result, service providers do not always get the information they need to improve services, whether by preventing a problem or implementing a best practice. ORR’s performance measures focus on short-term outcomes, even though refugees remain eligible for social services funded by ORR for up to 5 years. Because it is important for refugees to become employed before their cash assistance runs out—8 months or less, depending on the service delivery model—ORR’s performance measures provide incentives for service providers to focus on helping refugees gain and maintain employment quickly. Specifically, ORR requires grantees to measure entered employment at 6 months for the Matching Grant program or 8 months for statewide cash assistance programs. In addition, ORR requires grantees to measure job retention 90 days after employment. This focus on short-term employment, however, can result in a one-size-fits-all approach to employment services and may, in turn, limit service providers’ flexibility to provide services that may benefit refugees after the 6- to 8-month time frame. That is, with limited incentives to focus on longer-term employment and wages, service providers may not help refugees obtain longer-term services and training, such as on-the-job or vocational training, which could significantly boost their income or benefit the refugee in the long term or after employment is measured. For example, when assisting refugees who arrive with college degrees and professional experience, service providers may not help them earn a credential valid in the United States, because the providers’ effectiveness is measured by whether the refugee is employed.
Additionally, ORR does not allow skills certification training to exceed 1 year and requires the refugees to be employed when receiving training and services. Several service providers mentioned this as a particular challenge for highly skilled Iraqi refugees, some of whom are doctors and engineers. In addition, voluntary agency officials noted that ORR’s employment measures do not allow them to report on the longer-term or non-employment-related outcomes of the other refugee resettlement services they provide. As a result, services such as skills training, English language training, or mental health services—which provide longer-term benefits and benefits unrelated to employment—may not be emphasized. According to some local voluntary agency officials we spoke to, given the current performance measures, there is a disincentive to dedicate necessary time and resources to the nonemployment activities that create pathways to success for refugees. It may be particularly difficult to serve those who do not arrive in the United States ready to work due to trauma, illness, or lack of basic skills. While much of ORR’s grant funding focuses on short-term employment, ORR does have some discretionary grants that provide funding for particular purposes that may include services that focus on longer-term goals or more intensive case management. For example, the individual development account program provides matching funds to help refugees save money for the purchase of a vehicle or a home. For these relatively small competitively awarded discretionary grant programs, ORR gathers data on how much money was saved and what assets were purchased, but does not gather data on how these asset purchases affected earnings or self-sufficiency. Descriptions of discretionary grants that can be used to fund services beyond the initial resettlement period, as well as other selected ORR and PRM grant programs, can be found in appendix IV.
In addition to the employment measures’ focus on short-term outcomes, one state coordinator also noted that these employment measures leave room for interpretation. Specifically, some voluntary agencies may have a narrow definition of employment services while others may have a broader definition. In turn, the percentage of refugees who become employed after receiving employment services could vary based on what types of services are considered employment services. As a result, according to a state coordinator, measures may not provide consistent information about how well a program is performing in different communities. While federal refugee resettlement programs generally provide only short-term assistance, PRM and ORR both aim to prepare refugees for long-term integration into their communities. Although there is no single, generally accepted definition of integration in the literature, integration can be defined as a dynamic, multidirectional process in which newcomers and the receiving communities intentionally work together, based on a shared commitment to acceptance and justice, to create a secure, welcoming, vibrant, and cohesive society. The federal government’s efforts to facilitate integration begin before refugees even enter the United States, as PRM offers cultural orientation for all refugees and recently piloted English language training for refugees in certain overseas locations. According to PRM, this cultural orientation and language training is intended to lay the groundwork for refugees’ long-term integration into the United States. Integration is also a part of ORR’s mission and overall goal, and officials told us that they consider integration to be a central aspect of refugee resettlement. Although ORR only provides refugees with cash and medical assistance for a maximum of 8 months, officials noted that this initial assistance helps set the foundation for long-term integration.
Other ORR programs provide longer-term services that are intended to further facilitate integration, but these services may not be as widely available as cash and medical assistance. For example, ORR’s social services grant program funds employment services and other support services to refugees for up to 5 years after arrival, but communities may choose to provide these services for a shorter period of time due to local resource constraints. ORR’s discretionary grants for micro-enterprise assistance and individual development accounts are also designed to facilitate integration by helping refugees start businesses in the communities where they live, among other goals. However, these discretionary grants are competitively awarded and are thus not available to all communities. ORR has studied approaches that facilitate refugee integration. In 2006, ORR created an integration working group to identify indicators of refugee integration and ways in which ORR could more fully support the integration process. In a 2007 interim report, the working group made both short-term and long-term recommendations to ORR, including that it (1) consider expanding ORR’s discretionary grant programs; (2) focus on integration in the areas of employment, English language acquisition, health, housing, and civic engagement; and (3) identify lessons learned from communities where refugee integration appears to be taking place. ORR officials told us that they have implemented many, but not all, of the working group’s recommendations due to funding constraints. For example, ORR commissioned a study to identify promising practices that appear to facilitate integration in four U.S. cities. Neither PRM nor ORR currently measures refugee integration as a program outcome. According to PRM, it does not measure refugee integration due to the short-term nature of the Reception and Placement Program.
While refugee integration is part of ORR’s mission and overall goal, ORR officials said they have not measured it because there is no clear definition of integration, because it is unclear when integration should be measured, and because the Refugee Act focuses on self-sufficiency outcomes related to employment. Even so, ORR officials told us that they collect some data related to refugee integration. Specifically, as part of its annual report to Congress, ORR conducts a survey to gauge refugees’ economic self-sufficiency that includes integration-related measures such as employment, English language proficiency, participation in job training, attendance in a high school or university degree or certificate program, and home ownership. However, ORR officials noted that the survey is not designed to measure integration and should not be used for this purpose, especially since there is no clear definition of integration. In addition, the survey has had a low response rate, which may affect the quality of the data. Studies on refugee resettlement do not offer a broad assessment of how well refugees have integrated into the United States. Of the 13 studies we identified that addressed refugee integration, almost all were limited in scope in that they focused on particular refugee groups in specific geographic locations. The studies describe the integration experiences of specific refugee groups, including factors that help refugees successfully integrate into their communities. However, because of the studies’ limited scope and differences in their methodologies, they provide limited insight into how refugees overall have integrated in the United States or how the experiences of different groups compare to one another. Although the studies we reviewed were not directly comparable, together they identified a variety of indicators that can be used to assess progress toward integration for both individuals and communities, as well as common facilitators of integration.
Indicators of integration include employment, English language acquisition, housing, physical and mental health, and social connections, as well as political involvement, citizenship status, and participation in community organizations. One study noted that when assessing integration, it is important to ask refugees whether they consider themselves to be integrated. The studies we reviewed also identified a range of barriers to integration. Some frequently cited barriers were a lack of formal education, illiteracy or limited English proficiency, and insufficient income from low-paying jobs. For example, refugees who are illiterate or have limited English proficiency may be limited to low-paying jobs such as hotel housekeepers and may not earn sufficient income to meet their needs. Furthermore, one study found that the timing of employment can be a barrier to integration. Specifically, the study found that taking a job soon after arrival can slow down the acquisition of English language skills because refugees may have less available time to attend language classes. In addition, the studies we reviewed identified facilitators of integration— circumstances and strategies that can help refugees integrate successfully into their communities. English language acquisition is an important facilitator of integration. For example, one study found that refugees who are proficient in English are better able to connect with nonrefugees in their communities, expanding their social connections and sources of support. Other facilitators of integration included employment, social support from other refugees, and affiliation with or sponsorship by a religious congregation. For example, religious congregations may provide refugees with language classes, social activities, emotional and financial support, and linkages with employment and educational opportunities, medical care, and transportation. 
See table 6 for additional examples of indicators of integration, barriers to integration, and facilitators of integration. While most of the communities we visited had not established formal goals or strategies to facilitate refugee integration, two of the eight communities had developed formal plans to promote integration. The City of Boise, for example, developed a plan to facilitate the successful resettlement of refugees that includes goals related to integration. Specifically, the plan aims to facilitate integration by (1) establishing refugee community centers, (2) using a media campaign to increase community awareness and support of refugees, and (3) creating a mentoring program for refugee youth, among other things. Similarly, the Village of Skokie, Illinois, a suburb of Chicago, created a strategic plan to help facilitate the integration of immigrants, including refugees, by (1) establishing a coordinating council of key service providers, (2) developing a system to improve providers’ access to interpreters, and (3) recruiting and training immigrant and refugee community leaders for government commissions and school boards, among other strategies. Additionally, in Lancaster, Pennsylvania, Franklin & Marshall College had taken a variety of steps to help facilitate the integration of refugees, including using student volunteers to teach refugees English, tutor refugee students, and help refugee families enroll their children in school and access public health services. In addition, at the time of our visit, the college was partnering with a local voluntary agency affiliate to plan a community conference on refugee integration with the goals of (1) better understanding and addressing the needs of refugees, (2) identifying strategies for fostering rapid integration, and (3) developing a broad coalition of organizations serving refugees that could continue to work together on these issues in the future. 
Each year, as part of its humanitarian role in the international community, the United States admits tens of thousands of refugees who add richness and diversity to our society but can also have a significant impact on the communities in which they live, particularly in cases where relevant state and local stakeholders are not consulted before refugees are resettled. Advance consultation is important because stakeholders need time to plan so that they can properly serve refugees when they arrive, and because their input on the number of refugees to be resettled can help communities avoid reaching a crisis point. Information about communities that have developed effective strategies for consultation would likely benefit other communities facing similar obstacles. Without more specific guidance and information on effective strategies for consultation, communities may continue to struggle to meet refugees’ needs, which may negatively affect both refugees and their communities and would likely deter integration. Similarly, while ORR has recognized that some service providers have particularly effective strategies for resettlement, neither ORR nor PRM disseminate this information to other service providers. As a result, not all communities are aware of ways they can do their work more effectively. Furthermore, while refugees can receive resettlement services for up to 5 years, some find it difficult to access those services when they relocate to another community. In addition, states do not receive increased funding for serving secondary migrants until the year after refugees relocate. As a result, in communities that experience high levels of secondary migration, voluntary agencies and service providers may not have the resources to provide services to the migrants who need them. 
Without a funding process that would respond more quickly to localities experiencing high rates of secondary migration, voluntary agencies may have to prioritize serving recently arrived refugees and communities may find their resources for refugees stretched too thin. As required by the Immigration and Nationality Act, as amended, ORR’s programs are designed to help refugees become employed as quickly as possible. ORR’s measures of effectiveness, which focus on whether refugees gain employment in the short term, in turn, influence the types of services that refugees receive. Specifically, service providers may choose to provide services that encourage short-term independence from cash assistance, but might not help refugees achieve long-term self-sufficiency. However, refugees may face unique challenges such as a lack of formal education or work experience, language barriers, and physical and mental health conditions that can make the transition to the United States difficult. Without some incentives to focus on long-term self-sufficiency in addition to short-term independence from cash assistance, refugees may be more likely to need government assistance again in the future, and it may take longer for both refugees and their communities to experience the benefits of integration.
We are making the following four recommendations based on our review:

To help ensure that state and local stakeholders have the opportunity to provide input on the number of refugees resettled in their communities, we recommend that the Secretary of State provide additional guidance to resettlement agencies and state coordinators on how to consult with local stakeholders prior to making placement decisions, including with whom to consult and what should be discussed during the consultations; and that the Secretaries of State and of Health and Human Services collect and disseminate best practices related to refugee placement decisions, specifically on working with community stakeholders, as well as other promising practices from communities.

To assist communities in providing services to secondary migrants, we recommend that the Secretary of Health and Human Services consider additional ways to increase the responsiveness of the grants designed for this purpose. This could include asking states to report secondary migration data more often than once a year, allowing resubmission of secondary migration data from states that was rejected because it did not match ORR’s database, creating a process for counting migrants who received services in more than one state, and establishing an emergency grant that could be used to more quickly identify and assist communities that are struggling to serve high levels of secondary migrants.

To give service providers more flexibility to serve refugees with different needs and to create incentives to focus on longer-term goals, including integration, independence from any government services, and career advancement, we recommend that the Secretary of Health and Human Services examine ORR’s performance measures in light of its goals and determine whether changes are needed.

We shared a draft of this report with HHS and State for review and comment.
In its written comments, reproduced in appendix VI, HHS generally concurred with our recommendations. Specifically, HHS stated that it supports our recommendation to disseminate best practices, including promising practices from communities, while noting that State and nonprofit community-based and faith-based organizations have traditionally taken the lead on resettling refugees. HHS highlighted the efforts it has made in conducting quarterly placement meetings, which include resettlement agencies and refugee coordinators. While these meetings may be helpful, we believe that HHS can also implement this recommendation by disseminating best practices and program strengths that it documents through its monitoring of states and service providers. In addition, HHS concurred with our recommendation that it consider additional ways to increase the responsiveness of grants that help communities provide services to secondary migrants, but noted that it already provides Supplemental Services grants, which provide short-term assistance to areas that are impacted by increased numbers of new arrivals or secondary migrants. In addition, it raised concerns that an increase in the frequency of data collection would significantly increase the reporting burden without a mandatory need for the data. HHS also stated that it has a process in place for notifying states of technical problems with population data submitted and allowing them to make corrections. While we recognize that HHS has strategies in place to serve secondary migrants, we continue to believe that (1) the Supplemental Services grants can be improved to be more responsive; (2) more up-to-date population data can help HHS respond more quickly to communities experiencing high levels of secondary migration; and (3) improvements can be made to the process for correcting population data.
HHS also stated that it will consider modifying its performance measures and will also continue to assess the usefulness of data elements collected through required reporting to ensure that the program addresses both self-sufficiency and integration. HHS noted, for example, that it has already begun collecting more information about health through its annual survey of refugees and expanded the number of reporting elements pertaining to health in its program performance reporting form. In addition, it is developing approaches to increase the overall participation rates in its annual survey. In its written comments, reproduced in appendix VII, State generally concurred with our recommendations and outlined steps it will take to address them. HHS and State also provided technical comments that were incorporated, as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to relevant congressional committees, the Secretary of Health and Human Services, the Secretary of State, and other interested parties. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VIII. To identify the factors resettlement agencies consider when deciding where refugees are initially placed, we reviewed relevant federal and state laws and regulations and other relevant documents, and conducted interviews with federal agency officials and national voluntary agency staff. We interviewed officials from the U.S. 
Department of State’s Bureau of Population, Refugees, and Migration (PRM) and the Department of Health and Human Services’ Office of Refugee Resettlement (ORR), as well as representatives from several national voluntary resettlement agencies. We also reviewed documents related to the refugee placement process, such as relevant federal and state laws and regulations, guidance for determining community capacity to resettle refugees, the terms of the cooperative agreements between PRM and national voluntary agencies, and funding opportunity announcements for PRM’s Reception and Placement Program. To understand the effects refugees have on their communities, we met with experts on refugee programs and conducted site visits to eight communities across the United States where we met with representatives from state and local government entities, voluntary agency affiliates, community-based organizations, local businesses, and other relevant individuals and groups, including refugees, professors from local universities, and a local church that provided assistance to refugees. For our site visits, we selected Boise, Idaho; Chicago, Illinois; Detroit, Michigan; Fargo, North Dakota; Knoxville, Tennessee; Lancaster, Pennsylvania; Owensboro, Kentucky; and Seattle, Washington. These eight communities represent a nongeneralizable sample that was selected to include geographically distributed communities with variations in their population sizes, levels of experience resettling refugees, and racial and ethnic diversity. In addition to these factors, several communities were selected because they are considered examples of best practices in refugee resettlement by federal officials. All of the selected communities were receiving refugees at the time we visited. 
We developed site selection criteria based on available literature that discussed factors that influence the impact of refugees on their respective communities and factors that either facilitate or hinder refugee integration. We used these criteria in combination with one another to arrive at a diverse set of communities with varying characteristics. To assess the effectiveness and integrity of refugee resettlement programs, we interviewed federal agency officials, state coordinators, and local voluntary agencies. We also reviewed federal agencies’ monitoring plans, protocols, and selected monitoring reports for the communities we visited. We reviewed the terms of the cooperative agreements between PRM and national voluntary agencies, as well as reporting guidance, sample performance reports, and performance measures federal agencies use to monitor their programs. To determine what is known about refugees’ integration into the United States, we conducted a literature review of academic research on this topic. To identify relevant studies, we conducted searches of various databases including Academic OneFile, EconLit, Education Resources Information Center, National Technical Information Service, PAIS International, PASCAL, ProQuest, PsycINFO, Social Sciences Abstracts, Social Services Abstracts, Social SciSearch, Sociological Abstracts, and WorldCat. We conducted a search using the following criteria, which yielded 18 studies:
Studies must address the integration of refugees into the U.S.;
Studies must have been published from 1995 to the present;
Studies must be in English; and
Studies must be scholarly, such as peer-reviewed journal articles.
We performed these searches and identified studies between August 2011 and October 2011. In addition, ORR officials provided us with an ORR-commissioned study of promising practices that appear to facilitate refugee integration, and this study met our selection criteria.
To assess the methodological quality of the 18 studies that met our selection criteria, we evaluated each study’s research methodology, including whether the study was original research, the reliability of the data set, if applicable, and the study’s findings, assumptions, and limitations. We determined that 13 of the 18 studies were sufficiently reliable for our purposes. We then analyzed the findings of these 13 studies. In addition to conducting a literature review, we met with officials from ORR and PRM to determine what, if any, efforts the federal government has to define, measure, or facilitate refugees’ integration into the United States. We discussed refugee integration in our interviews with state and local entities during our site visits. We also reviewed the ORR integration working group’s 2007 interim report and ORR’s annual reports to Congress. We also obtained secondary migration data from ORR’s annual report. We assessed the reliability of this data by interviewing ORR officials knowledgeable about the data. We determined that the data were sufficiently reliable for the purpose of background in this report. We conducted this performance audit from May 2011 through July 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Abu-Ghazaleh, F. “Immigrant Integration in Rural Communities: The Case of Morgan County.” National Civic Review, vol. 98, no. 1 (2009). Birman, D., and N. Tran. “Psychological Distress and Adjustment of Vietnamese Refugees in the United States: Association with Pre- and Postmigration Factors.” American Journal of Orthopsychiatry, vol. 78, no. 1 (2008). Duchon, D. A. 
“Home Is Where You Make It: Hmong Refugees in Georgia.” Urban Anthropology, vol. 26, no. 1 (1997). Franz, B. “Transplanted or Uprooted? Integration Efforts of Bosnian Refugees Based Upon Gender, Class and Ethnic Differences in New York City and Vienna.” The European Journal of Women’s Studies, vol. 10, no. 2 (2003). Grigoleit, G. “Coming Home? The Integration of Hmong Refugees from Wat Tham Krabok, Thailand, into American Society.” Hmong Studies Journal, vol. 7 (2006). Hume, S.E., and S.W. Hardwick. “African, Russian, and Ukrainian Refugee Resettlement in Portland, Oregon.” The Geographical Review, vol. 95, no. 2 (2005). ISED Solutions. Exploring Refugee Integration: Experiences in Four American Communities. A report prepared at the request of the Department of Health and Human Services Office of Refugee Resettlement. June 2010. Ives, N. “More than a ‘Good Back’: Looking for Integration in Refugee Resettlement.” Refuge, vol. 24, no. 2 (2007). Kenny, P., and K. Lockwood-Kenny. “A Mixed Blessing: Karen Resettlement to the United States.” Journal of Refugee Studies, vol. 24, no. 2 (2011). Patil, C.L., M. McGown, P.D. Nahayo, and C. Hadley. “Forced Migration: Complexities in Food and Health for Refugees Resettled in the United States.” NAPA Bulletin, vol. 34, issue 1 (2010). Shandy, D., and K. Fennelly. “A Comparison of the Integration Experiences of Two African Immigrant Populations in a Rural Community.” Journal of Religion & Spirituality in Social Work, vol. 25, no. 1 (2006). Smith, R.S. “The Case of a City Where 1 in 6 Residents is a Refugee: Ecological Factors and Host Community Adaptation in Successful Resettlement.” American Journal of Community Psychology, vol. 42, no. 3-4 (2008). Westermeyer, J.J. “Refugee Resettlement to the United States: Recommendations for a New Approach.” The Journal of Nervous and Mental Disease, vol. 199, no. 8 (2011). 
Description
Provides financial support to partially cover resettlement services based on a fixed per capita sum per refugee resettled in the United States. Services include arranging for refugees’ placement and providing refugees with basic necessities and core services during their initial resettlement period.
Reimburses states and alternative refugee assistance programs for the cost of cash and medical assistance provided to refugees during the first 8 months after their arrival in this country or grant of asylum. It does not provide reimbursement for refugees deemed eligible for Temporary Assistance for Needy Families, Supplemental Security Income, and Medicaid.
Funds are provided on a matching basis to private, nonprofit organizations to fund an alternative to public cash assistance and to support case management, employment services, maintenance assistance, cash allowance, and social services for new arrivals for 4 to 6 months.
Provides funding for employment and other social services to refugees for 5 years after their date of arrival or grant of asylum.
Provides funding for employment-related and other social services for refugees in counties with large refugee populations and high refugee concentrations.
Provides funds for medical screenings of newly arriving refugees, interpreter services, information and referral, and health education.
Provides funds to states to implement special employment services not implemented with formula social services grants.
Provides funding for employment-related and other social services for refugees in counties with large refugee populations and high refugee concentrations.
Provides funds to subcontract with local school systems and nonprofits to support local school systems that are impacted by significant numbers of newly arrived refugee children.
Provides funds to ensure that older refugees will be linked to mainstream aging services in their communities or to provide services directly to older refugees if they are not currently being provided for in the community.
The Preferred Communities Program supports the resettlement of newly arriving refugees with the best opportunities for their self-sufficiency and integration into new communities, and supports refugees with special needs who require more intensive case management, culturally and linguistically appropriate linkages, and coordination with other service providers to improve their access to services.
Provides funding for a comprehensive program of support for survivors of torture, including rehabilitation, social and legal services, and training for providers.
Funds projects to establish and manage Individual Development Accounts, which are matched savings accounts available for the purchase of specific assets. Matching funds, together with the refugee’s own savings, are available for purchasing one (or more) of four savings goals: home purchase; microenterprise capitalization; postsecondary education or training; and purchase of an automobile if necessary for employment or educational purposes.
Grants to enable organizations with expertise in a particular area to provide assistance to ORR-funded agencies.
Provides funding to assist refugees to become financially independent by helping them develop capital resources and business expertise to start, expand, or strengthen their own businesses. Microenterprise projects typically include components of training and technical assistance in business skills and business management, credit assistance, and credit in the form of micro loans.
Provides agricultural and food-related resources and technical information to refugee families that are consistent with their agrarian backgrounds, resulting in rural and urban farming projects that support increased incomes, access to quality and familiar foods, better physical and mental health, and integration into this society. Provides funds to provide services to newly arriving refugees, or in cases of sudden and unexpected large secondary migration of refugees, where communities are not sufficiently prepared in terms of linguistically or culturally appropriate services and/or do not have sufficient service capacity. Provides funds to support ethnic community-based organizations in providing refugee populations with critical services to assist them in becoming integrated members of American society. For the purposes of this table, states refers to state agencies, state alternative programs, and state replacement designees. State alternative programs include (1) the Wilson/Fish program, which gives states flexibility in how they provide assistance to refugees, including whether to administer assistance primarily through local voluntary agencies, and (2) the Public Private Partnership program, which allows states to partner with local voluntary agencies to provide assistance. State replacement designees are authorized by ORR to administer assistance to refugees when a state withdraws from all or part of the refugee program. For the purposes of this table, refugees refers to refugees, certain Amerasians from Viet Nam, Cuban and Haitian entrants, asylees, victims of a severe form of trafficking, and Iraqi and Afghan Special Immigrants. In January 2007, ORR’s Integration Working Group made short-term and long-term recommendations regarding ways in which ORR could more fully support the integration process for refugees. Include integration language in all grant announcements. 
Review discretionary grant programs offered in the standing announcement, ensuring that they promote integration. Establish the Department of Health and Human Services as the lead federal agency for integration. Consider expanding ORR’s discretionary programs. Focus on integration in the areas of employment, English language acquisition, health, housing, and civic engagement. Focus technical assistance providers to support integration as an intentional process leading to civic engagement and citizenship. Seek and fund pilot programs such as the Building the New American Community project. Develop an initiative to support professional recertification and credentialing for qualified individuals. Identify and share best practices through a survey of states, mutual aid associations, and voluntary agencies. Identify lessons learned, including case studies, from communities in which integration appears to be working well and where there are challenges. Study the effect of ORR policy and funding initiatives to promote integration over a three to five year period. Refine/develop/disseminate an action model to be used for other immigrants and marginalized populations. Seek broader collaboration with nonfederal entities such as private foundations, businesses, financial institutions, and the United Way. In addition to the contact named above, Kathryn Larin, Assistant Director; Cheri Harrington and Lara Laufer, Analysts-in-Charge; James Bennett; David Chrisinger; Caitlin Croake; Bonnie Doty; Ashley McCall; Jean McSween; James Rebbe; and Carla Rojas made key contributions to this report. Sharon Hermes, Margaret Weber, and Amber Yancey Carroll verified our findings. | In fiscal year 2011, the United States admitted more than 56,000 refugees under its refugee resettlement program. Upon entry, a network of private, nonprofit voluntary agencies (voluntary agencies) selects the communities where refugees will live. 
The Department of State's PRM and the Department of Health and Human Services' ORR provide funding to help refugees settle in their communities and obtain employment and monitor implementation of the program. Congress has begun to reexamine the refugee resettlement program, and GAO was asked to examine (1) the factors resettlement agencies consider when determining where refugees are initially placed; (2) the effects refugees have on their communities; (3) how federal agencies ensure program effectiveness and integrity; and (4) what is known about the integration of refugees. GAO reviewed agency guidance, monitoring protocols, reports, and studies; conducted a literature review; reviewed and analyzed relevant federal and state laws and regulations; and met with federal and state officials, voluntary agency staff, and local stakeholders in eight selected communities. Voluntary agencies consider various factors when determining where refugees will be placed, but few agencies we visited consulted relevant local stakeholders, which posed challenges for service providers. When deciding how many refugees to place in each community, some voluntary agencies prioritize local agency capacity, such as staffing levels, while others emphasize community capacity, such as housing availability. Although the Immigration and Nationality Act states that it is the intent of Congress for voluntary agencies to work closely with state and local stakeholders when making these decisions, the Department of State's Bureau of Population, Refugees, and Migration (PRM) offers limited guidance on how this should occur. Some communities GAO visited had developed formal processes for obtaining stakeholder input after receiving an overwhelming number of refugees, but most had not, which made it difficult for health care providers and school systems to prepare for and properly serve refugees. 
State and local stakeholders reported that refugees bring cultural diversity and stimulate economic development, but serving refugees can stretch local resources, including safety net services. In addition, refugee students can negatively affect performance outcomes for school districts because they often have limited English proficiency. Furthermore, some refugees choose to relocate after their initial placement, and this secondary migration may stretch communities that do not have adequate resources to serve them. In fact, capacity challenges have led some communities to request restrictions or temporary moratoriums on resettlement. PRM and the Department of Health and Human Services' Office of Refugee Resettlement (ORR) monitor their refugee assistance programs, but weaknesses in performance measurement may hinder effectiveness. Although refugees are eligible for ORR services for up to 5 years, the outcome data that ORR collects focuses on shorter-term employment outcomes. ORR officials said that their performance measurement reflects the goals outlined by the Immigration and Nationality Act--to help refugees achieve economic self-sufficiency as quickly as possible. However, the focus on rapid employment makes it difficult to provide services that may increase refugees' incomes, such as helping them obtain credentials to practice their professions in the United States. Little is known about the extent of refugee integration into U.S. communities, but research offers a framework for measuring and facilitating integration. PRM and ORR both promote refugee integration, but neither agency currently measures integration as a program outcome. While integration is part of ORR's mission, ORR officials said one of the reasons they have not measured it is that there is no clear definition of integration. In addition, research on refugee resettlement does not offer an overall assessment of how well refugees have integrated into the United States. 
Most of the 13 studies GAO reviewed were limited in scope and focused on particular refugee groups in specific geographic locations. However, these studies identified a variety of indicators that can be used to assess integration as well as factors that can facilitate integration, such as English language acquisition, employment, and social support from other refugees. Despite limited national information, some U.S. communities have developed formal plans for refugee integration. GAO makes several recommendations to the Secretaries of State and Health and Human Services to improve refugee assistance programs in the United States. HHS and State generally concurred with the recommendations and each identified efforts they have under way or plan to undertake to address them. |
In the United States, commercial motor carriers account for less than 5 percent of all highway crashes, but these crashes result in about 13 percent of all highway deaths, or about 5,500 of the approximately 43,000 nationwide highway fatalities that occur annually. In addition, about 160,000 of the approximately 3.2 million highway injuries per year involve motor carriers. While the fatality rate for trucks has generally decreased over the past 30 years, it has been fairly stable since 2002. (See fig. 1.) The fatality rate for buses decreased slightly from 1975 to 2005, but it shows more annual variability than the fatality rate for trucks because buses account for far fewer total vehicle miles traveled. FMCSA’s primary mission is to reduce the number and severity of crashes involving large trucks and buses. FMCSA relies heavily on the results of compliance reviews to determine whether carriers are operating safely and, if not, to take enforcement action against them. FMCSA conducts these on-site reviews to determine carriers’ compliance with safety regulations that address areas such as alcohol and drug testing of drivers, driver qualifications, driver hours of service, vehicle maintenance and inspections, and transportation of hazardous materials. FMCSA uses a data-driven analysis model called SafeStat to assess carriers’ risks relative to all other carriers based on safety indicators, such as their crash rates and safety violations identified during roadside inspections and prior compliance reviews. A carrier’s score is calculated based on its performance in four safety evaluation areas: accidents and driver, vehicle, and safety management violations. (See fig. 2.) SafeStat identifies many carriers that pose a high risk for crashes and is about twice as effective (83 percent more effective) as randomly selecting carriers for compliance reviews. As a result, it has value for improving motor carrier safety. 
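As a rough illustration of how a SafeStat-style model combines the four safety evaluation areas into a single prioritization score, the sketch below uses a weighted sum. The weights, the 0-100 area values, and the function itself are hypothetical placeholders for illustration, not FMCSA's actual SafeStat formula.

```python
# Illustrative sketch of a SafeStat-style composite score. The weights and
# example values below are hypothetical, not FMCSA's actual parameters.

def composite_score(accident, driver, vehicle, safety_mgmt,
                    weights=(2.0, 1.5, 1.0, 1.0)):
    """Combine the four safety evaluation area values (each 0-100,
    higher = worse) into a single prioritization score."""
    areas = (accident, driver, vehicle, safety_mgmt)
    return sum(w * a for w, a in zip(weights, areas))

# A carrier deficient in three evaluation areas outranks one deficient
# in a single area, so it would be prioritized for a compliance review.
risky = composite_score(accident=80, driver=85, vehicle=90, safety_mgmt=10)
mild = composite_score(accident=10, driver=20, vehicle=95, safety_mgmt=5)
```

Under this kind of fixed-weight scheme, the relative emphasis on accidents versus compliance violations is set once by expert judgment, which is the design choice the regression enhancement discussed below revisits.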
However, two enhancements that we analyzed could lead to FMCSA identifying carriers that pose greater crash risks overall. These approaches entail giving more weight to crashes than the current SafeStat model does. FMCSA has concerns about these approaches, in part, because placing more emphasis on accidents would require it to place less emphasis on other types of problems. FMCSA recognizes that SafeStat can be improved, and as part of its Comprehensive Safety Analysis 2010 reform initiative—which is aimed at improving its processes for identifying and dealing with unsafe carriers and drivers—the agency is considering replacing SafeStat by 2010. In June 2007, we reported that FMCSA could improve SafeStat’s ability to identify carriers that pose high crash risks if it applied a statistical approach, called the negative binomial regression model, to the four SafeStat safety evaluation areas instead of its current approach. We used this approach to determine whether systematic analyses of data through regression modeling offered improved results in identifying carriers that pose high crash risks over FMCSA’s model, which uses expert judgment and professional experience to apply weights to each of the safety evaluation areas. The negative binomial model results in a rank order listing of carriers by crash risk and the predicted number of crashes. This differs from SafeStat’s current approach, which gives the highest priority to carriers that are deficient in three or more safety evaluation areas or that score over a certain amount—SafeStat categories A and B. (See table 1.) The other enhancement that we analyzed—the results of which are preliminary—utilized the existing SafeStat overall design but examined the effect of providing greater priority to carriers that scored among the worst 5 percent of carriers in the accident safety evaluation area (SafeStat category D). 
We chose this approach because we found that while the driver, vehicle, and safety management evaluation areas are correlated with the future crash risk of a carrier, the accident evaluation area correlates most with future crash risk. This approach would retain the overall SafeStat framework and categorization—categories A through G for carriers with safety problems—but would substitute carriers in category D (the accident category) for carriers in categories A and B that have either (1) lower overall SafeStat scores or (2) lower accident area scores. We compared the performance of our regression model approach and placing greater weight on carriers that scored among the worst 5 percent of carriers in SafeStat category D to the current SafeStat model. The comparison showed that both these approaches performed better than the current SafeStat approach. (See table 2.) For example, the regression model approach identified carriers with an average of 111 crashes per 1,000 vehicles over an 18-month period compared with the current SafeStat approach, which identified carriers for compliance reviews with an average of 102 crashes per 1,000 vehicles. This 9 percent improvement would have enabled FMCSA to identify carriers with almost twice as many crashes in the following 18 months as those carriers identified in its current approach (19,580 v. 10,076). Placing greater emphasis on carriers in category D provided superior results to the current SafeStat approach both in terms of identifying carriers with higher crash rates (from 6 to 9 percent higher) and greater numbers of crashes (from about 600 to 800 more). In addition, the regression approach performed at least as well as placing greater emphasis on carriers in category D in terms of identifying carriers with the highest crash rates and much better in identifying carriers with the greatest number of crashes. 
Because both the approaches that we analyzed would identify a larger number of carriers that pose high crash risks, FMCSA would choose the number of carriers to review based on the resources available to it, much as it currently does. We believe that our statistically based regression model is preferable to placing greater weight on carriers in category D because it provides for a systematic assessment of the relative contributions of accidents and driver, vehicle, and safety management violations. We recommended that FMCSA adopt such an approach. By its very nature the regression approach looks for the “best fit” in identifying the degree to which prior accidents and driver, vehicle, and safety management violations identify the likelihood of carriers having crashes in the future, compared to the current SafeStat approach, in which the relationship among the four evaluation areas is based on expert judgment. In addition, because the regression model could be run monthly—as is the current SafeStat model—any change in the degree to which accidents and driver, vehicle, and safety management violations better identify future crashes will be automatically considered as different weights to the four evaluation areas are assigned. This is not the case with the current SafeStat model, in which the evaluation area weights generally remain constant over time. FMCSA agreed that use of a negative binomial regression model looks promising but officials said that the agency believes that placing more emphasis on the accident area would be counterproductive. First, FMCSA is concerned that this would require placing correspondingly less emphasis on the types of problems the compliance review is designed to address so that crashes can be reduced (i.e., the lack of compliance with safety regulations related to drivers, vehicles, and safety management that is captured in the other evaluation areas). 
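A minimal sketch of how a fitted negative binomial model produces the rank-order listing described above: each carrier's predicted crash count is the model mean, exp(intercept + sum of coefficient times evaluation-area value). The coefficients and carrier scores here are invented placeholders; GAO's actual fitted values are not given in this statement.

```python
# Illustrative ranking of carriers by the predicted crash count of a
# negative binomial model. Coefficients and scores are made-up examples.
import math

coefficients = {"accident": 0.020, "driver": 0.008,
                "vehicle": 0.004, "safety_mgmt": 0.003}
intercept = -1.0

def predicted_crashes(scores):
    """scores: {area: 0-100 evaluation-area value, higher = worse}.
    Returns the negative binomial mean, exp(linear predictor)."""
    linear = intercept + sum(coefficients[a] * s for a, s in scores.items())
    return math.exp(linear)

carriers = {
    "carrier_1": {"accident": 90, "driver": 20, "vehicle": 30, "safety_mgmt": 10},
    "carrier_2": {"accident": 20, "driver": 90, "vehicle": 90, "safety_mgmt": 90},
}
ranked = sorted(carriers, key=lambda c: predicted_crashes(carriers[c]),
                reverse=True)
print(ranked)  # carrier_1 first: the accident area weighs most heavily
```

Because the coefficients are re-estimated each time the model is run, any shift in how strongly each area predicts future crashes is picked up automatically, unlike fixed expert-judgment weights.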
Along this line, FMCSA said that compliance reviews of carriers in SafeStat category D have historically resulted in fewer serious violations than compliance reviews of carriers in SafeStat category A or B. We agree with FMCSA that the use of the approaches that we are discussing here today could tilt enforcement heavily toward carriers with high crash rates and away from carriers with compliance issues. We disagree, however, that this would be counterproductive. We found that while driver, vehicle, and safety management evaluation area scores are correlated with the future crash risk of a carrier, high crash rates are a stronger predictor of future crashes than poor compliance with safety regulations. FMCSA’s mission—as well as the ultimate purpose of compliance reviews—is to reduce the number and severity of truck and bus crashes. Second, FMCSA officials said that placing more emphasis on the accident evaluation area would increase emphasis on the least reliable type of data used by SafeStat—crash data—and in so doing, it would increase the sensitivity of the results to crash data quality issues. However, in June 2007 we reported that FMCSA has made considerable efforts to improve the reliability of crash data. The report also concluded that as FMCSA continues its efforts to have states improve crash data, any sensitivity of results from our statistically based model to crash data quality issues should diminish. As part of its Comprehensive Safety Analysis 2010, a reform initiative aimed at improving its processes for identifying and dealing with unsafe carriers and drivers, FMCSA is considering replacing SafeStat with a new tool by 2010. The new tool could take on greater importance in FMCSA’s safety oversight framework because the agency is considering using the tool’s assessments of carriers’ safety to determine whether carriers are fit to continue operating. 
In contrast, SafeStat is primarily used now to prioritize carriers for compliance reviews, and determinations of operational fitness are made only after compliance reviews are completed. FMCSA also plans to develop a tool to assess the safety status of individual drivers, along with tools for dealing with unsafe drivers. Even though FMCSA is considering replacing SafeStat, we believe that implementing either of the approaches discussed in this statement would be worthwhile because it would be relatively easy to do and result in immediate safety benefits that could save lives. Our preliminary assessment is that FMCSA manages its compliance reviews in a way that meets our standards for internal control, thereby promoting thoroughness and consistency in the reviews. It does so by establishing compliance review policies and procedures through an electronic manual and training, using an information system to document the results of its compliance reviews, and monitoring performance. We also found that compliance reviews cover most of the major areas of the agency’s safety regulations. FMCSA’s communication of its policies and procedures related to conducting compliance reviews meets our standards for internal control. These standards state that an organization’s policies and procedures should be recorded and communicated to management and others within the entity who need it and in a form (that is, for example, clearly written and provided as a paper or electronic manual) and within a time frame that enables them to carry out their responsibilities. FMCSA records and communicates its policies and procedures electronically through its Field Operations Training Manual, which it provides to all federal and state investigators and their managers. 
The manual includes guidance on how to prepare for a compliance review (for example, by reviewing information on the carrier’s accidents, drivers, and inspections), and it explains how this information can help the investigator focus the compliance review. It also specifies the minimum number of driver and vehicle maintenance records to be examined and the minimum number of vehicle inspections to be conducted during a compliance review. FMCSA posts updates to the manual that automatically download to investigators and managers when they connect to the Internet. In addition to the manual, FMCSA provides classroom training to investigators and requires that investigators successfully complete that training and examinations before they conduct a compliance review. According to FMCSA officials, investigators then receive on-the-job training, in which they accompany an experienced investigator during compliance reviews. Investigators can also take additional classroom training on specialized topics throughout their careers. FMCSA’s documentation of compliance reviews meets our standards for internal control. These standards state that all transactions and other significant events should be clearly and promptly documented, and the documentation should be readily available for examination. FMCSA and state investigators use an information system to document the results of their compliance reviews, including information on crashes and any violations of the safety regulations that they identify. This documentation is readily available to FMCSA managers, who told us that they review it to help ensure completeness and accuracy. FMCSA officials told us that the information system also helps ensure thoroughness and consistency by prompting investigators to follow FMCSA’s policies and procedures, such as requirements to meet a minimum sample size. 
The information system also includes checks for consistency and reasonableness and prompts investigators when the information they enter appears to be inaccurate. FMCSA said managers may assess an investigator’s thoroughness by comparing the rate of violations the investigator identified over the course of several compliance reviews to the average rate for investigators in their division office; a rate that is substantially below the average suggests insufficient thoroughness. FMCSA’s performance measurement and monitoring of its compliance review activities meet our standards for internal control. These standards state that managers should compare actual performance to planned or expected results and analyze significant differences. According to FMCSA and state managers and investigators, the managers review all compliance reviews in each division office and state to ensure thoroughness and consistency across investigators and across compliance reviews. The investigators we spoke with generally found these reviews to be helpful, and several investigators said that the reviews helped them learn policies and procedures and ultimately perform better compliance reviews. In addition to assessing the performance of individual investigators, FMCSA periodically assesses the performance of FMCSA division offices and state agencies and conducted an agencywide review of its compliance review program in 2002. According to officials at one of FMCSA’s service centers, the service centers lead triennial reviews of the compliance review and enforcement activities of each division office and its state partner. These reviews assess whether the division offices and state partners are following FMCSA policies and procedures, and they include an assessment of performance data for items such as the number of compliance reviews conducted, rate of violations identified, and number of enforcement actions taken. 
The officials said that some reviews identify instances in which division offices have deviated from FMCSA’s compliance review policies but that only minor adjustments by the division offices are needed. The officials also said that the service centers compile best practices identified during the reviews and share these among the division offices and state partners. FMCSA’s 2002 agencywide review also concluded that most investigators were not following FMCSA’s policy requiring them to perform vehicle inspections as part of a compliance review if the carrier had not already received the required number of roadside vehicle inspections. Since conducting its 2002 review, FMCSA changed its policy so that inspecting a minimum number of vehicles is no longer a strict requirement—if an investigator is unable to inspect the minimum number of vehicles, he or she must explain why in the compliance review report. From fiscal year 2001 through fiscal year 2006, each of the nine major applicable areas of the safety regulations was consistently covered by most of the approximately 76,000 compliance reviews conducted by FMCSA and the states. (See table 3.) For the most part, 95 percent or more of the compliance reviews covered each major applicable area in the agency’s safety regulations. An FMCSA official told us that not every compliance review is required to cover these nine areas. For example, follow-up compliance reviews of carriers rated unsatisfactory or conditional are sometimes streamlined to cover only the one or a few areas of the regulations in which the carrier had violations. As another example, minimum insurance coverage regulations apply only to for-hire carriers and private carriers of hazardous materials; they do not apply to private passenger and nonhazardous materials carriers. 
However, according to an FMCSA official, the area of these regulations that had the lowest rate of coverage—vehicle parts and accessories necessary for safe operation—is required for all compliance reviews except streamlined reviews. Vehicle inspections are supposed to be a key investigative technique for assessing compliance with this area, and an FMCSA official said that the lower rate of coverage for the parts and accessories area likely reflects the small number of vehicle inspections that FMCSA and the states conduct during compliance reviews. Our preliminary assessment is that FMCSA placed many carriers rated unsatisfactory in fiscal year 2005 out of service and followed up with nearly all of the rest to determine whether they had improved. In addition, FMCSA monitors carriers to identify those that are violating out-of-service orders. However, it does not take additional action against many violators of out-of-service orders that it identifies. Furthermore, FMCSA does not assess maximum fines against all carriers, as we believe the law requires, partly because FMCSA does not distinguish between carriers with a pattern of serious safety violations and those that repeat a serious violation. FMCSA followed up with at least 1,189 of 1,196 carriers (99 percent) that received a proposed safety rating of unsatisfactory following compliance reviews completed in fiscal year 2005. These follow-ups resulted in either upgraded safety ratings or the carriers being placed out of service. Specifically, Based on follow-up compliance reviews, FMCSA upgraded the final safety ratings of 658 carriers (325 to satisfactory and 333 to conditional). FMCSA assigned a final rating of unsatisfactory to 309 carriers. FMCSA issued out-of-service orders to 306 of these carriers. 
An FMCSA official told us that the agency did not issue out-of-service orders to the remaining three carriers either because the agency could not locate them or because the carrier was still subject to an out-of-service order that FMCSA issued several years prior to the 2005 compliance review. After FMCSA reviewed evidence of corrective action submitted by carriers, it upgraded the final safety ratings of 214 carriers (23 to satisfactory and 191 to conditional). Due to an error in assigning the proposed safety rating to one carrier, FMCSA upgraded its final safety rating to conditional. For the remaining 14 carriers, FMCSA did not (1) provide us information on whether and how it followed up with 7 carriers in time for us to incorporate it in this statement and (2) respond to our request to clarify its follow-up approach for another 7 carriers in time for us to incorporate it in this statement. Under its policies, when a carrier’s proposed unsatisfactory rating is not upgraded, FMCSA is generally required to assign the carrier a final rating of unsatisfactory and to issue it an out-of-service order after either 45 or 60 days, depending on the nature of the carrier’s business. Of the about 300 out-of-service orders that FMCSA issued to carriers rated unsatisfactory following compliance reviews conducted in fiscal year 2005, FMCSA told us that 89 percent were issued on time, 9 percent were issued between 1 and 10 days late, and 2 percent were issued more than 10 days late. We are working with FMCSA to verify these numbers. An FMCSA official told us that in the few instances where an out-of-service order was issued more than 1 week late, the primary reason for the delay was that the responsible FMCSA division office had difficulty scheduling follow-up compliance reviews and thus held off on issuing the orders. FMCSA uses two primary means to try to ensure that carriers that have been placed out of service do not continue to operate. 
First, FMCSA partners with states to help them suspend, revoke, or deny vehicle registration to carriers that have been placed out of service. FMCSA refers to these partnerships as the Performance and Registration Information Systems Management program (PRISM). PRISM links FMCSA databases with state motor vehicle registration systems and roadside inspection personnel to help identify vehicles operated by carriers that have been issued out-of-service orders. As of January 2007, 45 states had been awarded PRISM grants and 27 states were operating with PRISM capabilities. Second, FMCSA monitors carriers for indicators—such as roadside inspections, moving violations, and crashes—that they may be violating an out-of-service order and visits some of the suspect carriers to examine their records to determine whether they did indeed violate the order. FMCSA told us it is difficult to detect carriers operating in violation of out-of-service orders because its resources do not allow it to visit each carrier or conduct roadside inspections on all vehicles, and we agree. In fiscal years 2005 and 2006, 768 of 1,996 carriers (38 percent) that were subject to an out-of-service order had a roadside inspection or crash; FMCSA cited only 26 of these 768 carriers for violating an out-of-service order. An FMCSA official told us that some of these carriers, such as carriers that were operating intrastate or that had leased their vehicles to other carriers, may not have been violating the out-of-service order. He said that FMCSA did not have enough resources to determine whether each of the carriers was violating an out-of-service order. From August 2006 through February 2007, FMCSA data indicate that the agency performed compliance reviews on 1,136 of the 2,220 (51 percent) carriers that were covered by its mandatory compliance review policy. 
The Safe, Accountable, Flexible, Efficient Transportation Equity Act: A Legacy for Users requires that FMCSA conduct compliance reviews on carriers rated as SafeStat category A or B for 2 consecutive months. In response to this requirement, FMCSA implemented a policy in June 2006 requiring a compliance review within 6 months for any such carrier unless the carrier had received a compliance review within the previous 12 months. An FMCSA official told us that the agency did not have enough resources to conduct compliance reviews on all of the 2,220 carriers within 6 months. In April 2007, FMCSA revised the policy because it believes that it required compliance reviews for some carriers that did not need them, leaving FMCSA with insufficient resources to conduct compliance reviews on other carriers that did need them. Specifically, FMCSA believes that carriers that had already had a compliance review were targeted unnecessarily after they had corrected identified violations, but these violations continued to adversely affect their SafeStat rating because SafeStat penalizes carriers for violations regardless of whether they have been corrected. The new policy requires compliance reviews within 6 months for carriers that have been in SafeStat category A or B for 2 consecutive months and received their last compliance review 2 or more years ago (or have never received a compliance review) and offers some discretion to FMCSA division offices. For example, division offices can decide not to conduct a compliance review if the carrier’s SafeStat score is based largely on violations that have been corrected or on accidents that occurred prior to the carrier’s last compliance review. We believe that these changes are consistent with the act’s requirement and give FMCSA appropriate discretion in allocating its compliance review resources. FMCSA does not assess the maximum fines against all carriers as we believe the law requires. 
The law requires FMCSA to assess the maximum allowable fine for each serious violation by a carrier that is found (1) to have committed a pattern of such violations (the pattern requirement) or (2) to have previously committed the same or a related serious violation (the repeat requirement). However, FMCSA’s policy on maximum fines does not fully meet these requirements. FMCSA enforces both requirements using what is known as the “three-strikes rule,” applying the maximum allowable fine when it finds that a motor carrier has violated the same regulation three times within a 6-year period. FMCSA officials said they interpret both parts of the act’s requirements to refer to repeat violations, and because they believe that having two distinct policies on repeat violations would confuse motor carriers, FMCSA has chosen to address both requirements with its single three-strikes policy. In our view, this interpretation does not carry out the statutory mandate to impose maximum fines in two different cases. In contrast to FMCSA, we read the statute’s use of the distinct terms “a pattern of violations” and “previously committed the same or a related violation” as requiring FMCSA to implement two distinct policies. A basic principle of statutory interpretation is that distinct terms should be read as having distinct meanings. In this case, the statute not only uses different language to refer to the violations for which maximum fines must be imposed but also sets them out separately and makes either type of violation subject to the maximum penalties. Therefore, whether a carrier commits a variety of serious violations that form a pattern or commits the same or a substantially similar serious violation as a previous one, the language on its face requires FMCSA to assess the maximum allowable fine—in both situations, patterns of violations as well as repeat offenses. FMCSA could define a pattern of serious violations in numerous ways that are consistent with the act’s pattern requirement. 
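To make the contrast between the two statutory triggers concrete, the three readings discussed above can be sketched in code. This is purely illustrative: the data model, the sample regulation numbers, and the particular pattern definition (one of many GAO considered) are hypothetical and are not FMCSA's actual enforcement systems.

```python
from collections import Counter

# Illustrative sketch only -- hypothetical data model, not FMCSA's systems.
# Prior violations are (regulation, year) pairs; a compliance review is a
# list of (regulatory_area, regulation) pairs.

def is_third_strike(prior, regulation, year, window=6):
    """FMCSA's current three-strikes reading: the maximum fine applies only
    when the current violation is the third of the same regulation within a
    6-year window (i.e., at least two qualifying priors)."""
    priors_in_window = [y for reg, y in prior
                        if reg == regulation and year - y < window]
    return len(priors_in_window) >= 2

def is_repeat_violation(prior, regulation):
    """GAO's reading of the repeat requirement: the maximum fine applies to
    the *second* violation -- a single prior of the same (or, per the
    statute, a related) regulation suffices."""
    return any(reg == regulation for reg, _year in prior)

def is_pattern(review, min_areas=4, min_per_area=2):
    """One possible pattern definition: two or more serious violations in
    each of at least four regulatory areas in a single compliance review."""
    per_area = Counter(area for area, _reg in review)
    return sum(1 for n in per_area.values() if n >= min_per_area) >= min_areas
```

Under these definitions, a carrier with one prior violation of a given regulation would already face the maximum fine under the repeat reading but not under the three-strikes rule, which is exactly the gap the testimony describes.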
Our assessment of eight potential definitions shows that the number of carriers that would be subject to maximum fines depends greatly on the definition. (See table 4.) For example, a definition calling for two or more serious violations in each of at least four different regulatory areas during a compliance review would have made 38 carriers subject to maximum fines in fiscal year 2006. In contrast, a definition calling for one or more serious violations in each of at least three different regulatory areas would have made 1,529 carriers subject to maximum fines during that time. We also interpret the statutory language for the repeat requirement as calling for a “two-strikes” rule as opposed to FMCSA’s three-strikes interpretation. FMCSA’s interpretation imposes the maximum fine only after a carrier has twice previously committed such violations. The language of the statute does not allow FMCSA’s interpretation; rather, it requires FMCSA to assess the maximum allowable fine for each serious violation against a carrier that has previously committed the same serious violation. In fiscal years 2004 through 2006, more than four times as many carriers had a serious violation that constituted a second strike as had a third strike. (See table 5.) For example, in fiscal year 2006, 1,320 carriers had a serious violation that constituted a second strike, whereas 280 carriers had a third strike. Carriers that commit a pattern of violations may also commit a second-strike violation. For example, three of the seven carriers that had two or more serious violations in each of at least five different regulatory areas also had a second strike in fiscal year 2006. Were FMCSA to make policy changes along the lines discussed here, we believe that the new policies should address how to deal with carriers whose serious violations both form part of a pattern and repeat the same or similar previous violations. Mr. Chairman, this concludes my prepared statement. 
I would be pleased to respond to any questions that you or other Members of the Subcommittee might have. For further information on this statement, please contact Susan Fleming at (202) 512-2834 or [email protected]. Individuals making key contributions to this testimony were David Goldstein, Eric Hudson, and James Ratzenberger. Motor Carrier Safety: A Statistical Approach Will Better Identify Commercial Carriers That Pose High Crash Risks Than Does the Current Federal Approach. GAO-07-585. Washington, D.C.: June 11, 2007. Unified Motor Carrier Fee System: Progress Made but Challenges to Implementing New System Remain. GAO-07-771R. Washington, D.C.: May 25, 2007. Consumer Protection: Some Improvements in Federal Oversight of Household Goods Moving Industry Since 2001, but More Action Needed to Better Protect Individual Consumers. GAO-07-586. Washington, D.C.: May 16, 2007. Transportation Security: DHS Efforts to Eliminate Redundant Background Check Investigations. GAO-07-756. Washington, D.C.: April 26, 2007. Truck Safety: Share the Road Safely Pilot Initiative Showed Promise, but the Program’s Future Success Is Uncertain. GAO-06-916. Washington, D.C.: September 8, 2006. Federal Motor Carrier Safety Administration: Education and Outreach Programs Target Safety and Consumer Issues, but Gaps in Planning and Evaluation Remain. GAO-06-103. Washington, D.C.: December 19, 2005. Large Truck Safety: Federal Enforcement Efforts Have Been Stronger Since 2000, but Oversight of State Grants Needs Improvement. GAO-06-156. Washington, D.C.: December 15, 2005. Highway Safety: Further Opportunities Exist to Improve Data on Crashes Involving Commercial Motor Vehicles. GAO-06-102. Washington, D.C.: November 18, 2005. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. 
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

The Federal Motor Carrier Safety Administration (FMCSA) has the primary federal responsibility for reducing crashes involving large trucks and buses. FMCSA uses its "SafeStat" tool to select carriers for reviews for compliance with its safety regulations based on the carriers' crash rates and prior safety violations. FMCSA then conducts these compliance reviews and can place carriers out of service if they are found to be operating unsafely. This statement is based on a recent report (GAO-07-585) and other nearly completed work. GAO assessed (1) the extent to which FMCSA identifies carriers that subsequently have high crash rates, (2) how FMCSA ensures that its compliance reviews are conducted thoroughly and consistently, and (3) the extent to which FMCSA follows up with carriers with serious safety violations. GAO's work was based on a review of laws, program guidance, and analyses of data from 2004 through early 2006. FMCSA generally does a good job in identifying carriers that pose high crash risks for subsequent compliance reviews, ensuring the thoroughness and consistency of those reviews, and following up with high-risk carriers. SafeStat is nearly twice as effective as random selection (83 percent more effective) in identifying carriers that pose high crash risks. However, its effectiveness could be improved by using a statistical approach (negative binomial regression) that systematically derives the weights applied to the four SafeStat safety evaluation areas (accidents and driver, vehicle, and safety management violations), rather than relying on expert judgment as FMCSA's current approach does. The regression approach identified carriers that had twice as many crashes in the subsequent 18 months as did the carriers identified by the current SafeStat approach. 
FMCSA is concerned that adopting this approach would result in it placing more emphasis on crashes and less emphasis on compliance with its safety management, vehicle, and driver regulations. GAO believes that because (1) the ultimate purpose of compliance reviews is to reduce the number and severity of truck and bus crashes and (2) GAO's and others' research has shown that crash rates are stronger predictors of future crashes than is poor compliance with FMCSA's safety regulations, the regression approach would improve safety. GAO's preliminary assessment is that FMCSA promotes thoroughness and consistency in its compliance reviews through its management processes, which meet GAO's standards for internal controls. For example, FMCSA uses an electronic manual to record and communicate its compliance review policies and procedures and teaches proper compliance review procedures through both classroom and on-the-job training. Furthermore, investigators use an information system to document their compliance reviews, and managers review these data, helping to ensure thoroughness and consistency between investigators. For the most part, FMCSA and state investigators cover the nine major applicable areas of the safety regulations (e.g., driver qualifications and vehicle condition) in 95 percent or more of compliance reviews, demonstrating thoroughness and consistency. GAO's preliminary assessment is that FMCSA follows up with almost all carriers with serious safety violations, but it does not assess the maximum fines against all serious violators that GAO believes the law requires. FMCSA followed up with at least 1,189 of 1,196 carriers (99 percent) that received proposed unsatisfactory safety ratings from compliance reviews completed in fiscal year 2005. For example, FMCSA found that 873 of these carriers made safety improvements and it placed 306 other carriers out of service. 
GAO also found that FMCSA (1) assesses maximum fines against carriers for the third instance of a violation, whereas GAO reads the statute as requiring FMCSA to do so for the second violation, and (2) does not always assess maximum fines against carriers with a pattern of varied serious violations, as GAO believes the law requires.
The National Flood Insurance Act of 1968 established NFIP as an alternative to providing direct assistance after floods. NFIP, which provides government-guaranteed flood insurance to homeowners and businesses, was intended to reduce the federal government’s escalating costs for repairing flood damage after disasters. FEMA, which is within the Department of Homeland Security (DHS), is responsible for the oversight and management of NFIP. Since NFIP’s inception, Congress has enacted several pieces of legislation to strengthen the program. The Flood Disaster Protection Act of 1973 made flood insurance mandatory for owners of properties in vulnerable areas who had mortgages from federally regulated lenders and provided additional incentives for communities to join the program. The National Flood Insurance Reform Act of 1994 strengthened the mandatory purchase requirements for owners of properties located in special flood hazard areas (SFHA) with mortgages from federally regulated lenders. Finally, the Bunning-Bereuter-Blumenauer Flood Insurance Reform Act of 2004 authorized grant programs to mitigate properties that experienced repetitive flooding losses. Owners of these repetitive loss properties who do not mitigate may face higher premiums. To participate in NFIP, communities agree to enforce regulations for land use and new construction in high-risk flood zones and to adopt and enforce state and community floodplain management regulations to reduce future flood damage. Currently, more than 20,000 communities participate in NFIP. NFIP has mapped flood risks across the country, assigning flood zone designations based on risk levels, and these designations are a factor in determining premium rates. NFIP offers two types of flood insurance premiums: subsidized and full risk. The National Flood Insurance Act of 1968 authorizes NFIP to offer subsidized premiums to owners of certain properties. 
These subsidized premium rates, which represent about 40 to 45 percent of the cost of covering the full risk of flood damage to the properties, apply to about 22 percent of all NFIP policies. To help reduce or eliminate the long-term risk of flood damage to buildings and other structures insured by NFIP, FEMA has used a variety of mitigation efforts, such as elevation, relocation, and demolition. Despite these efforts, the inventories of repetitive loss properties—generally, as defined by FEMA, those that have had two or more flood insurance claims payments of $1,000 or more over 10 years—and policies with subsidized premium rates have continued to grow. In response to the magnitude and severity of the losses from the 2005 hurricanes, Congress increased NFIP’s borrowing authority from the Treasury to $20.8 billion. We have previously identified four public policy goals for evaluating the federal role in providing natural catastrophe insurance: charging premium rates that fully reflect actual risks, limiting costs to taxpayers before and after a disaster, encouraging broad participation in natural catastrophe insurance, and encouraging private markets to provide natural catastrophe insurance. Taking action to achieve these goals would benefit both NFIP and the taxpayers who fund the program but would require trade-offs. I will discuss the key areas that need to be addressed, actions that can be taken to help achieve these goals, and the trade-offs that would be required. As I have noted, NFIP currently does not charge all program participants rates that reflect the full risk of flooding to their properties. First, the act requires FEMA to charge many policyholders less than full-risk rates to encourage program participation. While the percentage of subsidized properties was expected to decline as new construction replaced subsidized properties, today nearly one out of four NFIP policies is based on a subsidized rate. 
Second, FEMA may “grandfather” properties that are already in the program when new flood maps place them in higher-risk zones, allowing some property owners to pay premium rates that apply to the previous lower-risk zone. FEMA officials told us that they decided to allow grandfathering because of external pressure to reduce the effects of rate increases and because of considerations of equity, ease of administration, and the goal of promoting floodplain management. Similarly, FEMA recently introduced a new rating option called the Preferred Risk Policy (PRP) Eligibility Extension that in effect amounts to a temporary grandfathering of premium rates. While these policies typically would have to be converted to more expensive policies when they were renewed after a new flood map came into effect, FEMA has extended eligibility for these lower rates. Finally, we have also raised questions about whether NFIP’s full-risk rates reflect actual flood risks. Because many premium rates charged by NFIP do not reflect the full risk of loss, the program is less likely to be able to pay claims in years with catastrophic losses, as occurred in 2005, and may need to borrow from Treasury to pay claims in those years. Increasing premium rates to fully reflect the risk of loss—including the risk of catastrophic loss—would generally require reducing or eliminating subsidized and grandfathered rates and offers several advantages. Specifically, increasing rates could: result in premium rates that more fully reflected the actual risk of loss; decrease costs for taxpayers by reducing costs associated with postdisaster borrowing to pay claims; and encourage private market participation, because the rates would more closely approximate those that would be charged by private insurers. However, eliminating subsidized and grandfathered rates and increasing rates overall would increase costs to some homeowners, who might then cancel their flood policies or elect not to buy them at all. 
According to FEMA, subsidized premium rates are generally 40 to 45 percent of rates that would reflect the full risk of loss. For example, the projected average annual subsidized premium was $1,121 as of October 2010, discounted from the $2,500 to $2,800 that would be required to cover the full risk of loss. In a 2009 report, we also analyzed the possibility of creating a catastrophic loss fund within NFIP (one way to help pay for catastrophic losses). Our analysis found that in order to create a fund equal to 1 percent of NFIP’s total exposure by 2020, the average subsidized premium—typically charged for properties in one of the highest-risk zones—would need to increase from $840 to around $2,696, while the average full-risk premium would increase from around $358 to $1,149. Such steep increases could reduce participation, either because homeowners could no longer afford their policies or because they simply deemed them too costly, and could increase taxpayer costs for postdisaster assistance to property owners who no longer had flood insurance. However, a variety of actions could be taken to mitigate these disadvantages. For example, subsidized rates could be phased out over time or not transferred with the property when it is sold. Moreover, as we noted in our past work, targeted assistance could be offered to those most in need to help them pay increased NFIP premiums. This assistance could take several forms, including direct assistance through NFIP, tax credits, or grants. In addition, to the extent that those who might forego coverage were actually required to purchase it, additional actions could be taken to better ensure that they purchased policies. According to the RAND Corporation, in SFHAs, where property owners with loans from federally insured or regulated lenders are required to purchase flood insurance, as few as 50 percent of the properties had flood insurance in 2006. 
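The premium figures quoted above are internally consistent, which a quick arithmetic check confirms. All inputs below come directly from the testimony; nothing here is new data.

```python
# Quick check of the premium figures cited in the testimony.

subsidized_2010 = 1_121            # projected average subsidized premium
full_risk_2010 = (2_500, 2_800)    # quoted full-risk range for the same properties

# Subsidized rates should sit at roughly 40-45 percent of full-risk rates.
share_vs_low = subsidized_2010 / full_risk_2010[0]    # against $2,500
share_vs_high = subsidized_2010 / full_risk_2010[1]   # against $2,800
print(f"{share_vs_high:.0%} to {share_vs_low:.0%}")   # prints "40% to 45%"

# In GAO's 2009 catastrophic-fund analysis, both premium classes would
# rise by about the same factor to reach 1 percent of total exposure.
subsidized_factor = 2_696 / 840    # $840 -> $2,696
full_risk_factor = 1_149 / 358     # $358 -> $1,149
print(round(subsidized_factor, 1), round(full_risk_factor, 1))  # prints "3.2 3.2"
```

Both premium classes would thus roughly triple, which is the scale of increase behind the participation concerns discussed above.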
In order to reduce expenses to taxpayers that can result when NFIP borrows from Treasury, NFIP needs to be able to generate enough in premiums to pay its claims, even in years with catastrophic losses—a goal that is closely tied to that of eliminating subsidies and other reduced rates. Since the program’s inception, NFIP premiums have come close to covering claims in average loss years but not in years of catastrophic flooding, particularly 2005. Unlike private insurance companies, NFIP does not purchase reinsurance to cover catastrophic losses. As a result, NFIP has funded such losses after the fact by borrowing from Treasury. As we have seen, such borrowing exposes taxpayers to the risk of loss. NFIP still owes approximately $17.8 billion of the amount it borrowed from Treasury for losses incurred during the 2005 hurricane season. The high cost of servicing this debt means that it may never be repaid, could in fact increase, and will continue to affect the program’s solvency and be a burden to taxpayers. Another way to limit costs to taxpayers is to decrease the risk of losses by undertaking mitigation efforts that could reduce the extent of damage from flooding. FEMA has taken steps to help homeowners and communities mitigate properties by making improvements designed to reduce flood damage—for example, elevation, relocation, and demolition. As we have reported, from fiscal year 1997 through fiscal year 2007, nearly 30,000 properties were mitigated using FEMA funds. Increasing mitigation efforts could further reduce flood damage to properties and communities, helping to put NFIP on a firmer financial footing and reducing taxpayers’ exposure. FEMA has made particular efforts to address the issue of repetitive loss properties through mitigation. These properties account for just 1 percent of NFIP’s insured properties but are responsible for 25 to 30 percent of claims. 
Despite FEMA’s efforts, the number of repetitive loss properties increased from 76,202 in 1997 to 132,100 in March 2011, or by about 73 percent. FEMA also has some authority to raise premium rates for property owners who refuse mitigation offers in connection with the Severe Repetitive Loss Pilot Grant Program. In these situations, FEMA can initially increase premiums to up to 150 percent of their current amount and may raise them again (by up to the same amount) on properties that incur a claim of more than $1,500. However, FEMA cannot increase premiums on property owners who pay the full-risk rate but refuse a mitigation offer, and in no case can rate increases exceed the full-risk rate for the structure. In addition, FEMA is not allowed to discontinue coverage for those who refuse mitigation offers. As a result, FEMA is limited in its ability to compel owners of repetitive loss properties to undertake flood mitigation efforts. Mitigation offers significant advantages. As I have noted, mitigated properties are less likely to be at a high risk for flood damage, making it easier for NFIP to charge them full-risk rates that cover actual losses. Allowing NFIP to deny coverage to owners of repetitive loss properties who refused to undertake mitigation efforts could further reduce costs to the program and ultimately to taxpayers. One disadvantage of increased mitigation efforts is that they can impose up-front costs on homeowners and communities required to undertake them and could raise taxpayers’ costs if the federal government elected to provide additional mitigation assistance. Those costs could increase still further if property owners who were dropped from the program for refusing to mitigate later received federal postdisaster assistance. These trade-offs are not insignificant, although certain actions could be taken to reduce them. 
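The premium caps described above can be read as a simple escalation rule. The following is a minimal sketch under two assumptions the testimony does not spell out: that "150 percent of the current amount" means a 1.5x multiplier, and that each later qualifying claim over $1,500 permits one further 1.5x step.

```python
def escalated_premium(current, full_risk, later_claims=()):
    """Hedged sketch of the Severe Repetitive Loss Pilot premium caps: an
    initial increase of up to 1.5x for a refused mitigation offer, a further
    1.5x step per claim over $1,500, and a hard ceiling at the full-risk
    rate. The 1.5x mechanics are assumptions, not FEMA's published rules."""
    if current >= full_risk:
        # FEMA cannot raise premiums on owners already paying the full-risk rate.
        return current
    premium = min(current * 1.5, full_risk)      # initial increase
    for claim in later_claims:
        if claim > 1_500:                        # each qualifying later claim
            premium = min(premium * 1.5, full_risk)
    return premium
```

Under this sketch, an $800 premium against a $2,000 full-risk rate would step to $1,200, then to $1,800 after one qualifying claim, and would be capped at $2,000 thereafter, illustrating why the full-risk ceiling limits FEMA's leverage over owners who refuse to mitigate.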
For example, federal assistance such as low-cost loans, grants, or tax credits could be provided to help property owners pay for the up-front costs of mitigation efforts. Any reform efforts could explore ways to improve mitigation efforts to help ensure maximum effectiveness. For example, FEMA has three separate flood mitigation programs. Having multiple programs may not be the most cost-efficient and effective way to promote mitigation and may unnecessarily complicate mitigation efforts. Increasing participation in NFIP, and thus the size of the risk pool, would help ensure that losses from flood damage did not become the responsibility of the taxpayer. Participation rates have been estimated to be as low as 50 percent in SFHAs, where property owners with loans from federally insured and regulated lenders are required to purchase flood insurance, and participation in lower-risk areas is significantly lower. For example, participation rates outside of SFHAs have been found to be as low as 1 percent, leaving significant room to increase participation. Expanding participation in NFIP would have a number of advantages. As a growing number of participants shared the risks of flooding, premium rates could be lower than they would be with fewer participants. Currently, NFIP must take all applicants for flood insurance, unlike private insurers, and thus is limited in its ability to manage its risk exposure. To the extent that properties added to the program were in geographic areas where participation had historically been low and in low- and medium-risk areas, the increased diversity could lower rates as the overall risk to the program decreased. Further, increased program participation could reduce taxpayer costs by reducing the number of property owners who might draw on federally funded postdisaster assistance. However, efforts to expand participation in NFIP would have to be carefully implemented, for several reasons. 
First, as we have noted, NFIP cannot reject applicants on the basis of risk. As a result, if participation increased only in SFHAs, the program could see its concentration of high- risk properties grow significantly and face the prospect of more severe losses. Second, a similar scenario could emerge if mandatory purchase requirements were expanded and newly covered properties were in communities that did not participate in NFIP and thus did not meet standards—such as building codes—that could reduce flood losses. As a result, some of the newly enrolled properties might be eligible for subsidized premium rates or, because of restrictions on how much FEMA can charge in premiums, might not pay rates that covered the actual risk of flooding. Finally, historically FEMA has attempted to encourage participation by charging lower rates. However, doing so results in rates that do not fully reflect the risks of flooding and exposes taxpayers to increased risk. Moderating the challenges associated with expanding participation could take a variety of forms. Newly added properties could be required to pay full-risk rates, and low-income property owners could be offered some type of assistance to help them pay their premiums. Outreach efforts would need to include areas with low and moderate flood risks to help ensure that the risk pool remained diversified. For example, FEMA’s goals for NFIP include increasing penetration in low-risk flood zones, among homeowners without federally related mortgages in all zones, and in geographic areas with repetitive losses and low penetration rates. Currently, the private market provides only a limited amount of flood insurance coverage. In 2009, we reported that while aggregate information was not available on the precise size of the private flood insurance market, it was considered relatively small. The 2006 RAND study estimated that 180,000 to 260,000 insurance policies for both primary and gap coverage were in effect. 
We also reported that private flood insurance policies are generally purchased in conjunction with NFIP policies, with the NFIP policy covering the deductible on the private policy. Finally, we reported that NFIP premiums were generally less expensive than premiums for private flood insurance for similar coverage. For example, one insurer told us that for a specified amount of coverage for flood damage to a structure, an NFIP policy might be as low as $500, while a private policy might be as high as $900. Similar coverage for flood damage to contents might be $350 for an NFIP policy but around $600 for a private policy. Given the limited nature of private sector participation, encouraging private market participation could transfer some or all of the federal government’s risk exposure to the private markets and away from taxpayers. However, identifying ways to achieve that end has generally been elusive. In 2007, we evaluated the trade-offs of a mandatory all-perils policy that would include flood risks. Such a policy would, for example, alleviate uncertainty about the types of natural events homeowners insurance covered, such as the uncertainty that emerged following Hurricane Katrina. However, at the time the industry was generally opposed to an all-perils policy because of the large potential losses a mandatory policy would entail. Increased private market participation is also not without potential disadvantages. First, if the private markets provide coverage for only the lowest-risk properties currently in NFIP, the percentage of high-risk properties in the program would increase. This scenario could result in higher rates as the amount needed to cover the full risk of flooding increased. Without higher rates, however, the federal government would face further exposure to loss. Second, private insurers, who are able to charge according to risk, would likely charge higher rates than NFIP has been charging unless they received support from the federal government. 
As we have seen, such increases could create affordability concerns for low-income policyholders. Strategies to help mitigate these disadvantages could include requiring private market coverage for all property owners—not just those in high-risk areas—and, as described earlier, providing targeted assistance to help low-income property owners pay for their flood coverage. In addition, Congress could provide options to private insurers to help lower the cost of such coverage, including tax incentives or federal reinsurance. As Congress weighs NFIP’s various financial challenges in its efforts to reform the program, it must also consider a number of operational and management issues that may limit efforts to meet program goals and impair NFIP’s stability. For the past 35 years, we have highlighted challenges with NFIP and its administration and operations. For example, most recently we have identified a number of issues impairing the program’s effectiveness in areas that include the reasonableness of payments to Write-Your-Own (WYO) insurers, the adequacy of financial controls over the WYO program, and the adequacy of oversight of non-WYO contractors. In our ongoing work examining FEMA’s management of NFIP—covering areas including strategic planning, human capital planning, intra-agency collaboration, records management, acquisition management, and information technology—some similar issues are emerging. 
For example, preliminary results of our ongoing work show that FEMA: does not have a strategic plan specific to NFIP with goals, objectives, and performance measures for guiding and measuring the program; lacks a strategic human capital plan that addresses the critical competencies required for its workforce; does not have effective collaborative practices that would improve the functioning of program and support offices; lacks a centralized electronic document management system that would allow its various offices to easily access and store documents; has only recently implemented or is still developing efforts to improve some acquisition management functions, making it difficult to assess the effects of these actions; and does not have an effective system to manage flood insurance policy and claims data, despite having invested roughly 7 years and $40 million in a new system whose development has been halted. While FEMA has begun to acknowledge and address some of these management challenges, additional work remains to be done to address these issues. Our final report will include recommendations to address them. Congressional action is needed to increase the financial stability of NFIP and limit taxpayer exposure. GAO previously identified four public policy goals that can provide a framework for crafting or evaluating proposals to reform NFIP. First, any congressional reform effort should include measures for charging premium rates that accurately reflect the risk of loss, including catastrophic losses. Meeting this goal would require changing the law governing NFIP to reduce or eliminate subsidized rates, limits on annual rate increases, and grandfathered or other rates that did not fully reflect the risk of loss. In taking such a step, Congress may choose to provide assistance to certain property owners, and should consider providing appropriate authorization and funding of such incentives to ensure transparency. 
Second, because of the potentially high costs of individual and community mitigation efforts, which can reduce the frequency and extent of flood damage, Congress may need to provide funding or access to funds for such efforts and consider ways to improve the efficiency of existing mitigation programs. Moreover, if Congress wished to allow NFIP to deny coverage to owners of properties with repetitive losses who refused mitigation efforts, it would need to give FEMA appropriate authority. Third, Congress could encourage FEMA to continue to increase participation in the program by expanding targeted outreach efforts and limiting postdisaster assistance to those individuals who choose not to mitigate in moderate- and high-risk areas. And finally, to address the goal of encouraging private sector participation, Congress could encourage FEMA to explore private sector alternatives to providing flood insurance or for sharing insurance risks, provided such efforts do not increase taxpayers’ exposure. For its part, FEMA needs to take action to address a number of fundamental operational and managerial issues that also threaten the stability of NFIP and have contributed to its remaining on GAO’s high-risk list. These include improving its strategic planning, human capital planning, intra-agency collaboration, records management, acquisition management, and information technology. While FEMA continues to make some progress in some areas, fully addressing these issues is vital to its long-term operational efficiency and financial stability. Chairman Biggert, Ranking Member Gutierrez, and Members of the Subcommittee, this concludes my prepared statement. I would be pleased to respond to any of the questions you or other members of the Subcommittee may have at this time. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. 
For further information about this testimony, please contact Orice Williams Brown at (202) 512-8678 or [email protected]. This statement was prepared under the direction of Patrick Ward. Key contributors were Tania Calhoun, Emily Chalmers, Nima Patel Edwards, and Christopher Forys.

FEMA Flood Maps: Some Standards and Processes in Place to Promote Map Accuracy and Outreach, but Opportunities Exist to Address Implementation Challenges. GAO-11-17. Washington, D.C.: December 2, 2010.
National Flood Insurance Program: Continued Actions Needed to Address Financial and Operational Issues. GAO-10-1063T. Washington, D.C.: September 22, 2010.
National Flood Insurance Program: Continued Actions Needed to Address Financial and Operational Issues. GAO-10-631T. Washington, D.C.: April 21, 2010.
Financial Management: Improvements Needed in National Flood Insurance Program’s Financial Controls and Oversight. GAO-10-66. Washington, D.C.: December 22, 2009.
Flood Insurance: Opportunities Exist to Improve Oversight of the WYO Program. GAO-09-455. Washington, D.C.: August 21, 2009.
Information on Proposed Changes to the National Flood Insurance Program. GAO-09-420R. Washington, D.C.: February 27, 2009.
High-Risk Series: An Update. GAO-09-271. Washington, D.C.: January 2009.
Flood Insurance: Options for Addressing the Financial Impact of Subsidized Premium Rates on the National Flood Insurance Program. GAO-09-20. Washington, D.C.: November 14, 2008.
Flood Insurance: FEMA’s Rate-Setting Process Warrants Attention. GAO-09-12. Washington, D.C.: October 31, 2008.
National Flood Insurance Program: Financial Challenges Underscore Need for Improved Oversight of Mitigation Programs and Key Contracts. GAO-08-437. Washington, D.C.: June 16, 2008.
Natural Catastrophe Insurance: Analysis of a Proposed Combined Federal Flood and Wind Insurance Program. GAO-08-504. Washington, D.C.: April 25, 2008. 
National Flood Insurance Program: Greater Transparency and Oversight of Wind and Flood Damage Determinations Are Needed. GAO-08-28. Washington, D.C.: December 28, 2007.
Natural Disasters: Public Policy Options for Changing the Federal Role in Natural Catastrophe Insurance. GAO-08-7. Washington, D.C.: November 26, 2007.
Federal Emergency Management Agency: Ongoing Challenges Facing the National Flood Insurance Program. GAO-08-118T. Washington, D.C.: October 2, 2007.
National Flood Insurance Program: FEMA’s Management and Oversight of Payments for Insurance Company Services Should Be Improved. GAO-07-1078. Washington, D.C.: September 5, 2007.
National Flood Insurance Program: Preliminary Views on FEMA’s Ability to Ensure Accurate Payments on Hurricane-Damaged Properties. GAO-07-991T. Washington, D.C.: June 12, 2007.
Coastal Barrier Resources System: Status of Development That Has Occurred and Financial Assistance Provided by Federal Agencies. GAO-07-356. Washington, D.C.: March 19, 2007.
Budget Issues: FEMA Needs Adequate Data, Plans, and Systems to Effectively Manage Resources for Day-to-Day Operations. GAO-07-139. Washington, D.C.: January 19, 2007.
National Flood Insurance Program: New Processes Aided Hurricane Katrina Claims Handling, but FEMA’s Oversight Should Be Improved. GAO-07-169. Washington, D.C.: December 15, 2006.
GAO’s High-Risk Program. GAO-06-497T. Washington, D.C.: March 15, 2006.
Federal Emergency Management Agency: Challenges for the National Flood Insurance Program. GAO-06-335T. Washington, D.C.: January 25, 2006.
Federal Emergency Management Agency: Improvements Needed to Enhance Oversight and Management of the National Flood Insurance Program. GAO-06-119. Washington, D.C.: October 18, 2005.
Determining Performance and Accountability Challenges and High Risks. GAO-01-159SP. Washington, D.C.: November 2000.
Standards for Internal Control in the Federal Government. GAO/AIMD-00-21.3.1. Washington, D.C.: November 1999. 
Budget Issues: Budgeting for Federal Insurance Programs. GAO/T-AIMD-98-147. Washington, D.C.: April 23, 1998.
Budget Issues: Budgeting for Federal Insurance Programs. GAO/AIMD-97-16. Washington, D.C.: September 30, 1997.
National Flood Insurance Program: Major Changes Needed If It Is To Operate Without A Federal Subsidy. GAO/RCED-83-53. Washington, D.C.: January 3, 1983.
Formidable Administrative Problems Challenge Achieving National Flood Insurance Program Objectives. RED-76-94. Washington, D.C.: April 22, 1976.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

The National Flood Insurance Program (NFIP) has been on GAO's high-risk list since 2006, when the program had to borrow from the U.S. Treasury to cover losses from the 2005 hurricanes. The outstanding debt is $17.8 billion as of March 2011. This sizeable debt, plus operational and management challenges that GAO has identified at the Federal Emergency Management Agency (FEMA), which administers NFIP, have combined to keep the program on the high-risk list. NFIP's need to borrow to cover claims in years of catastrophic flooding has raised concerns about the program's long-term financial solvency. This testimony (1) discusses ways to place NFIP on a sounder financial footing in light of public policy goals for federal involvement in natural catastrophe insurance and (2) highlights operational and management challenges at FEMA that affect the program. In preparing this statement, GAO relied on its past work on NFIP and on its ongoing review of FEMA's management of NFIP, which focuses on its planning, policies, processes, and systems. 
The management review includes areas such as strategic and human capital planning, acquisition management, and intra-agency collaboration. Congressional action is needed to increase the financial stability of NFIP and limit taxpayer exposure. GAO previously identified four public policy goals that can provide a framework for crafting or evaluating proposals to reform NFIP. These goals are: (1) charging premium rates that fully reflect risks, (2) limiting costs to taxpayers before and after a disaster, (3) encouraging broad participation in the program, and (4) encouraging private markets to provide flood insurance. Successfully reforming NFIP would require trade-offs among these often competing goals. For example, currently nearly one in four policyholders does not pay full-risk rates, and many pay a lower subsidized or "grandfathered" rate. Reducing or eliminating less than full-risk rates would decrease costs to taxpayers but substantially increase costs for many policyholders, some of whom might leave the program, potentially increasing postdisaster federal assistance. However, these trade-offs could be mitigated by providing assistance only to those who needed it, limiting postdisaster assistance for flooding, and phasing in premium rates that fully reflected risks. Increasing mitigation efforts to reduce the probability and severity of flood damage would also reduce flood claims in the long term but would have significant up-front costs that might require federal assistance. One way to address this trade-off would be to better ensure that current mitigation programs were effective and efficient. Encouraging broad participation in the program could be achieved by expanding mandatory purchase requirements or increasing targeted outreach to help diversify the risk pool. Such efforts could help keep rates relatively low and reduce NFIP's exposure but would have to be effectively managed to help ensure that outreach efforts were broadly based. 
Encouraging private markets is the most difficult challenge because virtually no private market for flood insurance exists for most residential and commercial properties. FEMA's ongoing efforts to explore alternative structures may provide ideas that could be evaluated and considered. Several operational and management issues also limit FEMA's progress in addressing NFIP's challenges, and continued action by FEMA will be needed to help ensure the stability of the program. For example, in previous reports GAO has identified weaknesses in areas that include financial controls and oversight of private insurers and contractors, and has made many recommendations to address them. While FEMA has made progress in addressing some areas, preliminary findings from GAO's ongoing work indicate that these issues persist and need to be addressed as Congress works to more broadly reform NFIP. GAO has made numerous recommendations aimed at improving financial controls and oversight of private insurers and contractors, among others.
To assess IRS’ performance during the 2000 filing season, we interviewed IRS officials about ongoing efforts and future plans; analyzed IRS data on numerous activities, such as the extent to which taxpayers used the various signature and payment alternatives offered by IRS and the results of IRS efforts to identify and deny improper EIC claims; reviewed various IRS documents, including operating procedures and reports on program results and internal research efforts; reviewed data posted to IRS’ Web site and a private study of the site; reviewed reports that an IRS contractor prepared on customer satisfaction surveys; contacted private organizations that prepare tax returns and sponsor free tax return filing assistance; reviewed relevant congressional testimony; and reviewed the results of relevant audit work done by the Treasury Inspector General for Tax Administration (TIGTA). We did our work at IRS’ National Office; submission processing centers in Atlanta, GA, and Kansas City, MO; Customer Service Field Operations and Customer Service Operations Center in Atlanta; call sites in Atlanta, Dallas, TX, Jacksonville, FL, Kansas City, KS, and Nashville, TN; and a district office in Georgia. We selected those offices for a variety of reasons—we selected some because they had management responsibility for the programs being reviewed, some because of the nature of their workload, and some because of their proximity to our audit staff. We did our work from January through October 2000, in accordance with generally accepted government auditing standards. Representatives of several practitioner groups said, either to us or in congressional testimony, that the 2000 filing season went smoothly. The results of our review of IRS’ return and refund processing operations were consistent with that assessment. 
More specifically, our audit work showed the following:

- According to various indicators that IRS and we have traditionally used to assess IRS’ processing of returns and refunds during the filing season, IRS generally met or exceeded its 1999 processing performance levels. Among other things, refunds were generally issued within the time frames set by IRS, and the number of returns filed electronically increased by about 21 percent compared to 1999.
- Despite the added risks associated with the Year 2000, IRS’ tax processing systems performed slightly better during the 2000 filing season than during the 1999 filing season.
- Changes that IRS made in an attempt to reduce taxpayer errors and enhance the processing of returns and payments seemed to have had a positive effect by, among other things, reducing the number of errors made by taxpayers and tax return preparers in claiming the Child Tax Credit.

As shown in table 1, IRS, in 2000, generally met or exceeded its 1999 performance levels for five of seven performance indicators. For a sixth indicator, IRS did not have data for the 2000 filing season, as explained in note “d” of the table. Although the seventh indicator (notice accuracy) seemed to show a decline between 1999 and 2000, data for the 2 years cannot be compared because they were generated by different methodologies. (See app. I for a description of the seven indicators listed in table 1.) We focused our attention on two areas covered by IRS’ indicators—refund timeliness and the use of electronic filing. A major part of IRS’ processing effort is directed at issuing refunds. In that regard, about 70 percent of the individual income tax returns processed by IRS in 2000 involved refund claims. IRS’ goals for the 2000 filing season were to process (1) 85 percent of the refunds on paper returns within 40 days and (2) 99 percent of the refunds on electronic returns within 21 days. 
IRS exceeded both of those goals and, in doing so, exceeded its accomplishments in 1999. The improvement over 1999 was especially significant for paper returns, where IRS’ performance increased from 84.7 percent to 92.1 percent. According to cognizant IRS staff, a new methodology for computing the timeliness of refunds on paper returns has been proposed that would change the date from which IRS starts counting. IRS had been using the signature date on the return as the starting point. However, because IRS felt that the signature date did not always reflect when the return was mailed, the proposed methodology calls for using the date IRS receives the return as the starting point. We do not know how many of the refunds that exceeded the 40-day goal during the 2000 filing season would have met the goal if IRS had been using the proposed methodology. As of September 20, 2000, that proposal was still under consideration. IRS first began receiving individual income tax returns electronically in 1986. Electronic filing enables taxpayers to file more accurate returns and get their refunds faster and provides taxpayers with evidence that IRS has received their returns. Electronic filing also reduces the number of errors IRS has to correct because (1) checks are built into the electronic filing system that are designed to catch certain taxpayer errors, such as computational mistakes, in advance so that they can be corrected by the taxpayer before IRS takes possession of the return and (2) returns filed electronically bypass the more error-prone manual procedures that IRS uses to process paper returns. The number of individual income tax returns filed electronically has been on an upward trend since 1995, during which time the number of electronic returns increased by 200 percent (from 11.8 million in 1995 to 35.4 million in 2000). The 35.4 million electronic returns filed in 2000 represent an increase of about 21 percent compared to the number filed in 1999. 
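As a quick arithmetic cross-check (an illustrative sketch, not part of the original audit work), the electronic-filing growth figures quoted above are internally consistent; the 1999 volume shown below is implied by the cited 21-percent increase, not stated in the text:

```python
# Illustrative check of the electronic-filing growth figures quoted above.
# Counts are the rounded figures from the text, in millions of returns.
e_1995 = 11.8
e_2000 = 35.4

growth_since_1995 = (e_2000 - e_1995) / e_1995 * 100
print(round(growth_since_1995))  # 200 (percent), matching the text

# The "about 21 percent" year-over-year increase implies roughly
# 35.4 / 1.21 ≈ 29.3 million electronic returns in 1999 (implied,
# not stated in the text).
implied_1999 = e_2000 / 1.21
print(round(implied_1999, 1))  # 29.3
```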
There are currently three types of electronic filing: (1) traditional, whereby taxpayers transmit returns to IRS through a third party (such as a tax return preparer); (2) TeleFile, whereby taxpayers send returns directly to IRS over telephone lines using a Touch-Tone telephone; and (3) on-line, whereby taxpayers send returns to IRS through an on-line intermediary using a personal computer and commercial software. As shown in table 2, the use of traditional and on-line filing increased in 2000, while the use of TeleFile decreased for the second year in a row. Besides the belief that taxpayers are becoming more familiar and comfortable with computer technology and electronic filing, IRS officials cited several factors that contributed to the increase in electronic filing in 2000, including IRS’ expansion of initiatives to make electronic filing paperless, and thus more appealing to taxpayers and tax return preparers. Those factors, with an emphasis on the paperless initiatives, are discussed in appendix II. The performance of systems that IRS uses for processing tax returns was of particular interest in 2000 because of the massive changes that IRS made to help ensure that its systems were Year 2000 compliant. Completing these changes involved correcting millions of lines of application software and upgrading or replacing thousands of computer hardware and software products. Although it extensively tested these changes, IRS anticipated that unexpected system-related problems might occur during the 2000 filing season that could affect service to taxpayers. As we reported in June 2000, although there were some relatively minor problems, IRS performance data and comments from IRS officials and representatives of large tax practitioners indicated that IRS’ tax processing systems performed slightly better during the 2000 filing season than in 1999. 
At the time that we prepared our June report, IRS had identified four system-related problems that affected relatively few individual taxpayers. IRS officials said that they had (1) corrected the four problems, (2) taken or were taking action to mitigate the effects on taxpayers, and (3) notified individuals affected by two of the four problems. One reason IRS officials cited for not always notifying affected individuals was that IRS could not quickly generate correspondence to address the problem. In preparing this report, we followed up with IRS officials about the potential impact of systems modernization on IRS’ ability to more quickly notify taxpayers. They told us that systems modernization may enable IRS to more quickly develop customized taxpayer correspondence to address specific problems but may not reduce the time involved in identifying taxpayers affected by the problems because IRS still would need to develop unique software programs for that purpose. After we completed our audit work for the June 2000 report, IRS officials told us of a fifth system-related problem. That problem involved the freezing of 27,493 refunds because they were mistakenly identified as involving an injured spouse. According to IRS, it identified the problem on February 12, corrected it on February 20, and generated the refunds within a week. IRS made several changes for the 2000 filing season in an attempt to reduce the number of taxpayer errors and enhance its processing efforts. 
Of particular note, IRS

- simplified the Child Tax Credit worksheet for 2000, which contributed to a decrease of 37 percent in the number of Child Tax Credit errors made by taxpayers and tax return preparers;
- revised the criteria for filing Schedule D (Capital Gains and Losses), which likely contributed to a reduction in the number of Schedule Ds that IRS had to process in 2000;
- had taxpayers who were getting refunds mail their returns to a different address than taxpayers who were making payments so that IRS could better identify returns with remittances; and
- began checking the validity of secondary taxpayers’ SSNs, which resulted in about 36,000 notices to taxpayers about invalid SSNs.

These changes are discussed more fully in appendix III. IRS has various ways to help taxpayers meet their filing requirements. These ways include (1) call sites that assist taxpayers who telephone IRS with questions about the tax law, their accounts, or their refunds; (2) walk-in sites where, among other things, taxpayers can get answers to questions and help in preparing their returns; (3) IRS-sponsored volunteer organizations that provide return preparation assistance and other help to eligible taxpayers; (4) IRS’ Web site on the Internet, which, among other things, enables taxpayers to get answers to tax law questions via electronic mail (E-mail); and (5) various outlets through which taxpayers can receive tax forms and publications. Table 3 shows 1999 and 2000 performance data for various customer service-related indicators that IRS and we have used in the past to assess the filing season. IRS also has a new indicator for the quality of service provided by its walk-in sites. However, as discussed later in this report, IRS did not have data on that measure for the 2000 filing season at the time we completed our audit work. (See app. I for a description of the five indicators listed in table 3.) 
Concerning IRS’ various modes of assistance, we noted the following:

- Although taxpayers were better able to reach IRS over the telephone in 2000 compared to 1999, IRS’ performance in providing telephone service was still below the level achieved in 1998.
- IRS implemented measures for assessing the performance of its walk-in sites but still lacked some critical information, such as reliable data on customer satisfaction.
- IRS procedures provided for assessing the quality of returns prepared by volunteer sites. However, IRS had no measures for assessing the timeliness of service provided by the sites or taxpayer satisfaction with those services. Also, (1) IRS district offices were required to visit each volunteer assistance site but were not given specific guidance as to what to review during those visits and (2) late delivery of computer equipment and training materials hampered the ability of volunteer sites to effectively serve taxpayers.
- Data on IRS’ Web site showed increased use and improved performance in 2000 compared to 1999, but some information on the site was obsolete or inconsistent.
- IRS’ performance measures did not adequately reflect the timeliness with which IRS’ area distribution centers responded to taxpayers’ orders of forms and publications.

One of the most important services IRS provides all year, but especially during the filing season, is toll-free telephone assistance. Twenty-four hours a day, 7 days a week during the filing season, taxpayers can call IRS with questions about the tax law, their accounts, or their refunds. A key indicator of IRS’ performance in providing telephone service is the ability of taxpayers to reach IRS so that they can get their questions answered. IRS refers to that indicator as “level of service.” We reported last year that although IRS made several changes in an effort to improve its telephone service, its level of service in 1999 declined compared to 1998. 
Some of the decline was attributed to (1) IRS’ unrealistic assumptions about the implementation and impact of its changes and (2) other problems it had in managing staff training and scheduling and implementing new technology. As shown in table 4, IRS improved its level of service in 2000 by answering 28.2 million of the 45.7 million call attempts that taxpayers made from January 1 to April 29, 2000—a 62-percent level of service. However, that level of service was still considerably below the 72-percent level provided in 1998. Although the volume of incoming calls was similar for both 1998 and 2000, IRS answered about 4.7 million fewer calls in 2000 than in 1998. The ability of taxpayers to reach IRS so they can get their questions answered is one important measure of telephone service. Another important measure is the accuracy of those answers. IRS measures the accuracy of information provided by its telephone assistors by monitoring a sample of taxpayer calls and determining, for each of the monitored calls, whether the assistor responded accurately and followed correct procedures. The monitoring results for calls involving tax law questions showed an accuracy rate of 71.9 percent for the 2000 filing season—below IRS’ goal of 80 percent and, considering the confidence intervals surrounding the results of IRS’ sample, not statistically different from the 73.8-percent performance level achieved in 1999. We conducted a separate review of the key factors that affected IRS’ performance in providing toll-free telephone service during the 2000 filing season and expect to issue a separate report to the Subcommittee early in 2001. We are also preparing a report for the Subcommittee on various human capital issues associated with IRS’ toll-free telephone service, which we also expect to issue early in 2001. IRS’ walk-in sites answer tax law questions, distribute tax forms and publications, and help taxpayers prepare tax returns and resolve account issues. 
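The level-of-service figures quoted above can be recomputed from the rounded call counts in the text. This is an illustrative sketch only; the 1998 call volume is assumed equal to 2000's, as the text says the volumes were similar:

```python
# Illustrative recomputation of IRS level-of-service figures
# (rounded call counts in millions, taken from the text).
answered_2000 = 28.2
attempts_2000 = 45.7

level_2000 = answered_2000 / attempts_2000 * 100
print(round(level_2000))  # 62 (percent)

# About 4.7 million more calls were answered in 1998; call volume is
# assumed here to be the same as in 2000 (the text calls it "similar").
answered_1998 = answered_2000 + 4.7
level_1998 = answered_1998 / attempts_2000 * 100
print(round(level_1998))  # 72 (percent), matching the 1998 level cited
```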
IRS data show that its walk-in sites served about 5.8 million taxpayers between January 1 and April 22, 2000—a 5-percent decrease from the roughly 6.1 million taxpayers served during the same time period in 1999. In our report on the 1999 filing season, we pointed out that IRS had not made much progress in measuring the performance of walk-in sites. We recommended that IRS implement a performance measurement program and, as part of that program, require that quality reviews be done and data on the results of quality reviews and wait-time monitoring be reported to a central location for analysis. For the 2000 filing season, IRS (1) instituted a quality review program for assessing the accuracy of services provided by walk-in staff, the results of which are reported to a central location, and (2) began requiring centralized reporting of wait-time data, although certain factors affected the data’s usefulness. IRS also conducted a walk-in customer satisfaction survey during the 2000 filing season but, according to TIGTA, IRS had not established an adequate management process to ensure that the survey was conducted appropriately. For 2000, IRS’ National Office instituted quality reviews of its walk-in sites. A team of 32 reviewers, posing as taxpayers, was to visit walk-in sites and act out various scenarios that would require assistors to help the “taxpayers” prepare their returns or resolve an account problem. The reviewers were to complete a checksheet covering issues such as whether the assistor indicated a willingness to help by using an appropriate phrase such as “May I help you?” and whether the assistor provided a complete and accurate response, explaining any procedures and ordering the necessary forms and publications. The reviewers made 272 visits between late October 1999 and mid-January 2000 and another group of visits during the 2000 filing season. 
As described by IRS, results from the first group of visits “indicated significant opportunity to improve our quality results.” Specifically, the results showed an error rate (incorrect answers) of 50 percent and indicated that reviewers were denied service in 21 percent of the visits (e.g., reviewers were told to take a form or publication and figure out the answer themselves). Several recommendations for improving performance were set forth, including an intensive back-to-basics training program and increased managerial oversight. Results from the second group of visits were not available at the time we completed our audit work. The National Office established taxpayer wait-time goals of 30 minutes for return preparation and 15 minutes for all other services during the 1998, 1999, and 2000 filing seasons. In our reports on IRS’ 1998 and 1999 filing seasons, we reported that although IRS monitored walk-in sites’ timeliness, it did not require that monitoring results be reported to the National Office. During the 2000 filing season, IRS did require that its four regional offices submit monthly reports on timeliness to the National Office. However, three factors adversely affected the usefulness of the timeliness data. First, even though IRS required that timeliness data be reported to the National Office, it did not specify what percentage of the time sites were to meet the 15- and 30-minute wait-time goals. The Southeast Region established its own goal, which called for districts in that region to meet the wait-time goals 90 percent of the time. According to Southeast regional analysts we talked with during the filing season, most of the nine districts in that region were meeting or exceeding the 90-percent goal. The analysts said that some districts were experiencing problems meeting the wait-time goals because of an unanticipated increase in the number of taxpayers visiting the walk-in sites for return preparation. 
Second, wait times at most walk-in sites were computed manually, which made the results more prone to error. To enable more accurate tracking of wait times, IRS has installed an automated wait-time tracking system known as Q-Matic at some walk-in sites. During the 2000 filing season, 76 (or 18 percent) of IRS’ 417 walk-in sites had that system. As customers arrive at walk-in sites with the Q-Matic system, they are to obtain a numbered ticket from the Q-Matic ticket printer. The ticket reflects the estimated wait time for the service, and the system automatically “calls” the customer when it is his or her turn. The system records the time that a customer received a ticket and the time that an assistor started helping the customer. Non-Q-Matic sites used manual systems to record wait times. At some of those sites, a greeter or receptionist was to record on a taxpayer contact card the time that the taxpayer arrived, and an assistor was to record on the same card the time that he or she began assisting the taxpayer. Other non-Q-Matic sites relied on greeters or taxpayers to fill out a sign-in sheet. A third factor that affected the usefulness of wait-time data reported by the regional offices, according to a cognizant IRS official, was the use of different reporting formats by the regions. IRS conducted a customer satisfaction survey at all walk-in sites in January 2000 and every fifth week thereafter, which amounted to 1 week during each month of the filing season. Results of the walk-in surveys completed in January, February, and March, 2000, as summarized by IRS’ contractor responsible for analyzing survey results, showed that 91 percent of the respondents rated their overall satisfaction with the handling of their case at 6 or 7 on a 7-point scale and that the average overall satisfaction rating was 6.48. 
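The per-visit record described above (ticket time plus service-start time) is all that is needed to score a site against the 15- and 30-minute wait-time goals. The sketch below is purely illustrative; the visit data, service labels, and function names are hypothetical and are not drawn from IRS systems:

```python
# Hypothetical sketch of the wait-time computation described above:
# each visit records when the customer took a ticket and when an
# assistor began helping; visits are judged against a 30-minute goal
# for return preparation and a 15-minute goal for all other services.
from datetime import datetime

GOALS_MIN = {"return preparation": 30, "other": 15}  # wait-time goals

# (service, ticket time, service-start time) — made-up example data
visits = [
    ("return preparation", "09:00", "09:25"),
    ("other", "09:10", "09:20"),
    ("other", "09:15", "09:40"),
]

def wait_minutes(ticket, start):
    fmt = "%H:%M"
    delta = datetime.strptime(start, fmt) - datetime.strptime(ticket, fmt)
    return delta.seconds // 60

met = [wait_minutes(t, s) <= GOALS_MIN[svc] for svc, t, s in visits]
share_met = sum(met) / len(met) * 100
# A region could compare this share to a target such as the Southeast
# Region's 90-percent goal mentioned in the text.
print(f"{share_met:.0f}% of visits met the wait-time goal")  # 67%
```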
The survey results also showed that

- the three primary reasons why respondents visited a walk-in site were to get help preparing their returns (28 percent), ask a tax question (22 percent), and pick up a form or publication (21 percent);
- 72 percent of the respondents waited less than 15 minutes to be served, 21 percent waited between 15 and 30 minutes, and 7 percent waited more than 30 minutes; and
- taxpayers whose wait time was less than 15 minutes gave higher satisfaction ratings than did customers who waited longer.

Although the survey results showed that respondents were generally satisfied with IRS’ assistance, only about 3 percent of the taxpayers who visited walk-in sites between January and March, 2000, responded to the survey. In that regard, TIGTA, in a May 2000 report, concluded that “while the Walk-In Customer Satisfaction Survey may be an effective marketing tool to gauge taxpayers’ satisfaction with the services provided by the IRS Walk-In offices, the Survey results are not statistically valid.” Specifically, TIGTA found the following:

- Survey forms were not offered to all taxpayers during the survey weeks, as was required. During visits to selected walk-in offices, TIGTA officials posing as taxpayers were offered a survey form only 8 percent of the time.
- There were no controls to prevent tampering with the survey responses, and IRS had not established controls to ensure that all walk-in offices participated in the survey.
- Three different IRS functions provided oversight for the survey, but none of them appeared to be accountable for the survey results.

TIGTA recommended that IRS improve the process for overseeing the walk-in customer satisfaction survey to ensure that the survey is properly administered and that the results are accurate, valid, and reliable. 
In a May 22, 2000, memorandum to TIGTA, the Commissioner of Internal Revenue said that IRS would (1) stress the importance of providing the survey to all taxpayers who are helped; (2) determine the level of employee understanding of the survey process and provide additional training that reinforces the importance of surveying all customers and the need to adhere to instructions in the Internal Revenue Manual for survey procedures; and (3) issue program guidance to field offices that provides direction to management on establishing controls to protect survey forms, the integrity of the data, and the survey results. In addition to the help that is available to taxpayers over the telephone and at walk-in sites, taxpayers can receive assistance from various IRS-sponsored volunteer sites. Two major volunteer assistance efforts are the Volunteer Income Tax Assistance (VITA) and the Tax Counseling for the Elderly (TCE) programs—both of which provide free tax return preparation assistance. VITA offers free tax help to persons with low to limited income, persons who are non-English-speaking, elderly taxpayers, and persons with disabilities. TCE offers free tax help to elderly taxpayers. According to IRS data as of June 30, 2000, about 3.3 million taxpayers were assisted at about 17,600 VITA and TCE sites. Considering the significant role played by the VITA and TCE programs in helping taxpayers meet their filing responsibilities, it is important that IRS take reasonable steps to ensure that the assistance provided by those programs is timely and accurate. According to the Internal Revenue Manual, “ . . . critical to management of volunteers are: proper training; communication of expectations; review, evaluation and feedback of work performed; and recognition of performance.” In that regard, we noted the following: IRS procedures provided for assessing the quality of returns prepared by volunteer sites. 
In commenting on a draft of this report, IRS said that the accuracy rate for VITA sites was 97.8 percent and the rate for TCE sites was 95.4 percent. IRS did not have measures for assessing the timeliness of service provided by the volunteer sites or taxpayer satisfaction with those services. Although district office representatives were to make monitoring visits to volunteer sites within their jurisdiction during the filing season, they were not given specific guidance as to what to examine during these visits. Much of the computer hardware and software and training materials was not delivered in a timely fashion to the VITA and TCE sites. According to IRS officials, these problems affected the sites’ ability to serve taxpayers effectively. For example, three of IRS’ four regions reported that the untimely receipt of equipment hampered their sites’ electronic filing activities. One region stated that the “equipment did not arrive timely, and in many cases, so late that it was useless to field for this past filing season. A majority of the equipment did not arrive in working condition as some were completely missing operating systems, or had the wrong adapters for keyboard plug-in.” At the time we completed our audit work, IRS had not responded to our questions about the reasons for these problems and what was being done to prevent their recurrence in 2001. Among other things, IRS’ Web site offers taxpayers hundreds of tax forms and publications for downloading, current information on tax issues, details about electronic filing, and the opportunity to submit tax law and procedural questions via E-mail. Various data generated by IRS and others indicate that IRS’ Web site was used more and performed better in 2000 than in 1999. Our review of information on the Web site indicated that the site’s usefulness was somewhat impaired by the presence of obsolete or inconsistent data. 
Taxpayers used IRS’ Web site significantly more in fiscal year 2000 than in fiscal year 1999. As of June, (1) the number of “hits” had increased 31 percent—1.3 billion in 2000 compared to about 983 million in 1999—and (2) the number of downloaded files had increased 62 percent—about 115 million in 2000 compared to about 71 million in 1999. Also, the number of E-mail questions received during the 2000 filing season increased by 41 percent—218,405 compared to 155,421 for the 1999 filing season. Keynote—an independent Web site rater and recognized authority on Internet performance—reviewed IRS’ Web site during the week of March 27, 2000, and reported that the site was coping well with demands of the filing season and performing well overall. The independent rater found that the home page was delivered in 2.7 seconds on average, and the site had an availability rate of 96.8 percent from March 27 through March 31 (Monday through Friday). The average delivery time for April 1 and April 2 (Saturday and Sunday) was 2.62 seconds, with an availability rate of 98.9 percent. A similar review done in 1999 for Monday, April 12, through Friday, April 16, showed delivery times that ranged from 5.39 seconds to 14.45 seconds and availability rates that ranged from 93.6 percent to 97.4 percent. Although these results indicate improved performance in 2000, it is unclear how much, if any, of the apparent improvement is due to different measurement periods. Unlike in 2000, the site’s performance in 1999 was measured during the last week of the filing season when demands on the Web site might be heavier. One important feature of IRS’ Web site is the ability of taxpayers to ask tax law and procedural questions of IRS via E-mail. At a March 2000 Oversight Subcommittee hearing, witnesses representing two practitioner organizations spoke positively about this feature. 
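The year-over-year growth figures above follow from a simple percent-change calculation. The sketch below is illustrative (the helper `pct_change` is not from the report); because the report rounds total hits to 1.3 billion, the computed value for hits differs slightly from the report's 31 percent, which was presumably based on unrounded counts.

```python
def pct_change(new: int, old: int) -> int:
    """Percent increase from old to new, rounded to a whole percent."""
    return round((new - old) / old * 100)

# Fiscal year-to-date totals as of June, as rounded in the report.
print(pct_change(1_300_000_000, 983_000_000))  # hits: 32 (report cites 31, from unrounded counts)
print(pct_change(115_000_000, 71_000_000))     # downloaded files: 62 percent
print(pct_change(218_405, 155_421))            # filing-season E-mail questions: 41 percent
```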
IRS data indicate that IRS was more timely in responding to taxpayers’ questions during the 2000 filing season than it was in 1999. For the 2000 filing season, IRS took an average of 1.3 business days to respond to taxpayers’ E-mail questions compared to an average of 2.7 business days during the 1999 filing season. During the 2000 filing season, IRS responded 90.5 percent of the time within its goal of 2 business days compared to 69.2 percent of the time during the 1999 filing season. Although IRS improved its overall timeliness, there were certain types of questions (generally those involving more complex topics) for which IRS did not meet the 2-business-day goal. For example, questions dealing with trusts averaged 5.8 business days, questions about aliens and U.S. citizens living abroad averaged 4.6 business days, and estate and gift tax questions averaged 3.2 business days. Accuracy is another important measure of IRS’ performance in responding to E-mail questions. IRS data on the results of its quality reviews of responses to E-mail questions during the 2000 filing season showed that 76 percent of the 1,321 responses reviewed between January and April were correct (IRS’ accuracy goal for all of fiscal year 2000 was 79 percent). In September 2000, TIGTA reported on the results of a test it conducted between March and June, 2000. For that test, TIGTA E-mailed 50 questions relating to issues affecting small businesses and/or self-employed individuals to IRS and to 3 commercial Internet Web sites that offer free tax advice. According to TIGTA, IRS responded correctly to 54 percent of the questions while the commercial Web sites provided correct answers 47 percent of the time. 
In another report, TIGTA noted that although IRS had statistically valid nationwide data on the accuracy of responses to E-mail questions, its sampling plan was insufficient to produce statistically valid data for assessing the performance of each of the 10 sites that respond to E-mail questions. TIGTA recommended that IRS design a sampling plan to provide accuracy rates at the call-site level as well as the national level. All E-mail customers are to be given the opportunity to respond to a customer satisfaction survey. According to IRS data, of the about 4,300 taxpayers who responded to the survey between January 1 and April 17, 2000, (1) 94 percent said that they were satisfied with the time it took to get a response; (2) 78 percent said that the response they received answered their question; and (3) 93 percent said that they would use the E-mail system in the future. We found several instances of data on IRS’ Web site that were either obsolete or inconsistent. For example:

- In the “IRS Newsstand” part of the site, there is a section entitled “Tax Calendar for Small Businesses.” When we looked at that section in June 2000, we found a calendar for 1999 but no calendar for 2000.
- In the “Around the Nation” section of the site, there are one or more pages for all but one state. When we checked in June 2000, we found that the pages for four states still had data posted showing events that took place in 1999.
- There were also some inconsistencies between the dates of Problem Solving Days posted on the Problem Solving Days part of the Web site and the dates posted on some individual state pages. For example, as of June 22, 2000, one state’s page showed no Problem Solving Days in that state after March 2000, but the Problem Solving Days page showed that days were scheduled in that state for June and July. Such inconsistencies could cause users of the Web site to get incorrect information depending on which page they accessed. 
Some state pages included more information than others. Although this kind of inconsistency is not a problem in and of itself, some of the inconsistency involved basic information that we thought should be a part of every state page. For example, of 56 state pages (1 state had no page, 4 states had more than 1 page, and the District of Columbia had 1 page), 40 had information on walk-in site locations, while 16 had no such information. As a result, taxpayers in some states were able to get information on IRS walk-in locations from the Web site while taxpayers in other states were not. An official from IRS’ Electronic Information Services Office told us that there was no one person responsible for ensuring that data on the Web site were current and consistent. Each office that placed data on the site was responsible for ensuring that the data were accurate and up-to-date. Thus, for example, there was no one responsible for ensuring that information entered on a particular state page by one of IRS’ district offices was consistent with information on the Problem Solving Day part of the Web site that had been entered by another office. IRS provides various means through which taxpayers can obtain copies of forms and publications to help them prepare their tax returns. We have already discussed two of those means—IRS’ walk-in sites and Web site. Table 5 identifies other channels through which IRS distributes forms and publications and, for each of those channels, shows comparative data for the 1999 and 2000 filing seasons. Of the various distribution channels listed in table 5, we focused most of our audit work on the performance of the area distribution centers in filling taxpayers’ orders. Taxpayers can order forms and publications from the distribution centers either by mail or by calling a toll-free telephone number. 
In the tax packages mailed to taxpayers before the filing season, IRS tells taxpayers how to order forms and publications from the area distribution centers and that they should expect to receive the documents within 10 days after IRS receives their order. According to IRS data for January 1 through April 30, 2000, the three area distribution centers filled about 5 million orders for forms and publications, filled 98 percent of the orders accurately, and took no longer than 2.6 days on average to fill those orders. However, the latter measure does not provide a reliable basis for judging how well IRS met its 10-day goal because the measure (1) does not track order-filling time from when IRS received the order, only from when an order was assigned a “picking” number for processing, and (2) reflects turnaround time only for inventory on hand and not for forms or publications that were out of stock and, thus, had to be backordered. Regarding backorders, there were several significant stockouts during the 2000 filing season, involving such documents as Form W-2 (Wage and Tax Statement) and Publication 596 (Earned Income Credit). IRS data for January through April, 2000, also indicated that taxpayers had an easier time accessing the toll-free forms-ordering telephone line than they did accessing the three customer service-related telephone lines previously discussed. The data show that of 5.3 million call attempts to the forms-ordering line, 4.0 million were answered—a 75-percent level of service. The 5.3 million call attempts represented a significant decrease from the 7.0 million call attempts during the same 4 months in 1999. The decrease in call attempts, like the decrease in orders filled shown in table 5, could indicate that more taxpayers are using other sources, such as IRS’ Web site, to obtain needed forms and publications. 
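The 75-percent level of service cited above is computed the way the report's performance-indicator descriptions define it: calls answered divided by total call attempts (the sum of answered calls, calls abandoned before receiving assistance, and busy signals). A minimal sketch follows; the split of the 1.3 million unanswered forms-line calls between abandoned and busy is illustrative, since the report gives only the totals.

```python
def level_of_service(answered: int, abandoned: int, busy: int) -> float:
    """Level of service: calls answered as a percentage of total call
    attempts. Per the report's indicator description, answered calls
    include voice-mail calls IRS later returned, and total attempts are
    the sum of answered, abandoned, and busy-signal calls."""
    return answered / (answered + abandoned + busy) * 100

# 4.0 million of 5.3 million forms-ordering call attempts were answered;
# the abandoned/busy breakdown below is assumed for illustration.
print(round(level_of_service(4_000_000, 900_000, 400_000)))  # 75
```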
On a related matter, a recent IRS study found that when taxpayers called the toll-free forms-ordering telephone number, they were not told how long it could take to receive the form or publication until after the order was placed. The study concluded that this procedure could cause wasted IRS effort and increased taxpayer burden if taxpayers, after placing their order on the telephone and being told the delivery time, decided to go to a walk-in site to get the material rather than wait for the material to be sent by mail. The study recommended that IRS rearrange its forms telephone menu and advise its assistors to tell taxpayers at the beginning of the call how long it generally takes to receive IRS forms ordered over the telephone. Doing so would allow taxpayers to terminate the call immediately if the wait time is unacceptable to meet their needs. When we asked whether they had implemented this recommendation, cognizant IRS officials said that they had not. They explained that providing the information at the beginning of the call could be very awkward because assistors do not know (1) what the call is about and (2) what the taxpayer is ordering. The officials stated that, depending on the product and time of the year, the order time may change (e.g., due to products not being available or being on backorder). However, between April 1 and 15, IRS officials did institute two automated messages on the forms-ordering line that were to come on when a caller was put on hold because all of the assistors were occupied. The messages stated that it could take up to 10 days for the caller to receive his or her order, and that the caller may want to ask the assistor for alternate ways of obtaining forms. However, these messages were operational only when the forms operators were on duty, which was 7 days a week, 12 hours a day. If a taxpayer called the forms line during the other 12 hours, the call rolled over to another telephone line, which did not have these messages. 
During the past several filing seasons, IRS has undertaken several efforts aimed at reducing noncompliance with the EIC eligibility requirements. Generally speaking, those efforts involved (1) using IRS’ math error authority to deny EIC claims that were not accompanied by valid SSNs and (2) conducting in-depth reviews of EIC claims that met certain criteria. In 2000, IRS continued these efforts and began a new effort directed at tax return preparers. Although IRS identified and stopped hundreds of millions of dollars in erroneous EIC claims in 2000, a recent IRS study indicated that IRS might still be paying out billions of dollars in erroneous EIC claims. However, because that study involved returns filed in 1998, it predated many of IRS’ more recent EIC compliance efforts. IRS has studies under way and planned that should help determine the impact of those more recent efforts on noncompliance. As IRS processes individual tax returns, it looks for computational errors made by taxpayers or their representatives in preparing the returns. When IRS finds such errors, it can automatically adjust the return through the use of math error authority. In 1996, Congress first authorized IRS to treat invalid SSNs as math errors, similar to the way that it had historically handled computational mistakes. IRS now has the authority to (1) automatically disallow, through its math error program, any deductions and credits, such as the EIC, associated with an invalid SSN and (2) make appropriate adjustments to any refund that the taxpayer might be claiming. According to IRS data as of June 30, 2000, IRS had denied about $321 million in erroneous EIC claims through its math error authority. Although significant, this amount represents a decrease from the $410 million stopped as of the same point in time in 1999. 
In that regard, as shown in table 6, although the number of EIC recipients in 2000 was about the same as in 1999, the number of EIC-related math errors involving SSNs and the number of other EIC-related math errors both declined by more than 20 percent. These declines would seem to indicate that IRS’ efforts have caused taxpayers and practitioners to be more careful in preparing EIC claims. Other types of EIC noncompliance are not as easy to identify as math errors. These types can be detected only through an audit. In 2000, IRS continued to target for in-depth review certain types of EIC claims, such as those involving the use of a child’s SSN on multiple returns for the same year, that IRS had identified as the main sources of EIC noncompliance. Taxpayers whose returns were identified for inclusion in one of these programs were to be audited to determine if their EIC claims were valid. During the first 11 months of fiscal year 2000, according to IRS, it closed about 218,000 of those audits and identified about $336 million in erroneous claims. For the 2000 filing season, IRS implemented an integrated EIC education and compliance effort directed at tax return preparers. IRS decided to implement this effort, known as the EIC Preparer Outreach Program, because IRS data indicated that 62 percent of the returns with EIC claims were prepared by paid preparers. This program focused on preparers who generated at least 100 tax returns claiming the EIC because, according to IRS, that universe of preparers accounted for 75 percent of the EIC tax returns done by paid preparers. Preparers were divided into five groups, with each group getting a different type of visit from IRS, ranging from education to criminal investigation. At the time we completed our audit work, not enough information was available on the results of the program to assess its overall effectiveness. Additional information on the EIC Preparer Outreach Program is presented in appendix IV. 
The previously discussed EIC-related efforts are part of a 5-year initiative for which Congress has appropriated about $140 million a year since fiscal 1998. That initiative was begun after IRS, in April 1997, reported the results of its tax year 1994 EIC compliance study. The study showed that of the $17.2 billion in EIC claimed during the study period, 26 percent, or about $4.4 billion, was overclaimed. In September 2000, IRS published the results of another EIC compliance study involving tax year 1997 returns. That study showed that of the estimated $30.3 billion in EIC claims made by taxpayers who filed returns in 1998 for tax year 1997, an estimated $9.3 billion (30.6 percent) was overclaimed. After deducting about $1.5 billion in overclaims that IRS estimated it would recover as a result of its enforcement programs, such as audits of tax returns and corrections of math errors, IRS estimated that it paid out about $7.8 billion in overclaims (25.6 percent of the total amount of EIC claimed). According to IRS, these results are not comparable to the results of the tax year 1994 study because of (1) various legislative changes since 1994 that affected eligibility for the credit, the credit amounts, and IRS’ administration of the credit and (2) methodological changes to the study design. Because IRS had not yet fully implemented many of the efforts that it undertook as part of the EIC compliance initiative, the results of the tax year 1997 study do not reflect the full impact of those efforts. In that regard, IRS is doing a study of tax year 1999 returns and plans to study tax year 2001 returns. Going into the 2000 filing season, we had two predominant questions: how would IRS’ tax processing systems function given the challenges associated with the Year 2000, and would IRS be able to improve its toll- free telephone service in light of the performance problems experienced in 1999. 
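The tax year 1997 EIC study percentages above can be reproduced from the report's dollar totals. The sketch below uses the rounded published figures (in billions), so the computed percentages differ slightly from the report's 30.6 and 25.6 percent, which were based on unrounded estimates.

```python
# Rounded totals from the tax year 1997 EIC compliance study, in billions.
claimed = 30.3      # estimated EIC claimed on tax year 1997 returns
overclaimed = 9.3   # estimated overclaims
recovered = 1.5     # overclaims IRS expected to recover through enforcement

overclaim_rate = overclaimed / claimed * 100   # ~30.7% (report: 30.6%)
paid_out = overclaimed - recovered             # ~$7.8 billion
paid_out_rate = paid_out / claimed * 100       # ~25.7% (report: 25.6%)
print(f"overclaimed: ${overclaimed}B ({overclaim_rate:.1f}%)")
print(f"paid out after enforcement: ${paid_out:.1f}B ({paid_out_rate:.1f}%)")
```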
Except for a few relatively minor glitches, which were not unexpected given the enormity of IRS’ processing task, the processing systems worked well. On the other hand, although taxpayers were better able to reach IRS over the telephone compared to 1999, IRS’ performance was still well below the level achieved in 1998. Our forthcoming reports on IRS’ toll-free telephone service will contain recommendations directed at helping IRS improve its performance. In addition to telephone service, IRS provides other forms of assistance that are used by tens of millions of taxpayers. While our review identified several positive aspects with respect to IRS’ monitoring of those assistance efforts (such as development of a quality review program for walk-in sites and various positive performance indicators related to IRS’ Web site), we also identified several opportunities for improvement. In some respects, such as with the volunteer assistance programs and the assistance provided by IRS’ walk-in sites and area distribution centers, the opportunities centered around performance measures. In those areas, unlike the situation with respect to IRS’ telephone service, it was not easy to assess IRS’ performance because either IRS did not have good measures or there were problems with the data behind the measures. Management needs good measures backed by reliable data if it is to draw meaningful conclusions about its performance and make sound decisions about any need for change. Other improvement opportunities we identified centered around management oversight—the kind of oversight that would enhance the level of service provided by better ensuring that (1) training materials and computer equipment were delivered to the volunteer assistance sites on time and in working condition and (2) data being entered on the Web site by various offices within IRS are current and consistent. 
We recommend that the Commissioner of Internal Revenue direct the appropriate officials to do the following:

- Enhance the usefulness of walk-in site wait-time data by providing a standard format for field offices to use in reporting that information to the National Office and specifying the percentage of time that walk-in sites are to meet established wait-time goals.
- In collaboration with IRS’ partners in providing volunteer assistance, develop performance measures for volunteer assistance sites that can be used to ensure that taxpayers are receiving an adequate level of service.
- Identify the underlying causes for (1) untimely delivery of an adequate supply of materials to volunteer assistance sites and (2) inadequate district review of site operations, and take action to address those underlying causes.
- To better ensure that area distribution centers provide timely service in filling orders for forms and publications, revise the order-filling timeliness measure so that (1) time is tracked from the day the order is received and (2) tracking does not end until the entire order is filled, even if backorders are involved.
- To better ensure that IRS’ Web site contains accurate and useful information, (1) assign clear responsibility in a central location for identifying and correcting outdated and inconsistent data and (2) develop minimum requirements for information to be included on the state pages in the “Around the Nation” section of the site. Regarding the latter, consider including information on the location of walk-in sites and their hours of operation.

We requested comments on a draft of this report from IRS. We obtained written comments in a December 8, 2000, letter from the Commissioner of Internal Revenue (see app. V). 
In his letter, the Commissioner said that (1) our report provided a fair and balanced assessment of IRS’ efforts to deliver a filing season that was relatively error-free while providing taxpayers with top quality service and (2) IRS would make every effort to resolve the issues noted in our report. While agreeing generally with our recommendations, the Commissioner disagreed with parts of two recommendations (although, in one case, IRS’ plans are consistent with our recommendation) and provided additional perspective that led to a rewording of one recommendation. The Commissioner also expressed some concern about one aspect of our assessment of IRS’ telephone service. The Commissioner agreed that IRS should provide a standard format for field offices to use in reporting wait-time data for walk-in sites. However, he did not agree that IRS should specify the percentage of time that walk- in sites are to meet established wait-time goals because doing so might pressure IRS staff to serve taxpayers too quickly and, thus, negatively affect service quality. We believe that just measuring average wait time can mask a circumstance in which many taxpayers are waiting more than the length of time specified in IRS’ goal (i.e., 30 minutes for return preparation and 15 minutes for all other services) even though the average wait time is below IRS’ goal. Concern about staff working too fast and thus providing poor quality service should be offset by the influence of other measures (i.e., quality and customer satisfaction) on employee behavior. The Commissioner agreed that goals and measures are needed for volunteer sites and noted that one aspect of their performance—quality— is being measured. We revised this report to make that clear. The Commissioner said that IRS will discuss with its largest partner in providing volunteer assistance the possibility of developing a timeliness measure. We believe that any such discussion should also include a measure of customer satisfaction. 
The Commissioner cautioned that IRS cannot require that its partner adopt any measure. We recognize that collaboration is required and revised our recommendation accordingly. The Commissioner agreed with the need to improve the process of ordering and delivering supplies and materials to volunteer assistance sites and said that actions have been taken to address that issue. The Commissioner did not comment on our recommendation that IRS identify the underlying causes of inadequate district review of site operations and take necessary corrective action. The Commissioner agreed that area distribution centers should track orders for forms and publications from the day an order is received, but he did not agree that tracking should continue until all backorders associated with an order have been shipped. However, the tracking plans described by the Commissioner include plans to measure overall elapsed time to fill an order, including associated backorders. That is fully consistent with our recommendation. The Commissioner agreed with our recommendation regarding IRS’ Web site and said that steps have already been taken to assign clear responsibility for ensuring accurate, useful, and timely information. The Commissioner also commented on our discussion of IRS’ performance in providing telephone service. While agreeing that IRS can improve its delivery of telephone service, the Commissioner did not believe that we should compare IRS’ performance to 1998 because IRS had significantly changed its telephone service operating environment after 1998. We agree, and have acknowledged in this report, that there were major changes after 1998, but we do not agree that those changes make it inappropriate to compare IRS’ performance in 1998 to its performance in 1999 and 2000. To the contrary, we believe that such a comparison is essential. The changes made after 1998 were intended to improve IRS’ telephone service. 
The only way to tell if service improved is to compare performance levels after the change (1999 and 2000) with levels before the change (1998). We are sending copies of this report to Senator William V. Roth, Jr., Chairman, and Senator Daniel P. Moynihan, Ranking Minority Member, Senate Committee on Finance; Representative Bill Archer, Chairman, and Representative Charles B. Rangel, Ranking Minority Member, House Committee on Ways and Means; and Representative William J. Coyne, Ranking Minority Member of this Subcommittee. We are also sending copies to the Honorable Lawrence H. Summers, Secretary of the Treasury; the Honorable Charles O. Rossotti, Commissioner of Internal Revenue; the Honorable Jacob J. Lew, Director, Office of Management and Budget; and other interested parties. We will make copies available to others on request. This report was prepared under the direction of David J. Attianese, Assistant Director. Other major contributors are acknowledged in appendix VI. If you have any questions about this report, contact me or Mr. Attianese on (202) 512-9110. This appendix contains descriptions of the various performance indicators listed in tables 1 and 3. The percentage of individual income tax refunds that are free of any Internal Revenue Service (IRS)-caused errors in the name and address field or in the refund amount. The percentage is based on a sample of individual income tax returns filed on paper. The percentage of other-than-full-paid, individual paper returns that Code and Edit staff process accurately. Other-than-full-paid returns involve either a refund or an unpaid liability and account for most of the paper returns processed. The percentage of other-than-full-paid, individual paper returns that are processed without transcription errors. 
The percentage of orders that are processed accurately, determined by randomly checking selected taxpayer orders, monitoring telephone calls from taxpayers, and reviewing the transcription of written requests from taxpayers. Determined through surveys of a random sample of taxpayers who call IRS’ toll-free telephone numbers and choose to participate. Determined through surveys of a sample of taxpayers who visit IRS’ walk-in sites and choose to participate. Calculated by dividing the number of calls answered by the total call attempts. Answered calls include calls to a voice messaging system that were subsequently returned by IRS. Total call attempts is the sum of calls answered, calls abandoned by the caller before receiving assistance, and calls that receive a busy signal. The percentage of notices reviewed that are correct. The notice accuracy indicator is based on a sample of returns processing notices to be sent to individual and business taxpayers. Among other things, IRS uses returns processing notices to advise taxpayers of missing schedules or forms, missing Social Security numbers (SSN), or refunds being delayed. IRS reviewers compare the printed notice to various data, including information in the taxpayer’s account and on the taxpayer’s tax return. IRS told us that the results for individual and business taxpayers could not be separated. The number of electronically filed individual income tax returns as a percentage of all individual income tax returns filed. The percentage of calls answered accurately, determined by monitoring a sample of telephone calls. The percentage of refunds on electronically filed returns that are processed within 21 days. The percentage is based on a sample of electronically filed returns, and the days are counted from the date the return was received to the date the refund was issued. The percentage of refunds on paper individual income tax returns that are processed within 40 days. 
The percentage is based on a sample of paper returns, and the days are counted from the signature date on the return to 1 day after the issuance of the refund. In discussing the increase in electronic filing in 2000 compared to 1999, IRS officials cited several contributing factors, in addition to the belief that taxpayers are becoming more familiar and comfortable with computer technology and electronic filing:

- The number of electronic return originators (ERO) increased from about 90,000 in 1999 to about 108,000 in 2000. Also, some EROs offered free electronic filing to any taxpayer, while others offered free electronic filing to taxpayers who met certain criteria.
- IRS expanded its electronic filing marketing efforts by allocating $9 million in 2000 compared to $7.8 million in 1999. As part of this expansion, IRS launched an effort to strengthen the “E-file” brand name by expending over $5 million on promotions, such as television and radio commercials; magazine ads; Internet banners; video productions; and billboards.
- IRS continued to enter into new partnerships with private sector companies to broaden the electronic services accessible through IRS’ Web site. As part of these arrangements, IRS placed hyper-links from its Web site to the partners’ Web sites, and partners offered services such as free electronic filing and free tax preparation software.
- IRS added five forms and schedules to the list of documents that can be filed electronically. The five forms and schedules included Schedule J (Farm Income Averaging) and Form 8586 (Low Income Housing Credit).
- IRS expanded the alternative signature and payment initiatives that it had begun in 1999. Further discussion of these initiatives follows.

One frequently cited barrier to the greater use of electronic filing is that it has not been a paperless process. 
In that regard, electronic filers, other than those who used TeleFile, have had to submit a paper signature document (Form 8453) along with copies of their Wage and Tax Statements (Form W-2). Also, taxpayers who filed electronically (including those who used TeleFile) and had a balance due had to mail a check and payment voucher to IRS. In 1999, IRS began various alternative signature and payment initiatives that were aimed at making electronic filing paperless and, therefore, more attractive to taxpayers and tax return preparers. IRS expanded those initiatives in 2000. Two initiatives—the Personal Identification Number (PIN) and E-File Customer Number (ECN) programs—enabled participating taxpayers to use electronic signatures and waived the need for them to submit Forms 8453 and W-2. The PIN Program allows taxpayers who file returns through a participating ERO to use a self-selected PIN instead of completing a Form 8453. IRS, in 2000, expanded the program by increasing the number of EROs selected to participate from about 8,100 in 1999 to 18,000 in 2000. As of October 4, 2000, about 5.4 million taxpayers had used this option in comparison to about 500,000 in 1999. According to IRS, starting in 2001, all EROs will be able to file electronic returns using a self-selected PIN. Also starting in 2001, both spouses will not have to be present when filing an electronic joint return through a preparer using a PIN because IRS has developed an unavailable spouse signature authorization worksheet. As noted in our report on the 1999 tax filing season, a representative of the largest national tax return preparation company had mentioned this as one of the changes he would like to see made to the PIN Program. The ECN Program offered taxpayers who used a computer to prepare their tax returns the opportunity to file on-line and use an ECN instead of completing a Form 8453. 
In 2000, IRS expanded the ECN Program by increasing the number of ECNs mailed to taxpayers from about 8 million in 1999 to about 12 million in 2000. As of October 4, 2000, about 1.4 million taxpayers had used this option in comparison to about 660,000 in 1999. IRS’ District Office of Research and Analysis surveyed taxpayers who used the ECN in 1999. Of the respondents, 54 percent said that the ECN made them more likely to file on-line and 60 percent said that the ECN would make them more likely to file electronically in future years. According to IRS, the ECN Program is being terminated. Instead, starting in 2001, on-line filers will be able to use a self-selected PIN just like EROs. An Electronic Tax Administration official told us that IRS believes that more taxpayers took advantage of the PIN and ECN programs during the 2000 filing season because (1) practitioners and software companies did a better job of marketing the programs; (2) software companies involved in the PIN Program increased from 3 to 10; (3) 25 new software packages were available, and 23 supported the ECN Program; and (4) the postcard alerting taxpayers about the ECN Program was redesigned. In 1999, for the first time, many taxpayers who electronically filed balance due returns could pay their balance due either by credit card or by direct debit from a checking or saving account. On-line filers who used certain software packages were able to indicate on-line when filing their returns that they wanted to pay any balance due by credit card. Taxpayers who used traditional electronic filing or TeleFile could charge their balance due by credit card with a toll-free telephone call to private companies that processed the credit card payments. IRS expanded the credit card payment option for the 2000 filing season by (1) promoting its use to paper filers and (2) expanding its use to the payment of estimated taxes and the payment of taxes accompanying applications for extensions to file. 
Taxpayers filing electronic balance due returns could also pay their balance due by direct debit to a checking or saving account through an automated clearinghouse. IRS expanded the direct debit option for the 2000 filing season by making it available to all electronic filers. This option previously had not been available to TeleFile users. The direct debit is only paperless for on-line filers who participated in the ECN Program. Those filers used the ECN as their signature and were to indicate via an on-line prompt that they wanted to use the direct debit option. Other on-line filers and other electronic filers who chose the direct debit option had to submit a Form 8453, which contains a disclosure statement that requires the taxpayer’s signature authorizing the direct debit. According to IRS data, as of September 30, 2000: The number of credit card payments had increased to about 218,000 compared to about 53,000 in 1999. At least 63,000 of the 218,000 payments were associated with individual income tax returns that were filed electronically. The number of direct debit payments had increased to about 237,000 compared to about 76,000 in 1999. Of the 237,000, about 36,000 were TeleFile users. IRS informed us that virtually no problems were encountered in processing credit card and direct debit payments in 2000. Although we saw no data with which to determine a direct cause/effect relationship, it appears that the availability of electronic payment options led to the electronic filing of more balance due income tax returns in 2000. In that regard, the Electronic Tax Administration Advisory Committee reported that about 2.3 million balance due returns were filed electronically in 2000—about 51 percent more than in 1999. IRS made several changes for the 2000 filing season in an attempt to reduce the number of taxpayer errors and enhance its processing efforts. 
Of particular note, IRS simplified the Child Tax Credit worksheet, revised the criteria for filing Schedule D (Capital Gains and Losses), began using dual mailing addresses for taxpayers to use in sending their returns to IRS, and began verifying secondary SSNs. As we reported last year, the Child Tax Credit caused processing problems for IRS during the 1999 filing season because many taxpayers did not claim the credit even though they checked a box on the return indicating that one or more of their dependents was eligible for the credit. IRS data indicate that taxpayers and tax return preparers had fewer problems with the Child Tax Credit in 2000. As of June 2, 2000, according to IRS, the number of Child Tax Credit errors by taxpayers and preparers was 37 percent lower than at the same time in 1999. This decrease is even more significant considering that, according to IRS’ Statistics of Income Division, more taxpayers claimed the Child Tax Credit in 2000 than in 1999. The fewer errors in 2000 can be attributed, at least in part, to changes IRS made to the Child Tax Credit worksheet, which, in our opinion, reduced the chance for error. Before the revision, taxpayers were required to complete an 11-line worksheet that incorporated the criteria for eligibility along with the calculations for the credit. IRS simplified the worksheet by presenting the criteria in the form of questions to which the taxpayer was to answer “yes” or “no.” If taxpayers plainly met the criteria, they were directed to complete a simple five-line worksheet to calculate the credit. Taxpayers who did not plainly meet the criteria were directed to use a separate publication to determine if they were entitled to any part of the credit. Our review of the revision and the conditions that require use of the separate publication indicated that most taxpayers would have been able to use the simple five-line worksheet and avoid the publication. 
As we discussed in our report on the 1998 filing season, IRS’ implementation of a legislative change relating to capital gains led to additional burdens for IRS and taxpayers. In 1998, if Schedule D, which is used to report capital gains, was missing from a return, IRS would stop processing the return and write the taxpayer asking for a Schedule D. For the 1999 filing season, IRS changed that procedure to correspond with the taxpayer only if the capital gain reported on the return was over a certain amount. For the 2000 filing season, IRS raised that amount and dropped the requirement for a Schedule D if the only capital gain was from a mutual fund distribution. Consistent with those changes, IRS data indicate that (1) about 17 percent of the individual income tax returns that were received as of May 5, 2000, included a Schedule D compared to about 21 percent at the same point in time in 1999 and (2) as of June 2, 2000, the number of error notices related to problems with Schedule D had decreased 7.7 percent compared to 1999. When tax returns come into a service center, it is important that IRS be able to quickly distinguish those that include remittances from those that do not. IRS gives priority processing attention to returns with remittances so that the money can be quickly deposited to the U.S. Treasury. The mail-sorting equipment IRS uses to identify and segregate mail containing remittances relies on a magnetic ink detection system to determine if there is a check in the envelope. However, because laser computer printers use magnetic ink, the equipment often misreads that print as indicating the presence of a check. According to a report prepared by IRS’ Statistics of Income Division, this problem caused many returns to be misidentified as containing remittances. This misidentification causes an excessive amount of nonremittance work to receive priority processing attention, which, according to IRS, ultimately delays monetary deposits to the Treasury. 
For the 2000 filing season, in an effort to better identify returns with remittances, IRS tested the use of dual mailing addresses for taxpayers to use in sending their Form 1040 return to IRS (persons using Forms 1040A and 1040EZ still used only one address). IRS’ test involved the use of one address for returns claiming a refund and another address for returns not claiming a refund. An IRS sample of returns at 2 of its 10 service centers showed that taxpayers used the correct address (thus correctly identifying their return as a refund or no refund return) 82 percent of the time. Another IRS effort to enhance its processing efforts for the 2000 filing season involved the systematic verification of secondary SSNs. As part of that effort, IRS sent notices to taxpayers whose secondary SSNs were invalid and who met other criteria. Before the 2000 filing season, IRS had focused its SSN verification efforts on primary SSNs and the SSNs of dependents and Earned Income Credit (EIC)-qualifying children. As of June 2, 2000, IRS had issued about 36,000 notices related to invalid secondary SSNs. By contrast, about 1.6 million notices were generated relating to invalid dependent SSNs, and about 152,000 notices were generated relating to invalid primary SSNs. For the 2000 filing season, IRS implemented the EIC Preparer Outreach Program—an integrated education and enforcement effort whose goals are to (1) educate EIC preparers, (2) reduce EIC errors, and (3) lower EIC overclaims. IRS divided preparers into five groups, with each group getting a different type of visit from IRS, ranging from education to criminal investigation. The type of visit that each preparer received was based on the preparer’s filing history. The first group consisted of about 9,000 preparers. Visits to those preparers were to have an education and outreach focus. During these visits, IRS employees were to, among other things, give preparers the EIC Practitioner’s Kit. 
The second group consisted of about 880 preparers who were to receive a limited due diligence review. In conducting these reviews, revenue agents were to look at 10 returns and associated documents done by each preparer to determine if the preparer complied with the due diligence requirements specified in section 6695(g) of the Internal Revenue Code. The revenue agents could recommend a $100 penalty for each failure to comply with the due diligence requirements. The third group consisted of about 325 preparers who were to receive a more comprehensive due diligence review. Revenue agents were to review up to 100 returns and associated documents in increments of 25 returns. According to IRS guidelines, an agent’s decision regarding whether to review each succeeding increment was to be based on the results of the agent’s review of the previous increment. Once again, the agents could recommend a $100 penalty for each failure to comply with the due diligence requirements. The fourth group consisted of 118 preparers who were treated as program action cases. A program action case consists of an examination of returns done by the preparer when information indicates a pattern of noncompliance with preparer provisions of the Internal Revenue Code. According to IRS, a program action case can result in a variety of penalties being asserted against both the preparers and their clients. The fifth group consisted of 75 preparers who were to be criminally investigated by IRS’ Criminal Investigation Division. IRS completed 7,152 education and outreach visits, 751 limited due diligence visits, and 264 comprehensive due diligence visits. The number of completed visits was less than the number planned for several reasons. For example, some preparers in the first group opted out of their education and outreach visits, and other preparers had either gone out of business or could not be located. 
As of June 27, 2000, IRS had started 118 program action cases and had not begun any criminal investigations. In late January and early February 2000, The Gallup Organization conducted a telephone survey of 401 preparers who had received education and outreach visits. The survey showed that 83 percent of the respondents were quite satisfied with the visits. According to the survey, preparers most liked (1) the thorough explanations of the EIC requirements, (2) the friendly and courteous IRS representatives, and (3) the ability to discuss concerns with IRS representatives. In addition to those named above, Jyoti Gupta, Doris Hynes, Ron Jones, John Lesser, Joanna Stamatiades, and Bradley Terry made key contributions to this report.
In conducting nonresponse follow-up, the bureau has historically faced the twin challenge of (1) collecting quality data (by obtaining complete and accurate information directly from household members) while (2) finishing the operation on schedule, before error rates can increase as people move or have trouble recalling who was living at their homes on Census Day (April 1), as well as keeping subsequent operations on-track. Nonresponse follow-up was scheduled to begin on April 27, 2000, and end 10 weeks later, on July 7, 2000. Local census offices generally finished their nonresponse follow-up workloads ahead of the bureau’s 10-week schedule. As shown in figure 1, of the bureau’s 511 local offices in the 50 states, 463 (91 percent) finished nonresponse follow-up by the end of the eighth week of the operation, consistent with the bureau’s internal stretch goals. Moreover, nine local offices completed their workloads in as little as 5 weeks or less. The timely completion of nonresponse follow-up in 2000 stands in sharp contrast to the bureau’s experience during the 1990 Census. As shown in figure 2, at the end of the 6-week scheduled time frame for nonresponse follow-up during the 1990 Census, the bureau had not completed the operation. In fact, as of two days prior to the scheduled end date, just two local census offices had completed the operation and the bureau had only completed about 72 percent of its 34 million household follow-up workload. It took the bureau a total of 14 weeks to complete the entire operation. By comparison, as noted above, the bureau completed nonresponse follow-up in less than 10 weeks during the 2000 Census. Figure 2 also highlights the drop-off in production that occurs during the later weeks of nonresponse follow-up. 
According to the bureau, the decline occurs because unresolved cases at the end of nonresponse follow-up are typically the most difficult to reach, either because they are uncooperative or are rarely at home and are unknown to neighbors. To meet our objectives, we used a combination of approaches and methods to examine the conduct of nonresponse follow-up. These included statistical analyses; interviews with key bureau headquarters officials, regional census center officials, and local census office managers and staff; observations of local census offices’ nonresponse follow-up operations; and reviews of relevant documentation. To examine the factors that contributed to the timely completion of nonresponse follow-up, we interviewed local census office managers and other supervisory staff at 60 local census offices we visited across the country. These offices generally faced specific enumeration challenges when nonresponse follow-up began in late April, and were thus prone to operational problems that could affect data quality (see app. I for a complete list of the offices we visited). Specifically, these offices had (1) a larger nonresponse follow-up workload than initially planned; (2) multiple areas that were relatively hard-to-enumerate, such as non-English-speaking groups; and (3) difficulties meeting their enumerator recruiting goals. During these visits, which took place in June and July 2000, we also observed office operations to see how office staff were processing questionnaires; at 12 of these offices we attended enumerator training; and at 31 offices we reviewed key reinterview documents in a given week during nonresponse follow-up. The local census offices we visited represent a mix of urban, suburban, and rural locations. However, because they were judgmentally selected, our findings from these visits cannot be projected to the universe of local census offices. 
To obtain a broader perspective of the conduct of nonresponse follow-up, we used the results of our survey of a stratified random sample of managers at 250 local census offices. The survey—which asked these managers about the implementation of a number of key field operations—is generalizable to the 511 local census offices located in the 50 states. We obtained responses from managers at 236 local census offices (about a 94 percent overall response rate). All reported percentages are estimates based on the sample and are subject to some sampling error as well as nonsampling error. In general, percentage estimates in this report for the entire sample have confidence intervals ranging from about ± 4 to ± 5 percentage points at the 95 percent confidence level. In other words, if all managers in our local census office population had been surveyed, the chances are 95 out of 100 that the result obtained would not differ from our sample estimate in the more extreme cases by more than ± 5 percent. To examine whether the pace of nonresponse follow-up was associated with the collection of less complete data, in addition to the efforts described above, we analyzed bureau data on the weekly progress of nonresponse follow-up. Specific measures we analyzed included the time it took local census offices to finish nonresponse follow-up and the proportion of their cases completed by (1) “close-out” interviews, where questionnaires only contain basic information on the status of the housing unit (e.g., whether it was occupied), or (2) “partial” interviews, which contain more information than a close-out interview but are still less than complete. The completeness of the data collected by enumerators is one measure of the quality of nonresponse follow-up, and these two measures were the best indicators of completeness available from the database. 
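As an illustration of where a ± 4 to ± 5 percentage point bound of the kind quoted above comes from, the worst-case 95 percent margin of error for 236 respondents out of 511 offices can be approximated as follows. This sketch treats the survey as a simple random sample with a finite population correction; the actual stratified design would compute stratum-level variances, so the function name and approach here are illustrative, not the survey's documented method:

```python
import math

def margin_of_error(p, n, N, z=1.96):
    """Approximate 95 percent margin of error for a proportion p
    estimated from a simple random sample of n units out of a
    finite population of N (an approximation only; a stratified
    design would be computed stratum by stratum)."""
    se = math.sqrt(p * (1 - p) / n)      # standard error of the proportion
    fpc = math.sqrt((N - n) / (N - 1))   # finite population correction
    return z * se * fpc

# Worst case (p = 0.5) for 236 responding offices out of 511:
moe = margin_of_error(0.5, 236, 511)
print(f"± {100 * moe:.1f} percentage points")  # ± 4.7 percentage points
```

Note that without the finite population correction the same inputs give roughly ± 6.4 points; sampling nearly half the population is what pulls the bound down into the quoted range.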
We included data from the 511 offices located in the 50 states and controlled for enumeration difficulty using an index measure developed by the bureau. We did not include any outliers that the bureau identified as erroneous (for example, outliers resulting from coding errors). We did our audit work at the local census offices identified in appendix I and their respective regional census centers; bureau headquarters in Suitland, Maryland; and Washington, DC, from March 2000 through September 2001. Our work was done in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from the Secretary of Commerce. On January 10, 2002, the Secretary forwarded the bureau’s written comments on the draft (see app. II), which we address at the end of this report. Key to the bureau’s timely completion of nonresponse follow-up in 2000 was a higher than expected initial mail response rate that decreased the bureau’s follow-up workload. In addition to reducing the staff, time, and money required to complete the census count, the bureau’s past experience and evaluations suggest that the quality of data obtained from questionnaires returned by mail is better than the data collected by enumerators. To help raise the mail response rate, the bureau (1) hired a consortium of private-sector advertising agencies, led by Young & Rubicam, to develop a national, multimedia paid advertising program, and (2) partnered with local governments, community groups, businesses, nongovernmental organizations, and other entities to promote the census on a grassroots basis (we discuss the bureau’s partnership program in more detail in our August 2001 report). The outreach and promotion campaign encouraged people to complete their census questionnaires by conveying the message that census participation helped their communities. 
The bureau also helped boost the mail response rate by using simplified questionnaires, which was consistent with our past suggestions, and by developing more ways to respond to the census, such as using the Internet. The bureau achieved an initial mail response rate of about 64 percent, which was about 3 percentage points higher than the 61 percent response rate the bureau expected when planning for nonresponse follow-up. This, in turn, resulted in a nonresponse follow-up workload of about 42 million housing units, which was about 4 million fewer housing units than the bureau would have faced under its planning assumption of a 61 percent mail response rate. In addition to surpassing its national response rate goals, the bureau exceeded its own expectations at the local level. Of the 511 local census offices, 378 (74 percent) met or exceeded the bureau’s expected response rate. In so doing, these offices reduced their nonresponse follow-up workloads from the expected levels by between 54 and 58,329 housing units. The remaining 133 offices (26 percent) did not meet their expected response rate and the workload at these offices increased from their expected levels by between 279 and 33,402 housing units. The bureau’s success in surpassing its response rate goals was noteworthy given the formidable societal challenges it faced. These challenges included attitudinal factors such as concerns over privacy, and demographic trends such as more complex living arrangements. However, as the bureau plans for the next census in 2010, it faces the difficulty of boosting public participation while keeping costs manageable. As we noted in our December 2001 report, although the bureau achieved similar response rates in 1990 and 2000 (65 percent in 1990 and 64 percent in 2000), the bureau spent far more money on outreach and promotion in 2000: about $3.19 per household in 2000 compared to $0.88 in 1990 (in constant fiscal year 2000 dollars), an increase of 260 percent. 
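The roughly 260 percent increase cited above follows directly from the two per-household amounts; a quick arithmetic check:

```python
# Per-household outreach and promotion spending, constant FY2000 dollars:
spend_1990, spend_2000 = 0.88, 3.19

# Percentage increase = (new - old) / old.
increase = (spend_2000 - spend_1990) / spend_1990
print(f"{100 * increase:.1f} percent")  # about 262.5 percent, rounded to 260 in the report
```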
Moreover, the societal challenges the bureau encountered in 1990 and 2000 will probably be more complex in 2010, and simply staying on par with the 2000 response rate will likely require an even greater investment of bureau resources. Further, while the mail response rate provides a direct indication of the nonresponse workload, it is an imperfect measure of public cooperation with the census as it is calculated as a percentage of all forms in the mail-back universe from which the bureau received a questionnaire. Because the mail-back universe includes housing units that the bureau determines are nonexistent or vacant during nonresponse follow-up, a more precise measure of public cooperation is the mail return rate, which excludes vacant and nonexistent housing units. According to preliminary bureau data, the mail return rate for the 2000 Census was 72 percent, a decline of 2 percentage points from the 74 percent mail return rate the bureau achieved in 1990. As shown in figure 3, in 2000, the bureau reduced, but did not reverse, the steady decline in public cooperation that has occurred with each decennial census since the bureau first initiated a national mail-out/mail-back approach in 1970. Bureau officials said they would further examine the reasons for the decline in the return rate as part of the bureau’s Census 2000 evaluations. In addition, as shown in figure 4, the results to date show that just three states increased their mail return rates compared to the 1990 Census. Overall, preliminary bureau data shows the change in mail return rates from 1990 through 2000 ranged from an increase of about 1 percentage point in Massachusetts and California to a decline of about 9 percentage points in Kentucky. The bureau’s outreach and promotion efforts will also face the historical hurdle of bridging the gap that exists between the public’s awareness of the census on the one hand, and its motivation to respond on the other. 
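The response-rate versus return-rate distinction described above can be sketched with hypothetical counts (illustrative only, not bureau figures):

```python
# Hypothetical counts (illustrative only, not bureau figures):
mail_back_universe = 100_000_000    # all forms in the mail-back universe
returned = 64_000_000               # questionnaires mailed back
vacant_or_nonexistent = 11_000_000  # units later found vacant or nonexistent

# The response rate counts returns against the entire mail-back universe.
response_rate = returned / mail_back_universe
# The return rate excludes vacant and nonexistent units, so it is the
# sharper measure of public cooperation.
return_rate = returned / (mail_back_universe - vacant_or_nonexistent)

print(f"response {response_rate:.0%}, return {return_rate:.0%}")
```

Because the return rate's denominator is strictly smaller, the return rate always equals or exceeds the response rate for the same set of returned questionnaires.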
Various polls conducted for the 2000 Census suggested that the public’s awareness of the census was over 90 percent; and yet, as noted earlier, the actual return rate was much lower—72 percent of the nation’s households. The bureau faced a similar issue in 1990 when 93 percent of the public reported being aware of the census, but the return rate was 74 percent. In our previous work, we noted that closing this gap would be a significant challenge for the bureau, and as the bureau plans for the 2010 Census, it will be important for it to explore approaches that more effectively convert the public’s awareness of the census into a willingness to respond. A second factor that was instrumental to the operational success of nonresponse follow-up was an ample and sufficiently skilled enumerator workforce. Based on anticipated turnover and the expected workload to carry out its four largest field data collection operations—of which nonresponse follow-up was the largest—the bureau set a recruitment goal of 2.4 million qualified applicants. In addition to the sheer volume of recruits needed, the bureau's efforts were complicated by the fact that it was competing for employees in a historically tight national labor market. Nevertheless, when nonresponse follow-up began on April 27, the bureau had recruited over 2.5 million qualified applicants. The bureau surmounted its human capital challenge with an aggressive recruitment strategy that helped make the bureau a more attractive employer to prospective candidates and ensured a steady stream of applicants. Key ingredients of the bureau’s recruitment efforts included the following: 1. A geographic pay scale with wages set at 65 to 75 percent of local prevailing wages (from about $8.25 to $18.50 per hour for enumerators). The bureau also used its flexibility to raise pay rates for those census offices that were encountering recruitment difficulties. 
For example, a manager at one of the Charlotte region’s local census offices told us that the office was having difficulty obtaining needed staff in part because census wages were uncompetitive. According to this manager, the region approved a pay increase for the office’s enumerators and office clerks, which helped the office obtain staff. In all, when nonresponse follow-up began, the bureau raised pay rates for field staff at eight local offices to address those offices’ recruiting challenges. 2. Partnerships with state, local, and tribal governments, community groups, and other organizations to help recruit employees and provide free facilities to test applicants. For example, Clergy United, an organization representing churches in the Detroit metropolitan area, provided space for testing census job applicants in December 1998. The organization even conducted pre-tests several days before each bureau-administered test so those applicants could familiarize themselves with the testing format. 3. A recruitment advertising campaign, which totaled over $2.3 million, that variously emphasized the ability to earn good pay, work flexible hours, learn new skills, and do something important for one’s community. Moreover, the advertisements were in a variety of languages to attract different ethnic groups, and were also targeted to different races, senior citizens, retirees, and people seeking part-time employment. The bureau advertised using traditional outlets such as newspaper classified sections, as well as more novel media including Internet banners and messages on utility and credit card bills. 4. Obtaining exemptions from the majority of state governments so that individuals receiving Temporary Assistance for Needy Families, Medicaid, and selected other types of public assistance would not have their benefits reduced when earning census income, thus making census jobs more attractive. 
At the start of nonresponse follow-up, 44 states and the Virgin Islands had granted an exemption for one or more of these programs. 5. Encouraging local offices to continue their recruiting efforts throughout nonresponse follow-up, regardless of whether offices had met their recruiting goals, to ensure a steady stream of available applicants. The bureau matched these initiatives with an ongoing monitoring effort that enabled bureau officials to rapidly respond to recruiting difficulties. For example, during the last 2 weeks of April, the bureau mailed over 5 million recruiting postcards to Boston, Charlotte, and other locations where it found recruitment efforts were lagging. Based on the results of our local census office visits, it is clear that the bureau’s human capital strategy had positive outcomes. Of the 60 local census offices we visited, officials at 59 offices provided useable responses to our question about whether their offices had the type of staff they needed to conduct nonresponse follow-up, including staff with particular language skills to enumerate in targeted areas. Officials at 54 of the 59 offices said they had the type of staff they needed to conduct nonresponse follow-up. For example, officials in the Boston North office said they hired enumerators who spoke Japanese, Vietnamese, Portuguese, Spanish, French, Russian, and Chinese, while Pittsburgh office officials said they had enumerators that knew sign language to communicate with deaf residents. Managers at local census offices we surveyed provided additional perspective on recruiting needed field staff. As shown in figure 5, 30 percent of the respondents believed that the bureau’s ability to recruit and hire high-quality field staff needed no improvements. 
While managers at 52 percent of the local offices commented that some improvement to the recruiting and hiring process was needed and another 17 percent commented that a significant amount of improvement was needed, their suggestions varied. Managers’ suggestions generally related to various hiring practices, such as a greater use of face-to-face interviews to select managers at local census offices and earlier recruitment advertising.

Once nonresponse follow-up began, bureau officials tracked production rates as the primary measure of whether local offices had met their staffing goals. For example, bureau officials said that both bureau headquarters and regional census center staff monitored local census offices’ production daily. If an office was not meeting its production goals, bureau headquarters officials said they worked with regional census personnel, who in turn worked with the local census office manager, to determine the reasons for the shortfall and the actions necessary to increase production. Possible actions included bringing in enumerators from neighboring local census offices.

Overall, preliminary bureau data show that about 500,000 enumerators worked on nonresponse follow-up. Nationally, the bureau established a hiring goal of 292,000 enumerator positions for nonresponse follow-up, which represented two people working approximately 25 hours per week for each position and assumed 100 percent turnover, according to bureau officials. The bureau has not yet analyzed how many enumerators charged at least 25 hours per week during nonresponse follow-up. Moreover, according to a senior bureau official, the bureau has not decided whether it will do such an analysis for 2010 planning purposes. According to this official, because the bureau hired about 500,000 enumerators and completed the operation a week ahead of schedule, the bureau believes it generally met its hiring goal.
A third factor that contributed to the timely completion of nonresponse follow-up was preparing in advance for probable enumeration challenges. To do this, the bureau called on local census offices and their respective regional census centers to develop action plans that, among other things, identified hard-to-enumerate areas within their jurisdictions, such as immigrant neighborhoods, and proposed strategies for dealing with those challenges. These strategies included such methods as paired/team enumeration for high-crime areas and hiring bilingual enumerators. While this early planning effort helped local census offices react to a variety of enumeration challenges, the currency and accuracy of the nonresponse follow-up address lists and maps remained problematic for a number of local census offices. Of the 60 local census offices we visited, officials at 55 offices provided useable responses to our question about how, if at all, their offices used their action plan for hard-to-enumerate areas during nonresponse follow-up. Officials at 51 of 55 offices said their offices used the strategies in their action plan to address the enumeration challenges they faced. At the offices we visited, a frequently cited enumeration challenge was gaining access to gated communities or secure apartment buildings. Officials at 42 of the 60 offices we visited identified this as a problem. To address it, officials said they developed partnerships with building management and community leaders, among other strategies. In an Atlanta office, for example, local officials said they sent letters to managers of gated communities that stressed the importance of the census. Similarly, officials in a Chicago office said they personally phoned managers of secure apartment buildings.
When enumerators from a Milwaukee local census office encountered problems accessing locked apartment buildings, local census officials told us that the City of Milwaukee sent aldermen to visit the building managers and encourage them to participate in the census. Another common enumeration challenge appeared to be obtaining cooperation from residents—cited as a difficulty by officials at 34 of the 60 offices we visited. One problem they noted was obtaining responses to the long-form questionnaire—either in its entirety or to specific items, such as income-related questions—which, according to local census officials, some residents found to be intrusive. Enumerators also encountered residents who were unwilling to participate in the census because of language and cultural differences or their fears of government. The bureau’s standardized training for enumerators included procedures for handling refusals. Local census officials encouraged public participation with a variety of approaches as well. For example, census officials in Cleveland and Cincinnati said they provided additional training for enumerators on how to handle refusals and practiced what was taught in mock interviews. Officials in other census offices said they partnered with local community leaders who subsequently helped reach out to hard-to-enumerate groups, hired people who were bilingual or otherwise trusted and known by residents, and held media campaigns. Overall, according to bureau data, close to 470,000 households of the approximately 42 million making up the nonresponse follow-up workload (about 1 percent) refused to participate in the census.

Of the 60 local census offices we visited, officials at 52 offices provided useable responses to our question about whether their offices’ nonresponse follow-up address list reflected the most accurate and current information. Officials at 21 of the 52 offices said that their lists generally were not accurate and current.
Nationwide, as shown in figure 6, based on our survey of local census office managers, we estimate that managers at approximately 50 percent of local census offices believed that some improvement was needed in the accuracy of address lists for nonresponse follow-up. We estimate that managers at about 22 percent of local census offices believed that a significant amount of improvement was needed. Among the more frequent problems managers cited were duplicate addresses and changes not being made from prior operations. For example, at a local census office in the Seattle region, managers said that some addresses were for residences or businesses that had been gone for 10-15 years and should have been deleted in previous census operations but were not. Local census officials we visited cited problems with the accuracy of the census maps as well. Of the 60 local census offices we visited, officials at 58 offices provided useable responses to our question about whether the most accurate and current information was reflected on the nonresponse follow-up maps. Officials at about a third of local census offices—21 of 58 offices—said the nonresponse follow-up maps did not reflect the most accurate and current information. Further, as shown in figure 7, based on our survey of local census office managers, at about 41 percent of the offices, managers believed that some improvement was needed in maps for nonresponse follow-up. At about 23 percent of the offices, managers believed that a significant amount of improvement was needed in these maps. Managers who commented that improvements were needed to the nonresponse follow-up maps said the maps were difficult to use, not updated from prior operations, and contained errors. For example, an official at a local census office in the Atlanta region said that some roads shown on the census maps did not exist or were not oriented correctly.
To address this difficulty, local office staff purchased commercial maps or used the Internet to help them locate some housing units. The bureau developed its master address list and maps using a series of operations that made incremental updates designed to continuously improve the completeness and accuracy of the master address file and maps. A number of these updates occurred during nonresponse follow-up when enumerators encountered, for example, nonexistent or duplicate housing units, or units that needed to be added to the address list. As a result, the bureau was expecting some discrepancies between the nonresponse follow-up address list and what enumerators found in the field when they went door-to-door, which could account for some of the local census officials’ perceptions. Another factor that affected the currency of the nonresponse follow-up address list was the cut-off date for mail-back responses. The bureau set April 11, 2000, as the deadline for mail-back responses for purposes of generating the address list for nonresponse follow-up. In a subsequent late mail return operation, the bureau updated its field follow-up workload by removing those households for which questionnaires were received from April 11 through April 18. However, according to bureau officials, the bureau continued to receive questionnaires, in part because of an unexpected boost from its outreach and promotion campaign. For example, by April 30—less than 2 weeks after the bureau removed the late mail returns that it had checked in as of April 18—the bureau received 773,784 additional questionnaires. Bureau headquarters officials told us it was infeasible to remove the late returns from the nonresponse follow-up address lists and thus, enumerators needed to visit these households.
The cost of these visits approached $22 million, based on our earlier estimate that a 1-percentage-point increase in workload could add at least $34 million in direct salary, benefits, and travel costs to the price tag of nonresponse follow-up. In addition, the bureau’s data processing centers then had to reconcile the duplicate questionnaires. According to officials at some local offices we visited, the visits to households that had already responded confused residents, who questioned why enumerators came to collect information from them after they had mailed back their census forms.

To help ensure that local census offices completed nonresponse follow-up on schedule, the bureau developed ambitious interim stretch goals. These goals called on local census offices to finish 80 percent of their nonresponse follow-up workload within the first 4 weeks of the operation and be completely finished by the end of the eighth week. Under the bureau’s master schedule, local census offices had 10 weeks to complete the operation. Our survey of local census office managers asked what impact, if any, scheduling pressures to complete nonresponse follow-up had on the quality of the operation. On the one hand, as shown in figure 8, about 41 percent of the local census office managers believed that scheduling pressures had little or no impact on the quality of the operation, while about 17 percent believed that such pressure had a positive or significantly positive impact. At a local census office in the New York region, for example, the local census office manager stated that “pressuring people a little gave them the motivation to produce.” Managers in local census offices located in the Dallas region commented that the schedule “kept people on their toes and caused them to put forth their best effort” and that it “had a positive impact, particularly on quality.” On the other hand, managers at a substantial number of local census offices had the opposite view.
As shown in figure 8, about 40 percent of the respondents believed that scheduling pressure during nonresponse follow-up had a negative or significantly negative impact on the quality of the operation. Of those managers who believed that the pressure to complete nonresponse follow-up adversely affected the quality of the operation, a common perception appeared to be that production was emphasized more than accuracy and that the schedule required local census offices to curtail procedures that could have improved data quality. For example, managers at some local census offices told us that the bureau’s regional census centers encouraged competition between local census offices by, among other actions, ranking local census offices by their progress and distributing the results to local managers. Managers at some local census offices believed that such competition fostered a culture where quantity was more important than quality. As one manager told us, the bureau’s ambitious nonresponse follow-up schedule led the manager “to put enormous pressure on people in the field to complete the operation quickly, and this affected the quality of data.” However, none of the managers we surveyed cited specific examples of where corners were cut or quality was compromised.

One measure of the quality of nonresponse follow-up is the completeness of the data collected by enumerators. The bureau went to great lengths to obtain complete data directly from household members. Bureau procedures generally called for enumerators to make up to three personal visits and three telephone calls to each household on different days of the week at different times until they obtained needed information on that household.
However, in cases where household members could not be contacted or refused to answer all or part of the census questionnaire, enumerators were permitted to obtain data via proxy (a neighbor, building manager, or other nonhousehold member presumed to know about the household’s residents) or to collect less complete data than called for by the census questionnaire. Such data include (1) “closeout” interviews, in which questionnaires contain only information on the status of the housing unit (e.g., whether or not it was occupied) and the number of residents, and (2) “partial” interviews, which contain more information than a closeout interview but less than a completed questionnaire. There were several well-publicized breakdowns in these enumeration procedures at a small number of local census offices that took short cuts to complete their work (which the bureau later took steps to rectify). Nationally, however, our analysis of bureau data found no statistically significant association between the week individual local census offices finished their nonresponse follow-up workload and the percentage of partial or closeout interviews they reported, after controlling for the enumeration difficulty level of each local office’s area (at the time of our review, the bureau did not have information on data collected via proxy interviews). Neither did we find a statistically significant relationship between the week that local census offices finished their nonresponse follow-up workload and the amount of residual workload they had, if any. The residual workload consisted of households that were part of the original follow-up workload but for which the bureau had not received a completed questionnaire from the local census offices, and which thus had not been processed through data capture. According to bureau data, 519 local offices had to conduct residual nonresponse follow-up on 121,792 households.
Similarly, we did not find an association between week-to-week “spikes” in local census offices’ production and the percentage of either partial or closeout interview data reported. Spikes or surges in production could indicate that local census offices were cutting corners to complete their workloads by a specific deadline. Nationally, we found no relationship between the number of questionnaires finished each week and either the percentage of those finished that were closeout interviews or partial interviews. Overall, as shown in figure 9, as nonresponse follow-up progressed, the proportion of closeout and partial interview data collected relative to the number of questionnaires finished remained relatively constant. Moreover, only a small percentage of most local census offices’ nonresponse follow-up workload was finished using closeout and partial interviews. As shown in figure 10, of the 499 local offices where reliable closeout data were available, 413 (83 percent) reported that less than 2 percent of their questionnaires were finished in this manner, while 19 offices (4 percent) reported 5 percent or more of their finished nonresponse follow-up work as closeout interviews. For partial interviews, of the 508 offices where reliable data were available, 267 (53 percent) reported collecting less than 2 percent of such data, while 47 offices (9 percent) reported 5 percent or more of their finished work as partial interviews. The median percentages of closeout and partial interviews were 0.8 percent and 1.9 percent, respectively. At those local census offices that had substantially higher levels of closeout and partial interview data than other offices, the bureau said that some of this was understandable given the enumeration challenges that these census offices faced.
For example, according to the bureau, the relatively high partial interview rate at a New York local office (3.8 percent of that office’s finished nonresponse follow-up workload) was in line with the regional average of 2.2 percent, partly due to the difficulty that staff had in gaining access to apartment buildings. Once building managers gave enumerators access and they were able to obtain information from proxies, the number of refusals may have decreased, but the number of partial interviews increased because the proxies could not provide complete information. Still, as noted above, some local census offices inappropriately used certain enumeration techniques. For example, the Hialeah, Florida, office reported finishing its nonresponse follow-up workload in 5 weeks—well ahead of both the 8-week stretch goal and the 10 weeks allotted for the operation. The Homestead, Florida, office—where Hialeah-trained enumerators were later transferred to help complete nonresponse follow-up—reported finishing its workload in 7 weeks. The Commerce Department’s Office of the Inspector General later found that Hialeah-trained enumerators did not make the required number of visits and telephone calls before contacting a proxy for information, and did not properly implement quality control procedures designed to detect data falsification. The bureau responded to these findings by, among other actions, reworking over 64,000 questionnaires from the Hialeah and Homestead offices.

To help ensure that enumerators followed proper enumeration procedures and were not falsifying data, the bureau “reinterviewed” households under certain circumstances to check enumerators’ work. As such, reinterviews were a critical component of the bureau’s quality assurance program for nonresponse follow-up. If falsification was detected during a reinterview, the local office was to terminate the enumerator and redo all of the enumerator’s work.
Enumerators making inadvertent errors were to correct their mistakes and be retrained. The bureau conducted three types of reinterviews:

1. Random reinterviews were to be performed on a sample of enumerators’ work during the early weeks of their employment. Seven randomly selected questionnaires from each enumerator’s first 70 cases were to be reinterviewed.

2. Administrative reinterviews checked the work of enumerators whose performance in certain dimensions (e.g., the number of partial interviews conducted) differed significantly from that of other enumerators employed in the same area—and there was no justification for the difference. In such cases, enumerators could be fabricating data. According to the bureau, administrative tests were designed to identify enumerators who were making errors that were more likely to occur toward the end of the operation, after the random check of enumerators’ initial work. They were conducted at the discretion of local census officials.

3. Supplemental reinterviews were to be conducted at the discretion of local census officials when they had some basis for concern about the quality of an enumerator’s work.

On the basis of our work and that of the bureau, we found that local census office officials often used their discretion not to conduct administrative and supplemental reinterviews; thus, a number of local offices did not conduct such reinterviews. At those offices, once the random check of enumerators’ initial work was completed, there were no additional checks specifically designed to catch enumerators suspected of falsifying data. This raises questions about the reinterview program’s ability to ensure the quality of enumerators’ work over the full duration of their employment on nonresponse follow-up.
Of the 520 local census offices, 52 offices (10 percent) conducted no administrative and no supplemental reinterviews, according to bureau data. An additional 14 offices (3 percent) conducted no administrative reinterviews, and an additional 231 offices (44 percent) conducted no supplemental reinterviews. A chief in the bureau’s Quality Assurance Office expressed concern about the adequacy of quality assurance coverage toward the end of nonresponse follow-up for offices that did not conduct administrative and supplemental reinterviews. According to this official, this meant that once random reinterviews were completed at those offices, there were no additional checks specifically designed to detect fabricated data. Although enumerators’ immediate supervisors were to check enumerators’ work daily, these reviews were generally designed to identify enumerators who were completing questionnaires incorrectly (e.g., not following the proper question sequence and writing illegibly), whereas administrative and supplemental reinterviews were aimed at identifying enumerators who were intentionally falsifying data. Bureau officials said that at those local census offices that did not conduct any administrative reinterviews, local census office managers could conduct supplemental reinterviews if warranted. However, managers employed this option infrequently. Of the 66 local offices that did not conduct any administrative reinterviews, just 14 conducted supplemental reinterviews. Reasons that local census managers could use—as specified by the bureau—for not conducting an administrative reinterview included (1) the enumerator no longer worked in the area for which the administrative test was conducted; (2) the enumerator’s work was characteristic of the area (e.g., the enumerator reported a large number of vacant housing units and the area had a large number of seasonal housing units); or (3) another reason, with an accompanying explanation.
Managers were to document their decision on the bureau’s administrative reinterview trouble reports listing the suspect enumerators. Our analysis of a week’s worth of administrative reinterview trouble reports at 31 local census offices found that while a number of enumerators were flagged for administrative reinterviews, local census office officials typically decided against conducting them. Specifically, of the 3,784 enumerators identified for possible reinterview, local officials subjected the work of 154 enumerators (4 percent) to reinterviews and passed on 3,392 enumerators (90 percent). For 306 of the 3,784 enumerators (8 percent) listed on the administrative trouble reports we reviewed, there was no indication of a final decision on whether or not to subject the future work of these enumerators to administrative reinterview. Overall, local census offices conducted far fewer administrative reinterviews than the bureau had anticipated. Local census offices conducted 276,832 administrative reinterviews—146,993 (35 percent) fewer than the 423,825 administrative reinterviews the bureau had expected based on a number of factors, including the number of cases completed per hour during the 1990 Census and the estimated workload in 2000. Whether this was due to better quality work on the part of enumerators, or local managers deciding against subjecting enumerators’ work to reinterviews, is unknown. However, because administrative reinterviews were designed to detect fabrication and other quality problems more likely to occur toward the end of nonresponse follow-up, after the random check of enumerators’ initial work, it will be important for the bureau to examine whether local census offices properly conducted administrative reinterviews and thus ensure the quality of nonresponse follow-up data throughout the duration of the operation.
Although nonresponse follow-up was fraught with extraordinary managerial and logistical challenges, the bureau generally completed nonresponse follow-up consistent with its operational plan—a remarkable accomplishment given the scope and complexity of the effort. Our review highlighted several strategies that were key to the bureau’s success, including (1) an aggressive outreach and promotion campaign and other efforts aimed at boosting the mail response rate and lowering the bureau’s nonresponse follow-up workload; (2) a flexible recruiting strategy that made the bureau a competitive employer in a tight national labor market; (3) advance planning for addressing location-specific enumeration challenges; and (4) ambitious stretch goals that encouraged local managers to accelerate the pace of the operation. It will be important for the bureau to document the lessons learned from these initiatives and use them to help inform planning efforts for the next decennial census in 2010.

It will also be important for the bureau to address the continuing significant challenges that were revealed by the conduct of nonresponse follow-up in 2000, including

achieving an acceptable response rate (and thus lowering the bureau’s follow-up workload) while controlling costs;

reversing the downward trend in public participation in the census, in part by converting the relatively large number of people who are aware of the census into census respondents;

keeping the address list and maps used for nonresponse follow-up accurate and up-to-date;

finding the right mix of incentives to motivate local census offices to complete nonresponse follow-up on schedule without compromising data quality; and

ensuring that reinterview procedures provide sufficient quality assurance coverage through the full duration of enumerators’ employment on nonresponse follow-up.
As the bureau plans for the next national head count in 2010, we recommend that the Secretary of Commerce ensure that the bureau take the following actions to help ensure that nonresponse follow-up is conducted as cost effectively as possible:

Identify and refine lessons learned from the 2000 nonresponse follow-up operation and apply them to the bureau’s plans for the 2010 Census.

Assess, to the extent practicable, why people who were aware of the census did not return their census questionnaires and develop appropriate marketing countermeasures to bridge the gap between their awareness of the census on the one hand, and their motivation to respond on the other.

Develop and test procedural and technological options that have the potential to generate a more accurate and up-to-date address list and set of maps for nonresponse follow-up. As part of this effort, the bureau should explore how to refresh the nonresponse follow-up address list more frequently, even as nonresponse follow-up is underway, so that enumerators would not have to make costly visits to late-responding households. The bureau also needs to examine the methods it uses in activities that precede nonresponse follow-up to develop and update the nonresponse address list and associated maps. Specifically, the bureau should determine the extent to which updates that should have been made were properly reflected in the nonresponse follow-up list and maps, and take appropriate corrective actions to address any problems it identifies.

Ensure that the bureau’s procedures and incentives for the timely completion of nonresponse follow-up emphasize the collection of quality data and proper enumeration techniques as much as speed.
Examine the bureau’s reinterview procedures—particularly as they relate to the discretion given to local census officials—to help ensure that the procedures are sufficient for consistently and reliably detecting potential problems throughout the duration of enumerators’ employment on nonresponse follow-up.

Agency Comments and Our Evaluation

The Secretary of Commerce forwarded written comments from the Bureau of the Census on a draft of this report. The bureau concurred with all five of our recommendations and had no specific comments on them. The bureau also clarified several key points and provided additional information and perspective, which we incorporated in our report as appropriate. The bureau noted that, in addition to the locked apartment buildings that we cited in the Results in Brief section of our report, gated communities were also an enumeration challenge. While the body of the report already contained this information, we added it to the Results in Brief section as well. Our draft report stated: “One reason for the errors in the nonresponse follow-up address lists was that the bureau found it was infeasible to remove late-responding households. As a result, enumerators needed to visit over 773,000 households that had already mailed back their questionnaires. . . .” The bureau commented that it made a conscious decision to conduct these visits based on logistical concerns and, as a result, the bureau believes that the terms “errors” and “needlessly” do not take this into consideration and are misleading. Because the bureau could not refresh its nonresponse follow-up address list to reflect households that responded after April 18, the bureau had no choice but to send enumerators to those households and collect the information in person. However, the term “needed to” better characterizes the bureau’s lack of options, and we revised the text accordingly.
We also deleted the term “errors.” In response to our finding that 52 local census offices did not conduct any reinterviews after an initial random check of enumerators’ work, the bureau commented that the initial random check was not a minimal activity in that it involved reinterviewing up to seven cases per enumerator. The bureau also noted that there were no operational requirements to conduct a specific number of administrative or supplemental reinterviews. We agree with the bureau’s comments. Indeed, the draft report already included information on the number of initial random reinterviews the bureau conducted and the discretionary nature of administrative and supplemental reinterviews. Nevertheless, it is also true, as we note in our report, that once those 52 local census offices completed the seven random reinterviews, there were no additional checks specifically designed to catch enumerators suspected of falsifying data. Moreover, we reported that nationwide, local census offices conducted far fewer administrative reinterviews than the bureau had expected. As we note in the report, whether this was due to the quality of enumerators’ work or local managers using their discretion and opting not to subject enumerators’ work to reinterviews is unknown. With respect to the bureau’s monitoring of local census offices’ productivity, the bureau noted that headquarters officials did not work directly with local census office staff as noted in the draft; rather, headquarters personnel worked with the bureau’s regional census centers, and they in turn worked with the local offices. We revised the text to reflect this information.
With respect to our observation that several local census offices had to quickly respond to unanticipated challenges, such as working with nonresponse follow-up address lists and maps that were not accurate or current, the bureau commented that there were standard procedures in the nonresponse follow-up enumerator manual on how to deal with map/register discrepancies. We verified this and revised the text accordingly. In describing the steps that local census officials took to encourage public participation in the census, we noted that census officials in Cleveland and Cincinnati said they provided additional training for enumerators on how to handle refusals. The bureau noted that standardized training was provided, across the nation, on options for handling refusals, and information was also provided in the nonresponse follow-up enumerator manual. We verified this information and added it to the report. The bureau commented that the address list and map difficulties that enumerators encountered were not nonresponse problems because, as we note in the report, and the bureau agrees, they should have been dealt with in earlier census operations. Nevertheless, the problems did not surface until nonresponse follow-up when enumerators encountered duplicate and nonexistent addresses, and were less productive as a result. For this reason, the report recommends that the bureau examine the methods it uses in activities that precede nonresponse follow-up to ensure the address lists and maps used for nonresponse follow-up are accurate and up-to-date. In response to our statement that nonresponse follow-up was to help verify changes to the address list from earlier address list development operations, the bureau commented that nonresponse follow-up was conducted to enumerate households from which it did not receive a completed questionnaire; map and address updates were incidental. 
We agree with the bureau on the primary purpose of nonresponse follow-up and revised the text to better reflect this point. However, the bureau’s program master plan for the master address file includes nonresponse follow-up as one of a number of address list development and maintenance operations, and the bureau expected enumerators to update maps and address registers as needed as part of their field visits. The bureau said it could not confirm data in our draft report on the number of vacant and deleted units identified during nonresponse follow-up and suggested removing this information. Although we obtained the data directly from the bureau, given the bureau’s concerns, we deleted the section. In commenting on the fact that we did not find a statistically significant relationship between the week that local census offices finished their follow-up workload and the amount of their residual workload, the bureau stated that the report needed to reflect the fact that residual nonresponse consisted of housing units for which completed questionnaires had not been processed through data capture. We revised the draft accordingly. The bureau noted that assistant managers for field operations, among other local census officials, could request supplemental reinterviews, and not just field operations supervisors as we stated in our report. We revised our draft to include this information. With respect to our findings concerning the reinterview program’s ability to detect problems, particularly at the end of nonresponse follow-up, the bureau commented that there was turnover in the enumerator workforce; consequently, with new hires, random reinterviews were conducted during all stages of the operation. As we note in the report, 52 local census offices (about 10 percent of all local offices) did not conduct any administrative and supplemental reinterviews.
Thus, once these offices completed the random reinterviews on the initial work of newly hired enumerators, there were no additional checks specifically designed to catch enumerators suspected of falsifying data. We added language to better clarify this point. The bureau said that it was uncertain as to the methodology and documentation used for deriving figures on the number of reinterviews the bureau conducted. We obtained the data from the bureau’s cost and progress system. The bureau stated that there was no evidence that data quality was compromised to motivate on-time completion of nonresponse follow-up. Our research suggests that the impact of the bureau’s incentives to motivate timeliness was less clear-cut given the fact that, as we note in our report, (1) about 40 percent of the local census office managers believed that scheduling pressures had a negative or significantly negative impact on the quality of nonresponse follow-up, and (2) a small number of local census offices took short-cuts to complete their work (which the bureau later took steps to rectify). Thus, while we agree with the bureau that maintaining data quality should be a given in determining motivational elements, the extent to which the bureau accomplished this goal for nonresponse follow-up appeared to have had mixed results. In commenting on our conclusion that it will be important for the bureau to ensure that reinterview procedures provide sufficient quality assurance through the full duration of nonresponse follow-up, the bureau noted that the reinterview operation must be designed to provide sufficient quality assurance coverage. We revised the text accordingly. We are sending copies of this report to the Honorable Dan Miller and the Honorable Carolyn B. Maloney, House of Representatives, and to other interested congressional committees; the Secretary of Commerce; and the Acting Director of the Bureau of the Census. Copies will be made available to others on request.
Major contributors to this report are included in appendix III. If you have any questions concerning this report, please call me on (202) 512-6806. In addition to those named above, the following headquarters staff made key contributions to this report: Wendy Ahmed; Tom Beall; James Fields; Rich Hung; Lily Kim; J. Christopher Mihm; Victoria E. Miller; Vicky L. Miller; Ty Mitchell; Anne Rhodes-Kline; Lynn Wasielewski; Susan Wallace. The following staff from the Western Regional Office also contributed to this report: James Bancroft; Robert Bresky; Arthur Davis; Julian Fogle; Araceli Hutsell; RoJeanne Liu; Elizabeth Dolan; Thomas Schulz; Nico Sloss; Cornelius Williams. The following staff from the Central Regional Office also contributed to this report: Richard Burrell; Michael De La Garza; Maria Durant; Donald Ficklin; Ron Haun; Arturo Holguin, Jr.; Reid Jones; Stefani Jonkman; Roger Kolar; Tom Laetz; Miquel Salas; Enemencio Sanchez; Jeremy Schupbach; Melvin Thomas; Richard Tsuhara; Theresa Wagner; Patrick Ward; Linda Kay Willard; Cleofas Zapata, Jr. The following staff from the Eastern Regional Office also contributed to this report: Cammillia Campbell; Lara Carreon; Betty Clark; Johnetta Gatlin-Brown; Marshall Hamlett; Carlean Jones; Janet Keller; Cameron Killough; Jean Lee; Christopher Miller; S. Monty Peters; Sharon Reid; Matthew Smith. 2000 Census: Coverage Evaluation Interviewing Overcame Challenges, but Further Research Needed (GAO-02-26, December 31, 2001). 2000 Census: Analysis of Fiscal Year 2000 Budget and Internal Control Weaknesses at the U.S. Census Bureau (GAO-02-30, December 28, 2001). 2000 Census: Significant Increase in Cost Per Housing Unit Compared to 1990 Census (GAO-02-31, December 11, 2001). 2000 Census: Better Productivity Data Needed for Future Planning and Budgeting (GAO-02-4, October 4, 2001). 2000 Census: Review of Partnership Program Highlights Best Practices for Future Operations (GAO-01-579, August 20, 2001). 
Decennial Censuses: Historical Data on Enumerator Productivity Are Limited (GAO-01-208R, January 5, 2001). 2000 Census: Information on Short- and Long-Form Response Rates (GAO/GGD-00-127R, June 7, 2000). Nonresponse follow-up--in which Census Bureau enumerators go door-to-door to count individuals who have not mailed back their questionnaires--was the most costly and labor intensive of all 2000 Census operations.
According to Bureau data, labor, mileage, and administrative costs totaled $1.4 billion, or 22 percent of the $6.5 billion allocated for the 2000 Census. Several practices were critical to the Bureau's timely completion of nonresponse follow-up. The Bureau (1) had an aggressive outreach and promotion campaign, a simplified questionnaire, and other efforts to boost the mail response rate and thus reduce the Bureau's nonresponse follow-up workload; (2) used a flexible human capital strategy that enabled it to meet its national recruiting and hiring goals and position enumerators where they were most needed; (3) called on local census offices to identify local enumeration challenges, such as locked apartment buildings and gated communities, and to develop action plans to address them; and (4) applied ambitious interim "stretch" goals that encouraged local census offices to finish 80 percent of their nonresponse follow-up workload within the first four weeks and be completely finished by the end of the eighth week, as opposed to the ten-week time frame specified in the Bureau's master schedule. Although these initiatives were key to meeting tight time frames for nonresponse follow-up, the Bureau's experience in implementing them highlights challenges for the next census in 2010. First, maintaining the response rate is becoming increasingly expensive. Second, public participation in the census remains problematic. Third, the address lists used for nonresponse follow-up did not always contain the latest available information because the Bureau found it was infeasible to remove many late-responding households. Fourth, the Bureau's stretch goals appeared to produce mixed results. Finally, there are questions about how reinterview procedures aimed at detecting enumerator fraud and other quality problems were implemented.
Section 287(g) of the INA, as amended, authorizes ICE to enter into written agreements under which state or local law enforcement agencies may perform, at their own expense and under the supervision of ICE officers, certain functions of an immigration officer in relation to the investigation, apprehension, or detention of aliens in the United States. The statute also provides that such an agreement is not required for state and local officers to communicate with ICE regarding the immigration status of an individual or otherwise to cooperate with ICE in the identification and removal of aliens not lawfully present in the United States. Thus, 287(g) agreements go beyond state and local officers’ existing ability to obtain immigration status information from ICE and to alert ICE to any removable aliens they identify. Under these agreements, state and local officers are to have direct access to ICE databases and act in the stead of ICE agents by processing aliens for removal. They are authorized to initiate removal proceedings by preparing a notice to appear in immigration court and transporting aliens to ICE-approved detention facilities for further proceedings. Section 287(g) and its legislative history do not detail the exact responsibilities to be carried out, the circumstances under which officers are to exercise 287(g) authority, or which removable aliens should be prioritized for removal, thus giving ICE the discretion to establish enforcement priorities for the program. The statute does, however, contain a number of detailed requirements or controls for the program. 
It requires that: a written agreement be developed to govern the delegation of immigration enforcement functions (e.g., MOA); ICE determine that any officer performing such a function is qualified to do so (e.g., background security check); the officer have knowledge of, and adhere to, federal law relating to immigration (e.g., training); officers performing immigration functions have received adequate training regarding enforcement of federal immigration laws (e.g., written certification of training provided upon passing examinations); any officer performing such a function be subject to the direction and supervision of ICE, with the supervising office to be specified in the written agreement; and specific powers and duties to be exercised or performed by state or local officers be set forth in the written agreement. Currently, the 287(g) program is the responsibility of ICE’s Office of State and Local Coordination (OSLC). The OSLC is responsible for providing information about ICE programs, initiatives, and authorities available to state and local law enforcement agencies. In August 2007, OSLC organized its various programs to partner with state and local law enforcement agencies as Agreements of Cooperation in Communities to Enhance Safety and Security (ACCESS). ACCESS offers state and local law enforcement agencies the opportunity to participate in 1 or more of 13 programs, including the Border Enforcement Security Task Forces, the Criminal Alien Program, and the 287(g) program. More detailed descriptions of the ACCESS programs appear in appendix IV. Under ACCESS, OSLC officials are to work with state and local applicants to help determine which assistance program would best meet their needs.
For example, before approving an applicant for 287(g) program participation, OSLC officials are to assess first whether ICE has the resources to support the applicant, such as available detention space and transportation assets, based on the approximate number of removable aliens that historical patterns indicate the applying law enforcement agency will apprehend per year. Based on an overall assessment of these and other factors, such as the type of agreement requested, availability of training, congressional interest, and proximity to other 287(g) programs, ICE may suggest that one or more of the other assistance programs under ACCESS would be more appropriate. Within the 287(g) program, ICE has developed three models for state and local law enforcement participation. One model, referred to as the “jail model,” allows correctional officers working in state prisons or local jails to screen those arrested for or convicted of crimes by accessing federal databases to ascertain a person’s immigration status. Another option, referred to as the “task force model,” allows law enforcement officers participating in criminal task forces, such as drug or gang task forces, to screen arrested individuals using federal databases to assess their immigration status. ICE has approved some local law enforcement agencies to concurrently implement both models, an arrangement referred to as the “joint model.” The 287(g) program has grown rapidly in recent years as more state and local communities seek to address criminal activity by those in the country illegally with specialized training and tools provided by ICE. From its initiation, 287(g) authority was viewed by members of Congress as an opportunity to provide ICE with more resources—in the form of state and local law enforcement officers—to assist ICE in the enforcement of immigration laws.
In 2005, the conference committee report for DHS’s appropriation encouraged ICE to be more proactive in encouraging state and local governments to participate in the program. Beginning in fiscal year 2006, DHS appropriations acts expressly provided funds for the 287(g) program, and accompanying committee reports provided guidance on program implementation. In fiscal year 2006, the DHS Appropriations Act provided $5.0 million to facilitate 287(g) agreements, and the accompanying conference report noted full support for the program, describing it as a powerful force multiplier to better enforce immigration laws and, consequently, to better secure the homeland. In fiscal year 2007, ICE received $5.4 million for the 287(g) program in its regular appropriation and allocated $10.1 million in supplemental funding towards the program. In fiscal year 2008, ICE received $39.7 million for the program, and has received $54.1 million for fiscal year 2009 to support the program. Accompanying committee reports have emphasized that ICE should perform close monitoring of compliance with 287(g) agreements, extensive training prior to delegation of limited immigration enforcement functions, direct supervision of delegated officers by ICE, and enrollment of correctional facilities in the program to identify more removable aliens. Participating state and local law enforcement agencies in the 287(g) program may apply for financial assistance to cover some costs associated with the program either directly from ICE or through grants provided by the Department of Justice (DOJ). For example, for agencies with contractual reimbursement agreements, ICE can reimburse law enforcement agencies for (1) detention of incarcerated aliens in local facilities who are awaiting processing by ICE upon completion of their sentences and (2) transportation of incarcerated aliens, upon completion of their sentences, from a jurisdiction’s facilities to a facility or location designated by ICE. 
In addition, state and local law enforcement agencies may apply for grants from the DOJ’s State Criminal Alien Assistance Program (SCAAP) for a portion of the costs of incarcerating certain removable aliens convicted of a felony or two or more misdemeanors. The 287(g) program lacks several management controls, which limits ICE’s ability to effectively manage the program. First, ICE has not documented the program’s objectives in program-related materials. Second, program-related documents, including the MOA, lack specificity as to how and under what circumstances participating agencies are to use 287(g) authority, or how ICE will supervise the activities of participating agencies. Third, ICE has not defined what program information should be tracked or ensured that program information is being consistently collected and communicated, which would help ensure that management directives are followed. And finally, ICE has not developed performance measures to assess the effectiveness of the 287(g) program and whether it is achieving its intended results. According to ICE senior program officials, the main objective of the 287(g) program is to enhance the safety and security of communities by addressing serious criminal activity such as violent crimes, human smuggling, gang/organized crime activity, sexual-related offenses, narcotics smuggling, and money laundering committed by removable aliens. However, program-related documents, including the MOAs and program case files for the initial 29 participating agencies, the 287(g) brochure, training materials provided to state and local officers, and a “frequently asked questions” document do not identify this as the objective of the 287(g) program. Internal controls also call for agencies to establish clear, consistent objectives. In addition, GPRA requires agencies to consult with stakeholders to clarify their missions and reach agreement on their goals.
Successful organizations we have studied in prior work involve stakeholders in program planning efforts, which can help create a basic understanding among the stakeholders of the competing demands that confront most agencies, the limited resources available to them, and how those demands and resources require careful and continuous balancing. The statute that established the 287(g) program and associated legislative history do not set enforcement priorities for the program, which leaves the responsibility to ICE. Therefore, ICE has the discretion to define the 287(g) program objectives in any manner that is reasonable. Although ICE has prioritized its immigration enforcement efforts to focus on serious criminal activity because of limited personnel and detention space, ICE officials told us they did not document the stated 287(g) program objectives as such because a situation could arise where detention space might be available to accommodate removable aliens arrested for minor offenses. We identified cases where participating agencies have used their 287(g) authority to process for removal aliens arrested for minor offenses. For example, of the 29 participating agencies we reviewed, 4 agencies told us they used 287(g) authorities to process for removal those aliens the officers stopped for minor violations such as speeding, carrying an open container of alcohol, and urinating in public. None of these crimes fall into the category of serious criminal activity that ICE officials described to us as the type of crime the 287(g) program is expected to pursue. Due to the rapid growth of the 287(g) program, an unmanageable number of aliens could be referred to ICE if all the participating agencies sought assistance to remove aliens for such minor offenses. Another potential consequence of not having documented program objectives is misuse of authority. 
The sheriff from a participating agency said that his understanding of the 287(g) authority was that 287(g)-trained officers could go to people’s homes and question individuals regarding their immigration status even if the individual is not suspected of criminal activity. Although it does not appear that any officers used the authority in this manner, it is illustrative of the lack of clarity regarding program objectives and the use of 287(g) authority by participating agencies. While agencies participating in the 287(g) program are not prohibited from seeking the assistance of ICE for aliens arrested for minor offenses, detention space is routinely very limited and it is important for ICE to use these and other 287(g) resources in a manner that will most effectively achieve the objective of the program—to process for removal those aliens who pose the greatest threat to public safety. According to ICE’s Office of Detention and Removal (DRO) strategic plan, until more alternative detention methods are available, it is important that their limited detention bed space is available for those aliens posing greater threats to the public. ICE’s former Assistant Secretary made this point in her congressional testimony in February 2008, stating that given the rapid growth of the program in the last 2 years, it is important to ensure that ICE’s bed space for the 287(g) program is used for the highest priority aliens. This may not be achieved if ICE does not document and communicate to participating agencies its program objective of focusing limited enforcement and detention resources on serious and/or violent offenders. ICE has not consistently articulated in program-related documents, such as MOAs, brochures and training materials, how participating agencies are to use their 287(g) authority, nor has it described the nature and extent of ICE supervision over these agencies’ implementation of the program. 
Internal control standards state that government programs should establish control activities to help ensure management’s directives are carried out. According to ICE officials, they use various controls to govern the 287(g) program, including conducting background checks on officers working for state and local law enforcement agencies that apply to participate in the 287(g) program, facilitating a training program with mandatory examinations to prepare law enforcement officers to carry out 287(g) program activities, and documenting agreements reached on program operations in the MOA. ICE has not consistently communicated, through its MOAs with participating agencies, how and under what circumstances 287(g) authority is to be used. Internal control standards state that government programs should establish control activities, including ensuring that significant events are authorized and executed only by persons acting within the scope of their authority. For the 287(g) program, ICE officials identified the MOA as a key control document signed by both ICE and participating agency officials. The MOA is designed to help ensure that management’s directives for the program are carried out by program participants. However, the MOAs we reviewed were not consistent with statements by ICE officials regarding the use of 287(g) authority. For example, according to ICE officials and other ICE documentation, 287(g) authority is to be used in connection with an arrest for a state offense; however, the signed agreement that lays out the 287(g) authority for participating agencies does not address when the authority is to be used. While all 29 MOAs we reviewed contained language that authorizes a state or local officer to interrogate any person believed to be an alien as to his right to be or remain in the United States, none of them mentioned that an arrest should precede use of 287(g) program authority. 
Furthermore, the processing of individuals for possible removal is to be in connection with a conviction of a state or federal felony offense. However, this circumstance is not mentioned in 7 of the 29 MOAs we reviewed, resulting in implementation guidance that is not consistent across the initial 29 participating agencies. Due to the rapid expansion of the 287(g) program in the last 2 years, it is important that ICE consistently communicate to participating agencies how this authority is to be used to help ensure that state and local law enforcement agents are not using their 287(g) authority in a manner not intended by ICE. ICE has also not defined in its program-related documents the responsibilities required of ICE agents directing and supervising local officers under the 287(g) program. Internal control standards state that a good internal control environment requires that an agency’s organizational structure define key areas of authority and responsibility. The statute that established the program specifically requires ICE to direct and supervise the activities of the state and local officers who participate in the 287(g) program. The statute and associated legislative history, however, do not define the terms of direction and supervision, which leaves the responsibility for defining them to ICE. Although ICE has the discretion to define these terms in any manner that it deems reasonable, it has not defined them in program documents. In our analysis of the 29 MOAs, we found little detail regarding the nature and extent of supervisory activities to be performed by ICE working with state and local law enforcement officers. For example, the MOAs state that participating officers will be supervised and directed by ICE regarding their immigration enforcement functions. 
The MOAs also state that participating officers cannot perform any immigration officer functions except when being supervised by ICE, and that those actions will be reviewed by ICE supervisory officers on an ongoing basis to ensure compliance and to determine if additional training is needed. The MOAs further state that the participating state or local agency retains supervisory responsibilities over all other aspects of the officers’ employment. However, details regarding the nature and extent of supervision, such as whether supervision is to be provided remotely or directly, the frequency of interaction, and whether reviews are conducted as written assessments or through oral feedback, are not described in the MOAs or in any documentation provided to us by ICE. In response to our inquiry, ICE officials did not provide a clear definition of the nature and extent of ICE supervision to be provided to participating agencies. These officials also cited a shortage of supervisory resources. The Assistant Director for the Office of State and Local Coordination, which manages the 287(g) program, said the ICE officer who supervises the activities of a participating agency’s officers is responsible for conducting general tasks, such as reviewing and providing oversight over the information added to immigration files; however, he also said the ICE official responsible for supervising the activities of a participating agency’s officers may not have a supervisory designation within ICE. He added that documentation of an ICE 287(g) supervisor’s responsibilities may be included in the position description of a Supervisory Detention and Deportation Officer. We examined seven position descriptions provided by ICE, including the description for this position. Some of the activities described in this position description address such issues as the level of supervision or direction and the setting of expectations for subordinates.
For example, the position description for a Supervisory Detention and Deportation Officer states that the officer establishes guidelines and performance expectations that are clearly communicated; observes workers’ performance and conducts work performance critiques; provides informal feedback; assigns work based on priorities or the capabilities of the employee; prepares schedules for completion of work; gives advice and instruction to employees; and identifies developmental and training needs, among other duties. However, because supervision activities specific to the 287(g) program (or, more generally, to state and local law enforcement officers carrying out immigration enforcement activities) were not contained in the description, it is unclear to what extent the supervisory activities enumerated in those position descriptions would apply to the supervision of state and local officers in the 287(g) program. Further, ICE officials in headquarters noted that the level of ICE supervision provided to participating agencies has varied due to a shortage of supervisory resources. The officials said it has been necessary in many instances for ICE to shift local resources or to utilize new supervisory officers to provide the required oversight and to manage the additional workload that has resulted from the 287(g) program. For example, agents from ICE’s Office of Investigations (OI) and DRO have been detailed to the 287(g) program to fulfill the requirement within section 287(g) of the INA, which mandates that ICE supervise officers performing functions under each 287(g) agreement. Officials explained that these detailees have been taken away from their permanent positions, which affects ICE’s ability to address other criminal activity. ICE officials noted that the small number of detailed agents does not have a significant impact on ICE’s overall ability to supervise the 287(g) program in the field.
In addition to the views of ICE officials in headquarters, we asked ICE field officials about 287(g) supervision. There was wide variation in their perceptions of what supervisory activities are to be performed. For example, one ICE official said ICE provides no direct supervision over the local law enforcement officers in the 287(g) program in their area of responsibility. Conversely, another ICE official characterized ICE supervisors as providing frontline support for the 287(g) program. ICE officials at two additional offices described their supervisory activities as overseeing training and ensuring the computer systems are working properly. Officials at another field office described their supervisory activities as reviewing files for completeness and accuracy. We also asked state and local officers about ICE supervision related to this program. Officials from 14 of the 23 agencies that had implemented the program gave positive responses when asked to evaluate ICE’s supervision of their 287(g)-trained officers. Another four law enforcement agencies characterized ICE’s supervision as fair, adequate, or provided on an as-needed basis. Three agencies said they did not receive direct ICE supervision or that supervision was not provided daily, which one agency felt was necessary to assist with the constant changes in requirements for processing of paperwork. Officials from two law enforcement agencies said ICE supervisors were either unresponsive or not available. One of these officials noted that it was difficult to establish a relationship with the relevant managers at the local ICE office because there was constant turnover in the ICE agents responsible for overseeing the 287(g) program. Given the rapid growth of the program and ICE’s limited supervisory resources, defining supervision activities would improve ICE’s ability to ensure management directives are carried out appropriately.
While ICE states in its MOAs that participating agencies are responsible for tracking and reporting data, the MOAs did not specify what data should be collected or how it should be collected and reported. For example, in 20 of the 29 MOAs we reviewed, ICE generally required participating agencies to track data, but the MOAs did not define what data should be tracked or how it should be collected and reported to ICE. Specifically, the reporting requirements section in 20 of the MOAs states: The LEA will be responsible for tracking and maintaining accurate data and statistical information for their 287(g) program, including any specific tracking data requested by ICE. Upon ICE’s request, such data and information shall be provided to ICE for comparison and verification with ICE’s own data and statistical information, as well as for ICE’s statistical reporting requirements and to help ICE assess the progress and success of the LEA’s 287(g) program. Furthermore, results of our structured interviews with 29 program participants indicated confusion regarding reporting requirements. For example, of the 20 law enforcement agencies we reviewed whose MOAs contained a reporting requirement: 7 agencies told us they had a reporting requirement and reported data to ICE; 3 agencies told us they had a requirement, but were not sure what specific data was to be reported; 3 agencies told us they were not required to report any data; 2 agencies told us that while ICE did not require them to report data, they submitted data to ICE on their activities anyway; and 5 agencies did not respond directly regarding a reporting requirement.
Of the nine program participants we interviewed without a reporting requirement in the MOA: 5 agencies told us they reported data to ICE; 2 agencies told us they were not required to report data to ICE, but did so anyway; 1 agency told us they do not report data to ICE; and 1 agency did not know if they were required to report data to ICE. According to internal control standards, pertinent information should be recorded and communicated to management and others within the entity that need it in a form and within a time frame that enables them to carry out internal control and other responsibilities. Consistent with these standards, agencies are to ensure that information relative to factors vital to a program meeting its goals is identified and regularly reported to management. For example, collecting information such as the type of crime for which an alien is detained could help ICE determine whether participating agencies are processing for removal those aliens who have committed serious crimes, as its objective states. Without clearly communicating to participating agencies guidance on what data is to be collected and how it should be gathered and reported, ICE management may not have the information it needs to ensure the program is achieving its objective. While ICE has defined the objective of the 287(g) program—to enhance the safety and security of communities by addressing serious criminal activity by removable aliens— the agency has not developed performance measures for the 287(g) program to track the progress toward attaining that objective. GPRA requires that agencies clearly define their missions, measure their performance against the goals they have set, and report on how well they are doing in attaining those goals. Measuring performance allows organizations to track the progress they are making toward their goals and gives managers critical information on which to base decisions for improving their programs. 
Our previous work has shown that agencies successful in evaluating performance had measures that demonstrated results, covered multiple priorities, provided useful information for decision making, and successfully addressed important and varied aspects of program performance. Internal controls also call for agencies to establish performance measures and indicators. ICE officials stated that they are in the process of developing performance measures, but they have not provided any documentation or a time frame for when they expect to complete the development of these measures. In accordance with standard practices for program and project management, specific desired outcomes or results should be conceptualized and defined in the planning process as part of a road map, along with the appropriate projects and milestones needed to achieve those results. ICE officials told us that, although they have not yet developed performance measures, in an effort to monitor how the program is being implemented, they are beginning to conduct compliance inspections, based on information provided in the MOA, in locations where the 287(g) program has been implemented. ICE’s Office of Professional Responsibility (OPR) was recently directed to conduct field inspections of all participating 287(g) program agencies. OPR officials state that the inspections are based on a checklist drawn from participating agencies’ MOAs, as well as interviews with state and local law enforcement agencies and the ICE officials responsible for overseeing these agencies. OPR’s checklists include items such as review of the arrest and prosecution history of undocumented criminals, relevant immigration files, and ICE’s Enforcement Case Tracking System (ENFORCE) entries, as well as review of any complaints directed toward ICE or state and local law enforcement officers by those detained pursuant to the 287(g) program. 
OPR officials use this checklist to confirm whether the items agreed to in the MOA have been carried out. As discussed earlier in this report, the 29 MOAs we reviewed did not contain certain internal controls to govern program implementation consistent with federal internal control standards. According to OPR officials, they have completed six compliance inspections and have a seventh inspection underway. In addition, OPR officials told us that they are planning to complete compliance inspections for the rest of the initial 29 program participants within the next 2 years. Although ICE has initiated compliance inspections for the 287(g) program, ICE officials stated that the compliance inspections do not include performance assessments of the program. ICE officials stated that developing performance measures for the program will be difficult because each state and local partnership agreement is unique, making it challenging to develop measures that would be applicable for all participating agencies. Nonetheless, these measures are important to provide ICE with a basis for determining whether the program is achieving its intended results. Without a plan for the development of performance measures, including milestones for their completion, ICE lacks a roadmap for how this project will be achieved. ICE and participating agencies used program resources mainly for personnel, training, and equipment. From fiscal years 2006 through 2008, ICE received approximately $60 million to provide 287(g) resources for 67 participating agencies nationwide, as follows:

Training. Once officers working for participating state and local law enforcement agencies pass a background investigation performed by ICE, they are required to attend a 4-week course and pass mandatory examinations to be certified. 
Training is focused on immigration and nationality law and includes modules on identifying fraudulent documents, understanding removal charges, cross-cultural communications, and alien processing (e.g., accessing federal databases). Of the 27 participating agencies that had received training at the time of our interviews, 20 said the training prepared them to perform their 287(g) activities; 4 of these agencies also reported that their participation in the program was delayed due to problems with scheduling training. ICE provided information reflecting an average training cost per student of $2,622 using the on-site training facility (the Federal Law Enforcement Training Center) and $4,840 using off-site facilities. These average costs include travel, lodging, books, meals, and miscellaneous expenses. As of October 2008, ICE had trained and certified 951 state or local officers in the 287(g) program.

Equipment. ICE is to provide the equipment necessary to link participating state and local law enforcement agencies with ICE to assist these agencies in performing their immigration enforcement activities. ICE estimates that, on average, for each participating agency it spends $37,000 for equipment set-up and installation and about $43,000 for equipment hardware. These costs include installation of a secure transmission line, which connects the participating agency to ICE databases; one or more workstations; one or more machines that capture and transmit fingerprints electronically; and personnel labor and support costs. In addition, ICE spends on average about $107,000 annually for recurring equipment operations and maintenance costs for each participating agency.

Supervision. ICE is to provide supervision to state and local law enforcement agencies participating in the 287(g) program. 
However, as mentioned earlier in this report, ICE has not identified what responsibilities are required of ICE agents directing and supervising local officers under the 287(g) program, and comments about program supervision from ICE officers at headquarters and in field offices, as well as from officers in participating agencies, differ widely. Therefore, we are unable to provide more detail about this 287(g) resource provided by ICE. In addition to the resources provided by ICE, state and local law enforcement agencies also provide resources to implement the 287(g) program. For example, state and local law enforcement agencies provide officers, space for equipment, and funding for any other expenses not specifically covered by ICE, such as office supplies and vehicles. Of the 29 state and local participating agencies we interviewed, 11 were able to provide estimates for some of their costs associated with participating in the 287(g) program; however, the data they provided were not consistent, so it was not feasible to total these costs. Those law enforcement agencies able to identify costs may be able to recover some of these expenses through an intergovernmental service agreement or through DOJ’s SCAAP grant process. When we asked participating state and local law enforcement agencies whether they received federal reimbursement from any source for costs associated with the 287(g) program (e.g., detention or transportation), 18 of the 29 reported that they did not. Six participating state and local agencies said they received SCAAP funding for some of these costs, and another five said they received federal reimbursements for some costs related to detention, transportation, and hospitalization. The rapid growth of the 287(g) program has presented resource challenges that ICE has begun to address. For example, 11 of the 29 participating agencies we contacted told us of equipment-related problems. 
Specifically, two of these agencies did not have equipment to carry out the 287(g) program until several months after their staff had received training on how to use it, raising concerns that refresher training would be needed; another agency received more equipment than it needed. ICE has worked with participating agencies to address the problems with program equipment distribution. ICE headquarters and field staff also told us that their resources to supervise the activities of program participants are being stretched to their maximum capacities to manage the increased growth of the program. To address these issues, ICE has detailed agents from OI and DRO to meet supervisory and other program requirements. ICE is also considering other ways to address the challenges presented by program growth. As discussed earlier in this report, the 287(g) program is 1 of 13 ICE programs that partner with state and local law enforcement agencies under ACCESS. ICE officials are working with state and local participants and applicants to help determine whether a different ACCESS program would better meet their needs; as a result, ICE has reduced the backlog of applications to the 287(g) program from approximately 80 applications to 29 as of October 2008. Both ICE and state and local law enforcement agencies participating in the 287(g) program have reported activities, benefits, and concerns associated with the program. As of October 2008, ICE reported that 67 state and local law enforcement agencies had enrolled in the 287(g) program and that about 25 state and local jurisdiction program applications were pending. In addition, ICE reported that 951 state and local officers received training in immigration law and enforcement functions and were certified to use 287(g) authority. 
ICE’s data show that, for 25 of the 29 participating agencies we reviewed, about 43,000 aliens were arrested under 287(g) program authority in fiscal year 2008, with arrests by individual participating agencies ranging from about 13,000 in one location to none in two locations. Of those 43,000 aliens arrested by program participants pursuant to 287(g) authority, ICE detained about 34,000; it placed about 14,000 (41 percent) of those detained in removal proceedings and arranged for about 15,000 (44 percent) to be voluntarily removed. The remaining 5,000 (15 percent) arrested aliens detained by ICE were either given a humanitarian release, sent to a federal or state prison to serve a sentence for a felony offense, or not taken into ICE custody given the minor nature of the underlying offense and the limited availability of detention space. State and local law enforcement agencies we interviewed reported specific benefits of the 287(g) program, including reducing crime and making the community safer, identifying and removing repeat offenders, improving the quality of life for the community, and giving law enforcement officers a sense of accomplishment related to immigration enforcement. On the other hand, more than half of the 29 state and local law enforcement agencies we interviewed reported concerns that some members of their communities expressed about the 287(g) program, including concerns that law enforcement officers in the 287(g) program would be deporting removable aliens because of minor traffic violations (e.g., speeding); fear and apprehension in the Hispanic community about possible deportation; and concerns that officers would perform increased enforcement of immigration laws at worksites and would engage in racial profiling. 
To help mitigate these fears and concerns, 27 of the 29 law enforcement agencies we reviewed reported that they had conducted outreach in their communities regarding the program (e.g., newspaper articles, press releases, TV and radio spots, speaking engagements, and public meetings). Removing aliens who have committed violent crimes is of great importance to the safety of the community at large. Through the 287(g) program and its partnerships with state and local agencies, ICE has an opportunity to identify and train additional law enforcement resources that could help it meet this challenge. However, the lack of internal controls governing the program limits ICE’s ability to take full advantage of this additional resource. For example, without documenting that the objective of the program is to remove aliens who have committed serious crimes or pose a threat to public safety, participating agencies may further burden limited detention resources by continuing to seek ICE assistance for aliens detained for minor crimes. According to ICE, it is important to ensure that their limited detention bed space is available for those aliens posing the greatest threat to the public. Moreover, without consistently communicating to participating agencies how and under what circumstances 287(g) authority is to be used, participating agencies may use this authority in a manner that is not intended by ICE. Additionally, given the rapid growth of the program, the lack of defined supervision activities could hamper ICE’s ability to ensure management directives are being carried out appropriately. Furthermore, without guidance for what data participating agencies are to collect and how this information is to be gathered and reported, ICE may not have the information it needs to help ensure participating agencies are adhering to program objectives. 
Finally, performance measures are important to provide ICE with a basis for determining whether the program is achieving its intended results. While it is encouraging that ICE is working to develop these measures, without establishing a plan, including a time frame for development, ICE lacks a roadmap for how it will achieve this goal. To help ensure that the ICE 287(g) program achieves the results intended, we are recommending that the Assistant Secretary for ICE take the following five actions:

Document the objective of the 287(g) program for participants;

Clarify how and under what circumstances 287(g) authority is to be used by state and local law enforcement officers in participating agencies;

Document in MOAs the nature and extent of supervisory activities ICE officers are expected to carry out as part of their responsibilities in overseeing the implementation of the 287(g) program, and communicate that information to both ICE officers and state and local participating agencies;

Specify the program information or data that each agency is expected to collect regarding its implementation of the 287(g) program and how this information is to be reported; and

Establish a plan, including a time frame, for the development of performance measures for the 287(g) program.

We provided a draft of this report to DHS for review and comment. DHS provided written comments on January 28, 2009, which are presented in appendix V. In commenting on the draft report, DHS stated that it agreed with our recommendations and identified actions planned or underway to implement the recommendations. ICE also provided us with technical comments, which we considered and incorporated in the report where appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. 
At that time, we will send copies to the Secretary of Homeland Security, the Secretary of State, the Attorney General, and other interested parties. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff has any questions about this report, please contact me at (202) 512-8777 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are acknowledged in appendix VI. 8 U.S.C. § 1357(g) (g) Performance of immigration officer functions by State officers and employees (1) Notwithstanding section 1342 of title 31, the Attorney General may enter into a written agreement with a State, or any political subdivision of a State, pursuant to which an officer or employee of the State or subdivision, who is determined by the Attorney General to be qualified to perform a function of an immigration officer in relation to the investigation, apprehension, or detention of aliens in the United States (including the transportation of such aliens across State lines to detention centers), may carry out such function at the expense of the State or political subdivision and to the extent consistent with State and local law. (2) An agreement under this subsection shall require that an officer or employee of a State or political subdivision of a State performing a function under the agreement shall have knowledge of, and adhere to, Federal law relating to the function, and shall contain a written certification that the officers or employees performing the function under the agreement have received adequate training regarding the enforcement of relevant Federal immigration laws. (3) In performing a function under this subsection, an officer or employee of a State or political subdivision of a State shall be subject to the direction and supervision of the Attorney General. 
(4) In performing a function under this subsection, an officer or employee of a State or political subdivision of a State may use Federal property or facilities, as provided in a written agreement between the Attorney General and the State or subdivision. (5) With respect to each officer or employee of a State or political subdivision who is authorized to perform a function under this subsection, the specific powers and duties that may be, or are required to be, exercised or performed by the individual, the duration of the authority of the individual, and the position of the agency of the Attorney General who is required to supervise and direct the individual, shall be set forth in a written agreement between the Attorney General and the State or political subdivision. (6) The Attorney General may not accept a service under this subsection if the service will be used to displace any Federal employee. (7) Except as provided in paragraph (8), an officer or employee of a State or political subdivision of a State performing functions under this subsection shall not be treated as a Federal employee for any purpose other than for purposes of chapter 81 of title 5 (relating to compensation for injury) and sections 2671 through 2680 of title 28 (relating to tort claims). (8) An officer or employee of a State or political subdivision of a State acting under color of authority under this subsection, or any agreement entered into under this subsection, shall be considered to be acting under color of Federal authority for purposes of determining the liability, and immunity from suit, of the officer or employee in a civil action brought under Federal or State law. (9) Nothing in this subsection shall be construed to require any State or political subdivision of a State to enter into an agreement with the Attorney General under this subsection. 
(10) Nothing in this subsection shall be construed to require an agreement under this subsection in order for any officer or employee of a State or political subdivision of a State — (A) to communicate with the Attorney General regarding the immigration status of any individual, including reporting knowledge that a particular alien is not lawfully present in the United States; or (B) otherwise to cooperate with the Attorney General in the identification, apprehension, detention, or removal of aliens not lawfully present in the United States. This report addresses (1) the extent to which ICE has designed controls to govern 287(g) program implementation and (2) how program resources are being used and the program activities, benefits, and concerns reported by participating agencies. To address our objectives, we contacted and obtained information from key people and organizations associated with the arrest, detention, and removal of aliens, and U.S. Immigration and Customs Enforcement’s (ICE) 287(g) program, including the following: ICE headquarters officials from the following offices: Office of Investigations, Office of the Principal Legal Advisor, Office of Detention and Removal, Office of the Chief Financial Officer/Budget Office, Office of State and Local Coordination, and Office of Professional Responsibility. ICE officials from ICE Field Offices in Phoenix, Arizona, and in the California offices of Los Angeles, Santa Ana, Riverside, and San Bernardino in conjunction with our site visits to state and local law enforcement agencies in these areas. Officials from all 29 state and local law enforcement agencies that had entered into agreements with ICE as of September 1, 2007, listed below. Six of these agencies reported that they had not yet begun implementing the program. Our analysis includes information from these six agencies as appropriate. We conducted structured interviews with officials from these organizations from October 2007 through February 2008. 
By interviewing officials from all participating agencies, we were able to obtain information and perspectives both from participating agencies that had been involved in the program for the longest period of time and from those agencies that had just started participating, to learn how law enforcement agencies implement the program. The state and local law enforcement agencies that had entered into agreements with ICE as of September 1, 2007, were: Alabama Department of Public Safety; Arizona Department of Corrections; Arizona Department of Public Safety; Maricopa County Sheriff’s Office (Arizona); Los Angeles County Sheriff’s Office (California); Orange County Sheriff’s Office (California); Riverside County Sheriff’s Office (California); San Bernardino County Sheriff’s Office (California); Colorado Department of Public Safety/State Patrol; El Paso County Sheriff’s Office (Colorado); Collier County Sheriff’s Office (Florida); Florida Department of Law Enforcement; Cobb County Sheriff’s Office (Georgia); Georgia Department of Public Safety; Barnstable County Sheriff’s Office (Massachusetts); Framingham Police Department (Massachusetts); Massachusetts Department of Corrections; Alamance County Sheriff’s Office (North Carolina); Cabarrus County Sheriff’s Office (North Carolina); Gaston County Sheriff’s Office (North Carolina); Mecklenburg County Sheriff’s Office (North Carolina); Hudson Police Department (New Hampshire); New Mexico Department of Corrections; Tulsa County Sheriff’s Office (Oklahoma); Davidson County Sheriff’s Office (Tennessee); Herndon Police Department (Virginia); Prince William-Manassas Adult Detention Center (Virginia); Rockingham County Sheriff’s Office (Virginia); and Shenandoah County Sheriff’s Office (Virginia). We also conducted site visits with nine state and local law enforcement agencies that entered into an agreement with ICE as of September 1, 2007, and had begun implementing the program. 
These sites were selected to represent variation in length of partnership with ICE, type of model (e.g., jail, task force, or joint), geographic location, size of jurisdiction, and proximity to an ICE Special-Agent-in-Charge or regional office. The offices from which we interviewed officials about their participation in the 287(g) program include the Rockingham County Sheriff’s Office, Shenandoah County Sheriff’s Office, Los Angeles County Sheriff’s Office, Orange County Sheriff’s Office, San Bernardino County Sheriff’s Office, Riverside County Sheriff’s Office, Arizona Department of Corrections, Maricopa County Sheriff’s Office (including the Enforcement Support, Human Smuggling Unit), and Arizona Department of Public Safety (including the Gang Enforcement Bureau and the Criminal Investigations Division). Although we are not able to generalize the information gathered from these visits to all other participating law enforcement agencies, they provided us with a variety of examples related to program implementation. To determine the 287(g) program’s objectives and the extent to which ICE has designed controls to govern implementation, we collected and analyzed information regarding the program’s objective and obtained information from both ICE and the participating law enforcement agencies we interviewed and visited to determine if ICE objectives for the program were clearly articulated to law enforcement agencies. We reviewed available program-related documents, including program case files for the initial 29 participating agencies, the 287(g) brochure, training materials provided to state and local officers to become certified in the program, and a “frequently asked questions” document on the program. In addition, we analyzed the MOAs of each state and local agency participating in the 287(g) program as of September 1, 2007. 
Specifically, we examined sections of the MOAs related to program authority, designation of enforcement functions, and ICE supervision responsibilities, among other areas of these written agreements. We completed a content analysis of responses to structured interviews that were conducted with key officials from each of the participating law enforcement agencies in this review and of information gathered from site visits. Our content analysis consisted of reviewing the responses to the structured interview questions and identifying and grouping responses by theme or characterization. These themes were then coded and tallied. For some questions, participating agencies gave multiple responses or characterizations; therefore, responses are not always mutually exclusive. Selection of themes and coding of responses were conducted separately by two analysts; any discrepancies were resolved. We also compared the controls ICE told us it designed to govern implementation of the 287(g) program (including conducting background checks, providing formal training with qualifying exams for the applicants’ officers, and entering into MOAs with state and local agencies) with criteria in GAO’s Standards for Internal Control in the Federal Government, the Government Performance and Results Act (GPRA), and standard practices for program management. To corroborate the information we received from the law enforcement agencies through both the structured interviews and site visits, we interviewed officials from ICE both at headquarters and in the field, examined documentation on guidance given to both ICE and state and local participants about the implementation of the program, and reviewed all 29 case files created and maintained by ICE on program participants. We identified the purposes for which ICE relies on data collected from law enforcement agencies and how data reliability checks are performed for data collection associated with the 287(g) program. 
We interviewed ICE officials and participating law enforcement agencies to determine what guidance ICE has provided to law enforcement agencies on how data are collected, stored, and reported to ICE. We interviewed officials and examined documentation from ICE to determine the measures established to monitor performance and the improvements made to the program. We reviewed reports that use data from ICE’s Enforcement Case Tracking System (ENFORCE) database, which automates the processes associated with the identification, apprehension, and deportation of removable aliens. During our review, we learned that some data regarding the 287(g) program may not have been included in ENFORCE; therefore, we are unsure of the completeness of the information relevant to this program in this database. We used these data to a limited extent in our discussion of the second objective, related to the activities, benefits, and concerns of the 287(g) program. The data were used for illustrative purposes only and not to draw conclusions about the program. To determine what resources ICE and participating law enforcement agencies provide to the program, including the equipment and training for program participants and the assignment of ICE supervisory staff for this program, we examined ICE’s budget for the 287(g) program, including how ICE calculates the funding requirements for each additional agreement. We also interviewed officials from the participating law enforcement agencies and analyzed information collected from these agencies to determine what resources they reported using to implement the program and the activities, benefits, and concerns they reported associated with the program. In addition, we examined budget and appropriations documentation from the program’s inception to the fiscal year 2009 budget request for the 287(g) program. We collected and analyzed information on the activities reported by ICE stemming from the program. 
Through our structured interviews, we gathered and analyzed the participating state and local agencies’ views on the activities, benefits, and concerns related to the program. We did not conduct a fiscal examination of the cost of detention facilities, nor did we review the budgetary effect on law enforcement agencies implementing the 287(g) program. In addition to the contact named above, Bill Crocker, Assistant Director, and Lori Kmetz, Analyst-in-Charge, managed this assignment. Susanna Kuebler, Carolyn Garvey, and Orlando Copeland made significant contributions to the work. Michele Fejfar assisted with design, methodology, and data analysis. Katherine Davis, Linda Miller, Adam Vogt, and Peter Anderson provided assistance in report preparation, and Frances Cook provided legal support. 
ICE has designed some management controls to govern 287(g) program implementation, such as MOAs and background checks of state and local officers, but the program lacks other controls, which makes it difficult for ICE to ensure that the program is operating as intended. First, the program lacks documented program objectives to help ensure that participants work toward a consistent purpose. ICE officials stated that the objective of the program is to address serious crime, such as narcotics smuggling, committed by removable aliens; however, ICE has not documented this objective in program materials. As a result, of the 29 program participants reviewed by GAO, 4 used 287(g) authority to process individuals for minor crimes, such as speeding, contrary to the objective of the program. Second, ICE has not described the nature and extent of its supervision over participating agencies' implementation of the program, which has led to wide variation in the perception of the nature and extent of supervisory responsibility among ICE field officials and officials from the participating agencies. ICE is statutorily required to supervise agencies participating in the 287(g) program, and internal control standards require an agency's organizational structure to clearly define key areas of authority and responsibility. Defining the nature and extent of the agency's supervision over this large and growing program would strengthen ICE's assurance that management's directives are being carried out. Finally, while ICE states in its MOAs that participating agencies are responsible for tracking and reporting data to ICE, in 20 of the 29 MOAs GAO reviewed, ICE did not define what data should be tracked or how they should be collected and reported. Communicating to participating agencies what data are to be collected and how they should be gathered and reported would help ensure that ICE management has the information needed to determine whether the program is achieving its objective.
ICE and program participants use resources for personnel, training, and equipment, and participants report activities, benefits, and concerns regarding the program. In fiscal years 2006 through 2008, ICE received about $60 million to train, supervise, and equip program participants. As of October 2008, ICE reported enrolling 67 agencies and training 951 state and local law enforcement officers. According to data provided by ICE for 25 of the 29 program participants reviewed by GAO, during fiscal year 2008, about 43,000 aliens had been arrested pursuant to the program, and of those, ICE detained about 34,000. About 41 percent of those detained were placed in removal proceedings, and an additional 44 percent agreed to be voluntarily removed. The remaining 15 percent of those detained by ICE were given a humanitarian release, sent to federal or state prison, or released due to the minor nature of their crime and federal detention space limitations. Program participants report a reduction in crime, the removal of repeat offenders, and other public safety benefits. However, over half of the 29 agencies GAO contacted reported concerns from community members that use of program authority would lead to racial profiling and intimidation by law enforcement officials.